
New EU AI law in force - what's changing?

AI regulations are unified across the EU for the first time. However, fully implementing this law in all member states will take some time.

AI applications are used in critical infrastructure as well as by private individuals.


The EU's Artificial Intelligence (AI) Act has come into force. Member states now have two years to implement its provisions into national law. The act aims to regulate AI more strictly and uniformly within the European Union, better protecting fundamental rights, democracy, and the rule of law in the face of this technology. What the act means:

What is Artificial Intelligence anyway?

Artificial Intelligence typically refers to applications based on machine learning, where software sifts through large datasets to find patterns and draw conclusions. This can mimic human abilities like logical thinking, learning, planning, and creativity, enabling machines to perceive and react to their environment.

AI is already used in many areas. For instance, such programs can analyze computed tomography scans faster and more accurately than humans. Self-driving cars try to predict the behavior of other road users. And chatbots or automatic playlists from streaming services also work with AI.

Why does the EU need such a law?

The law aims to make the use of AI in the European Union safer. It seeks to ensure that AI systems are as transparent, traceable, non-discriminatory, and environmentally friendly as possible. A key aspect is that AI systems are supervised by humans, not just other technologies.

What rules does the law contain?

The regulations categorize AI applications into different risk groups. Systems deemed particularly high-risk, such as those used in critical infrastructures or in education and healthcare, must meet strict requirements. Applications with lower risk face fewer obligations.

AI applications that violate EU values are also prohibited. This includes evaluating social behavior ("Social Scoring"), as done in China to categorize citizens based on their behavior.

What does this mean for consumers?

The law aims to better protect consumers from risky AI applications. Facial recognition in public spaces, for example via video surveillance, is generally not allowed. Emotion recognition in the workplace and in educational institutions is also banned in the EU.

Moreover, AI applications must be more transparently labeled. Consumers can then more easily identify where Artificial Intelligence is used. Private individuals who discover violations of the regulations can file complaints with national authorities.

What exactly changes from August 1st?

Initially, not much. The AI Act will be phased in gradually. Some provisions must be implemented promptly by member states, such as the ban on AI systems posing "unacceptable risks". These are systems deemed a threat to humans, and the ban on them will apply after six months.

A code of conduct for providers of AI models should be finalized by April next year, as announced by the EU Commission before the act came into force.

After two years, most points of the law must be fully implemented. However, high-risk systems will have more time to meet the requirements, with their obligations applying after three years.

What happens if someone does not follow the rules?

Violations carry severe penalties: the use of prohibited technology, for instance, can result in fines of up to €35 million or, for companies, up to 7% of their global annual turnover of the preceding fiscal year. However, the exact penalty amount within this range must be determined by the individual countries, as the Commission announced.

For other legal infringements, fines of up to €15 million or, for companies, up to 3% of their global annual turnover of the preceding fiscal year may apply.

Is there criticism of the law?

Experts have repeatedly debated whether the law will give AI a boost or instead hinder its development. Ultimately, this likely depends on the respective national implementation. Green Party Bundestag member Tobias Bacherle called for regulation of AI-supported biometric surveillance in Germany, warning that such surveillance could easily be misused to undermine civil liberties if it falls into the wrong hands.

Federal Digital Minister Volker Wissing believes some of the EU law's provisions go too far. "I would have wished for a more innovation-friendly regulation," the FDP politician told the German Press Agency. "But in the end, a compromise is better than no regulation at all." Now, Germany aims for a "bureaucracy-light" implementation.

Critics had earlier warned that many provisions could quickly become outdated given the rapid technological development of AI applications and the gradual phase-in of the regulations. The Commission announced that it will conduct an annual review to determine whether the list of "high-risk" applications needs to be revised or expanded.

