
EU adopts Artificial Intelligence Act

STRASBOURG, France – The European Parliament recently approved the Artificial Intelligence Act, which the governing body said ensures safety and compliance with fundamental rights, while boosting innovation for artificial intelligence (AI).

The regulation was endorsed by MEPs with 523 votes in favor, 46 against and 49 abstentions. “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” the Internal Market Committee co-rapporteur Brando Benifei said. “Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very center of AI’s development”.

EU lawmakers said the regulation aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and helping to establish Europe as a leader in the field. The regulation sets obligations for AI systems based on their potential risks and level of impact.

The regulation also bans several AI applications outright. The new rules prohibit applications that threaten citizens’ rights, including biometric categorization systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. According to the EU, emotion recognition in the workplace and schools, social scoring, predictive policing (when based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities will also be forbidden.

The parliament did, however, outline several law enforcement exemptions to the banned applications. The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, it said, except in “exhaustively” listed and narrowly defined situations. “Real-time” RBI may only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorization.

“Such uses may include, for example, a targeted search of a missing person or preventing a terrorist attack. Using such systems post-facto ('post-remote RBI') is considered a high-risk use case, requiring judicial authorization being linked to a criminal offense," the parliament stated in its release.

The rules also note obligations for high-risk systems such as those used to control critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Those systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must also meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting incidents. Artificial or manipulated images, audio or video content (“deepfakes”) must be clearly labeled as such.

The European Parliament also said that regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.
