
EU reaches landmark agreement for AI regulation in AI Act

Tue 12 Dec 2023

EU policymakers have provisionally agreed on regulations for the AI Act, aiming to ensure the safe deployment of artificial intelligence (AI).

The regulations aim to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI. They also seek to promote innovation and position Europe as a leader in the field.

President of the European Commission, Ursula von der Leyen, said that when used wisely and widely, AI promises huge benefits to the economy and society.

“By guaranteeing the safety and fundamental rights of people and businesses, the Act will support the human-centric, transparent and responsible development, deployment, and take-up of AI in the EU,” said von der Leyen.

The European Parliament will vote on the AI Act proposals early next year, but any legislation will not take effect until at least 2025.

Reuters reported that Fritz-Ulli Pieper, a specialist in IT law at Taylor Wessing, said the final text’s details remain uncertain.

“No one knows how the final wording will look like and if or how you can really push current agreement in a final law text. The devil will be in the detail of the final text,” said Pieper.

The European Parliament will discuss details that may modify the final legislation in the upcoming weeks. The rules are expected to take effect early next year and be applicable in 2026.

Vice President of the Information Technology and Innovation Foundation, Daniel Castro, said it is still too soon to know exactly what new rules may be necessary.

“EU policymakers should re-read the tale of the tortoise and the hare. Acting quickly may give the illusion of progress, but it does not guarantee success,” said Castro.

Head of the European office of the Computer and Communications Industry Association, Daniel Friedlaender, concurred.

“Speed seems to have prevailed over quality, with potentially disastrous consequences for the European economy. The negative impact could be felt far beyond the AI sector alone,” said Friedlaender.

Meanwhile, companies have been encouraged to join a voluntary AI Pact. The pact will convene AI developers from Europe and worldwide to implement key obligations of the AI Act ahead of legal deadlines.

The Pact will encourage companies to communicate the processes and practices they are putting in place. This is in preparation for compliance and to ensure that the design, development, and use of AI are trustworthy.

Key Terms of the EU AI Act

The new rules will be applied uniformly across all member states of the EU and are based on a future-proof definition of AI.

The European Parliament has defined AI as software that, for a given set of human-defined objectives, can generate outputs such as content, predictions, recommendations, or decisions that influence the environments it interacts with.

The proposals include safeguards on the use of AI within the EU, as well as limitations on its adoption by law enforcement agencies.

Obligations for high-risk systems

For AI systems deemed high-risk, clear obligations have been established. High-risk applications have the potential to harm health, safety, fundamental rights, the environment, democracy, and the rule of law.

Members of the European Parliament (MEPs) included a mandatory fundamental rights impact assessment, which also applies to the insurance and banking sectors.

AI systems influencing elections and voter behaviour also fall under the high-risk category. Citizens are granted the right to launch complaints and receive explanations about decisions impacting their rights made by high-risk AI systems.

To accommodate diverse AI tasks and rapid capabilities expansion, general-purpose AI systems and their models must meet transparency requirements. This includes technical documentation, compliance with EU copyright law, and dissemination of detailed training content summaries.

“During the negotiations, we were particularly committed to ensuring that AI systems are transparent, comprehensible and verifiable,” said Steffi Lemke, German Minister for Environment, Nature Conservation, Nuclear Safety and Consumer Protection.

Negotiators secured more stringent obligations for high-impact general-purpose AI models with systemic risk. These obligations include requirements for model evaluations, systemic risk assessments and mitigation, adversarial testing, reporting serious incidents to the Commission, ensuring cybersecurity, and reporting on energy efficiency.

“Ensuring a balanced AI legislative framework that promotes responsible technology and protects citizens’ rights is of the utmost importance,” said Matteo Quattrocchi, Director of EMEA Policy at The Software Alliance.

Until coordinated EU standards are set, general-purpose AI models with systemic risk may adhere to codes of practice for regulatory compliance.

Banned Applications

Co-legislators agreed to ban several applications that pose a threat to citizens’ rights and democracy.

These include biometric categorisation systems that use sensitive characteristics such as political, religious, and philosophical beliefs, sexual orientation, and race.

The prohibition extends to the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

Emotion recognition in the workplace and educational institutions is also banned, along with social scoring based on social behaviour or personal characteristics.

The new rules also ban AI systems designed to control human behaviour in a way that bypasses or undermines individuals’ free will.

The use of AI to exploit the vulnerabilities of people, whether due to age, disability, or social and economic situations, is also banned.

Law Enforcement Exemptions

Negotiators have established exceptions for deploying biometric identification systems in publicly accessible spaces for law enforcement. These exceptions require prior judicial authorisation and are limited to specific lists of crimes.

“Despite an uphill battle over several days of negotiations, it was not possible to achieve a complete ban on real-time biometric identification against the massive headwind from the member states,” said Svenja Hahn, German MEP and Shadow Rapporteur for the European AI Act, on behalf of the liberal Renew Europe group.

Reuters reported that Hahn said MEPs had wanted to restrict biometric surveillance as far as possible; only the German Government called for an outright ban.

Real-time remote biometric identification systems in public spaces by law enforcement are restricted to identifying kidnapping and human trafficking victims, as well as preventing immediate terrorist threats.

They are also permitted for efforts to track down individuals suspected of crimes such as terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in criminal organisations, and environmental offences.

“After a huge battle with the EU countries, we have restricted the use of these types of systems. In a free and democratic society, you should be able to walk on the street without the Government constantly following you on the street, at festivals or in football stadiums,” said Kim van Sparrentak, a Dutch MEP who worked closely on the draft AI rules.

Measures to Support Innovation and Businesses

MEPs aimed to ensure that small and medium-sized enterprises can create AI solutions without facing pressure from dominant industry players who control the value chain.

The agreement supported the implementation of regulatory sandboxes and real-world testing overseen by national authorities. These initiatives provide environments for developing and training innovative AI technologies before they are introduced to the market.

Sanctions for Violations of the AI Act

Companies found to be non-compliant with the rules will be fined, with penalties scaled to the severity of the infringement and the size of the company. Fines begin at £6.3 million ($8 million) or 1.5% of the company’s global annual turnover, and can rise to a maximum of £29 million ($37.5 million).

The agreement reached by the European Parliament comes a month after the UK published the first global guidelines to ensure the safe development of AI technology.

Agencies from 18 countries have confirmed they will endorse and co-seal the Guidelines for Secure AI System Development to enhance AI cybersecurity.

In the same month, France, Germany, and Italy reached an agreement on the formulation of AI regulation.

The three governments favour binding commitments, entered into voluntarily, for both large and small AI providers within the European Union. This self-regulation would be enforced through adherence to predefined codes of conduct.
