
European Union passes landmark EU AI Act for responsible innovation

Fri 15 Mar 2024

On Wednesday, the European Union’s Parliament approved the EU Artificial Intelligence (AI) Act to ensure the safety and compliance of the technology whilst enhancing innovation.

The regulation was endorsed by Members of the European Parliament (MEPs) with 523 votes in favour, 46 against, and 49 abstentions. 

“We finally have the world’s first binding law on AI, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected,” said Brando Benifei, Internal Market Committee Co-Rapporteur and MEP.

The regulation establishes obligations for AI based on its potential risks and level of impact. Biometric categorisation systems based on sensitive characteristics are banned under the EU AI Act, alongside untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

Under the new regulations, emotion recognition in the workplace and schools, social scoring, predictive policing based on profiling or assessing characteristics, and AI manipulation exploiting vulnerabilities are also forbidden.

“The adoption of the AI Act marks the beginning of a new AI era and its importance cannot be overstated, leaving every other region, including the UK to play catch up,” said Enza Iannopollo, Principal Analyst at global market research company, Forrester.

Law enforcement’s use of remote biometric identification (RBI) systems is prohibited in principle, except in ‘exhaustively listed and narrowly defined situations’. Real-time RBI can be deployed only if strict safeguards are met. The EU Parliament said an example of accepted use is where RBI is limited in time and geographic scope and subject to prior judicial or administrative authorisation.

Obligations for High-risk Systems

Clear obligations are expected for other high-risk AI systems due to the potential for them to cause harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. 

The European Parliament said examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services like healthcare and banking, law enforcement systems, migration and border management, and justice and democratic processes.

The European Parliament emphasised AI systems must assess and mitigate risks, maintain transparent and accurate logs, and ensure human oversight. Citizens will have the right to file complaints regarding AI systems and receive explanations for decisions impacting their rights made by high-risk AI systems.

“The EU’s framework to mitigate AI risks, coupled with robust business policies to further protect themselves and users, will allow organisations to have greater agility to react to market trends and better serve customers, all while maintaining a high level of trust,” said Sridhar Iyengar, Managing Director at software company, Zoho Europe.

Mark Molyneux, EMEA Chief Technology Officer at AI-powered data security and management company, Cohesity, said low-risk applications of AI ‘will see a lighter touch’, but the larger practical uses of the technology will face detailed compliance requirements. 

“Much of which will need to be in place before companies start to innovate with AI which means they are likely in breach of the Act already and will need to draw back on development to get their house in order,” said Molyneux. 

Dr Kjell Carlsson, Head of AI Strategy at data science and AI platform, Domino Data Lab, concurred with this sentiment, stating that with the passing of the EU AI Act, the ‘scariest’ thing about AI is the regulation itself.

“Between the astronomical fines, sweeping scope, and unclear definitions, every organisation operating in the EU now runs a potentially lethal risk in their AI, ML, and analytics-driven activities,” said Carlsson.

Carlsson emphasised that leveraging these technologies is imperative for all organisations to remain competitive. Companies must improve their responsible AI capacities by establishing robust processes and platforms for effectively governing, validating, monitoring, and auditing the AI lifecycle at scale.

Transparency Requirements in the EU AI Act

Under the EU AI Act, general-purpose AI (GPAI) systems must meet transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. This also applies to the GPAI models on which those systems are based.

Neil Thacker, Chief Information Security Officer EMEA at cybersecurity company, Netskope, welcomed the legislation’s explicit requirements for detailed summaries of training data. Thacker stressed that, for the average non-specialist business, prioritising compliance with GPAI systems should be the initial focus.

“Informed decision-making is crucial to implementing AI that is ethical and meets the requirements of the new law. Knowing and documenting the use of both machine learning and AI systems within an organisation is a simple way to understand and anticipate vulnerabilities to business-critical data while ensuring responsible use of AI,” said Thacker.

The European Parliament specified high-powered GPAI models, which may pose systemic risks, will be subject to extra obligations. These include model evaluations, systemic risk assessments, incident reporting, and clear labelling of artificial or manipulated media content.

Natalia Fritzen, AI Policy and Compliance Specialist at identity verification platform, Sumsub, said the EU AI Act is promising, but doubt looms over its ability to ‘effectively tackle increasingly popular AI-powered deepfakes’.

“Worryingly, recent data reveals that deepfakes grew tenfold in 2023 compared with the previous year. As the threat continues, we are not convinced upcoming measures will sufficiently safeguard businesses and the wider public, as well as electoral integrity,” said Fritzen.

Fritzen added that whilst watermarks have been the most recommended remedy against deepfakes, several concerns have been raised around their effectiveness, such as technical implementation, accuracy, and robustness.

For these to be effective, Fritzen said the European Commission must set certain watermark standardisation requirements.

Measures to support innovation and SMEs

As part of the EU AI Act, the European Parliament said regulatory sandboxes and real-world testing environments should be created nationally. These environments should be accessible to small and medium-sized enterprises (SMEs) and start-ups. 

These sandboxes and testing environments provide a controlled space where AI technologies can be developed, trained, and tested before they are released onto the market. 

John Kirk, Deputy CEO at end-to-end marketing company, ITG, said the rapid adoption of AI will bring ‘seismic changes’ to business operations, impacting both jobs and the wider digital economy. Kirk highlighted AI must be deployed correctly to power growth and transform industries.

“Having a broad legal framework in place to ensure high standards of governance is a logical next step forward, to ensure that organisations make the most of AI, whilst adhering to the necessary regulatory rules,” said Kirk.

The CEO of Digital Poverty Alliance, Elizabeth Anderson, said the rise of AI is changing many sectors, offering new possibilities to make tasks easier and more efficient. 

“However, this progress also risks increasing the digital divide due to the ever-widening skills gap, which is already a profound problem in the UK. This divide is likely to grow as AI becomes more common, affecting people differently based on their income, education, where they live, and age,” said Anderson.

To address the problem, Anderson suggested prioritising policies to tackle digital exclusion. This will require collaborative efforts from governments, businesses, educators, and community groups to ensure AI advancements benefit everyone, ‘especially those at risk of being left behind’.

Next Steps for the EU AI Act

The EU AI Act is undergoing a final check by lawyer-linguists and is expected to be formally adopted before the end of the legislative session, following the corrigendum procedure. The Act also requires formal endorsement from the Council.

Civil Liberties Committee co-rapporteur and MEP, Dragos Tudorache, said AI will push the EU to rethink the social contract at the heart of democracies, education models, labour markets, and how warfare is conducted.

“The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice,” said Tudorache.

Once published in the Official Journal, the EU AI Act will take effect 20 days later. The Act will become fully applicable 24 months after its entry into force.

Certain aspects of the Act will be phased in at varying intervals, including bans on prohibited practices after six months, codes of practice after nine months, general-purpose AI rules including governance after 12 months, and obligations for high-risk systems after 36 months.
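The phase-in schedule above is all relative date arithmetic: entry into force falls 20 days after publication in the Official Journal, and each milestone follows a set number of months after that. As a rough illustration, the following Python sketch computes those milestones from a hypothetical publication date (the date used here is an assumption for demonstration, not a figure from the Act):

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Shift a date forward by whole calendar months, clamping the
    # day to the last valid day of the target month if necessary.
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

def phase_in_schedule(publication: date) -> dict[str, date]:
    # Entry into force is 20 days after publication in the Official Journal;
    # the remaining milestones are counted from entry into force.
    entry = publication + timedelta(days=20)
    return {
        "entry_into_force": entry,
        "prohibited_practices_ban": add_months(entry, 6),
        "codes_of_practice": add_months(entry, 9),
        "gpai_rules": add_months(entry, 12),
        "fully_applicable": add_months(entry, 24),
        "high_risk_obligations": add_months(entry, 36),
    }

# Hypothetical publication date, for illustration only.
schedule = phase_in_schedule(date(2024, 7, 12))
for milestone, when in schedule.items():
    print(f"{milestone}: {when.isoformat()}")
```

Swapping in the actual Official Journal publication date, once known, yields the binding deadlines.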

In December, EU policymakers provisionally agreed on regulations for the AI Act, aiming to ensure the safe deployment of the technology.


