
UK publishes first global guidelines for AI security

Published Mon 27 Nov 2023

The UK has published the first global guidelines to ensure the safe development of artificial intelligence (AI) technology.

Agencies from 18 countries have confirmed they will endorse and co-seal the Guidelines for Secure AI System Development to enhance AI cybersecurity.

The Guidelines cover four areas of the AI system lifecycle, with suggested behaviours for enhanced security in each: secure design, secure development, secure deployment, and secure operation.

The non-binding agreement includes general recommendations, such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers.

The Guidelines will help AI system developers make informed cybersecurity decisions at every stage, whether they are building systems from scratch or on top of existing tools and services.

The Guidelines follow a ‘secure by design’ approach by making cybersecurity a precondition for AI system safety.

“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up.

“These Guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout,” said Lindy Cameron, CEO at the National Cyber Security Centre (NCSC).

In June, Cameron emphasised the importance of integrating security into developing AI systems and cautioned against retrofitting in the future. Retrofitting is the addition of new technology or features to older systems.

The Guidelines do not tackle questions surrounding the appropriate use of AI or how the data fed into models is gathered.

The Guidelines were developed by the NCSC, a part of GCHQ, and the US's Cybersecurity and Infrastructure Security Agency (CISA) in collaboration with industry experts. They also involved 21 international agencies and ministries, including members of the G7 group of nations and countries from the Global South.

“We are at an inflection point in the development of AI, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy,” said Alejandro Mayorkas, Secretary of Homeland Security for the United States.

The signatories to the Guidelines were Australia, Canada, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, the Republic of Korea, Singapore, the United Kingdom of Great Britain and Northern Ireland, and the United States of America.

“The UK is an international standard bearer on the safe use of AI. The NCSC’s publication of these new Guidelines will put cybersecurity at the heart of AI development at every stage so protecting against risk is considered throughout,” said Michelle Donelan, Science and Technology Secretary for the UK.

The announcement of the new global Guidelines followed France, Germany, and Italy’s agreement on AI regulation.

These three governments favour mandatory self-regulation for both large and small AI providers within the European Union, enforced through adherence to predefined codes of conduct.
