UN Security Council considers ‘double-edged’ sword of AI in ‘historic’ first meeting
Written by Rebecca Uffindell Wed 19 Jul 2023
The UN Security Council held its first meeting on artificial intelligence (AI) this week. Co-chair of the ‘historic’ meeting, UK Foreign Secretary James Cleverly, said the technology will ‘fundamentally alter every aspect of human life’.
Speaking to the 15-member Council in New York, Cleverly said AI could bring momentous opportunities, but he also raised concerns over ‘hugely harmful consequences for democracy and stability’.
Potential for AI within the UN
Responsibly applying AI has the potential to address climate change, reshape industries, revolutionise education, reduce violent conflict, beat corruption, boost global economies, and help the UN deliver on its Sustainable Development Goals.
“Ground-breaking discoveries in medicine may be just around the corner. The productivity boost to our economies may be vast,” said Cleverly.
US Ambassador Jeffrey DeLaurentis said AI offers incredible promise in addressing global challenges, citing automated systems used to grow food and predict storm paths.
Ghana’s Ambassador Harold Adlai Agyeman stressed the potential AI could offer the UN, as a global organisation for peace, in mediation and negotiation efforts.
“The opportunity lies in developing and applying that technology to identify early warning signs of conflict and to define responses that have a higher rate of success,” said Agyeman.
UN addresses AI fears
Despite these opportunities, the UN Security Council met to discuss the risks AI poses to its work.
“We urgently need to shape the global governance of transformative technologies because AI knows no borders,” said Cleverly.
The UK Foreign Secretary identified a number of threats posed by AI, including disrupting global strategic stability, challenging assumptions about defence and deterrence, and posing moral questions about accountability for lethal decisions in combat.
“AI could aid the reckless quest for weapons of mass destruction by state and non-state actors alike. But it could also help us stop proliferation,” added Cleverly.
The UN Security Council also considered how AI can heighten cyberthreats and help threat actors in instigating cyberattacks.
UN Secretary-General António Guterres said: “Both military and non-military applications of AI could have very serious consequences for global peace and security.”
With the introduction of AI in military operations, the quest for disarmament could also become further out of reach.
“The robotisation of conflict is a great challenge for our disarmament efforts and an existential challenge that this Council ignores at its peril,” said Ecuadorian Ambassador, Hernán Pérez Loose.
In AI we trust?
The UN Security Council largely agreed that regulations must be developed quickly to keep pace with AI’s growing popularity and rapid application.
Describing the technology as a ‘double-edged sword’, China’s UN Ambassador Zhang Jun called for a focus on people and on AI for good, to regulate its development and stop the technology from becoming a ‘runaway horse’.
Keen interest was shown by the Council to address existing challenges whilst creating avenues to track and respond to future dangers. Accountability for AI technology, access and responsible human control were mentioned as fundamentals to its development.
In response, Guterres proposed the creation of a global standard and called on members to create a new UN body to support collaborative efforts in governing AI.
These sentiments were echoed by Lindy Cameron, the CEO of the UK’s National Cyber Security Centre, in an interview with the BBC.
“The scale and complexity of these models is such that if we don’t apply the right basic principles as they are being developed in the early stages it will be much more difficult to retrofit security,” said Cameron.
The UK’s AI vision
The UK’s own vision for AI is founded on four key principles. Cleverly said AI should be open, responsible, secure and resilient.
“AI should support freedom and democracy; be consistent with the rule of law and human rights; be safe and predictable by design; and be trusted by the public,” said Cleverly.
The UK Foreign Secretary stressed that the challenges that AI poses must be grasped in order to unlock its potential benefits.
“Let us work together to ensure peace and security as we pass across the threshold of an unfamiliar world,” concluded Cleverly.
To this end, the UK has developed its pro-innovation White Paper on AI Regulation, and the UK Research and Innovation body has announced a £50 million package to develop trustworthy and secure AI. Backed by an initial £100 million ($130 million) in funding, the UK also launched an expert Foundation Model Taskforce last month, headed by Ian Hogarth, to drive the development of safe and reliable AI models.
“AI can help grow our economy and deliver better public services, and working with our global partners will ensure the right guardrails are in place for its safe and responsible development,” said Chloe Smith, the Secretary of State for Science, Innovation, and Technology.
Last month, Prime Minister Rishi Sunak also announced that the UK will host the first major global summit on AI safety. The summit is set to examine the risks of AI and how they can be mitigated through globally coordinated action.
As a leader in AI, ranking third in the world across several metrics, the UK Government is confident that it is well placed to convene discussions on the future of the technology.
Perspectives from the business world
Derek Mackenzie, CEO at Investigo, said: “AI will have a seismic impact on the global economy, transforming traditional job roles and creating fresh opportunities, but the scale of this change must be properly managed. International collaboration is key to this effort, and top of the agenda should be how countries can work together to develop AI skills within existing workforces and future generations so that a robust talent pipeline is in place to adopt and make the most of this technology, responsibly.”
Responding to the news, Chris Downie, CEO at Pasabi, said: “AI is already being hijacked by cybercriminals to fuel online fraud, and fake reviews, at a cost of potentially billions to the global economy. It is vital that policymakers and industry leaders collaborate and recognise the scale of the problem and put measures in place to clamp down hard on these threats.”
Sjuul van der Leeuw, CEO of Deployteq, said: “With AI set to transform the global economy beyond all recognition, having a global taskforce in place to manage the implications, challenges and risks of this change is a necessary measure. From the public sector to the creative industries, AI has the potential to turbocharge organisations for the better, but the technology should also be managed responsibly, and workers need to be given the right training and support to adapt in a rapidly changing world.”