An international lawyer sets out his vision of AI regulation
Thu 29 Nov 2018 | Jacob Turner - Robot Rules © 2019 Palgrave Macmillan
In recent years there has been a deluge of articles describing, often in apocalyptic terms, the imminent and fundamental changes artificial intelligence will bring to the social and political fabric.
Our roads will soon be brimming with autonomous or semi-autonomous cars, weaving in seamless formation according to the whims of an algorithm digesting data in real time; repetitive tasks will no longer be the preserve of blood-pumping Homo sapiens but of intelligent robots fed with code, bits and bytes, potentially leading to economic displacement on a dramatic scale.
Although a concern of technologists for many years, in 2018 AI regulation had its moment in the spotlight. An international consensus has emerged: some form of regulation is required to deal with the revolutionary and widespread changes resulting from AI.
Just last week the UK government launched the Centre for Data Ethics and Innovation, which it says is the ‘world’s first’ centre aimed at safeguarding the public from unsafe and unethical AI applications. The Oxford University Future of Humanity Institute launched “v1” of its AI governance research agenda, and the European Commission appointed 52 experts to a new expert group on Artificial Intelligence to develop an “AI strategy” for the continent.
Nonetheless, much of this discussion is precisely that: “framing the debate” or “shaping the conversation”. We are unlikely to see the formulation of national, regional or multilateral regulation for some time.
In a timely and thought-provoking new book, barrister Jacob Turner offers his contribution to the debate: a 400-page scholarly survey of what AI regulation might entail, and how it might align with existing legal principles. For its part, the foreword to Robot Rules pointedly reminds us of the need for debate and discussion on the road to regulation.
“Virtually any discussion of the likely effect of a significant prospective development is to be welcomed, as it plays an essential part in the vital exercise of encouraging us to think about and prepare for that development when it comes to pass,” writes Lord Neuberger, the former President of the UK Supreme Court.
Industry has indisputably made the largest strides towards regulation, with many companies adopting policies governing their use of AI. Industry-led initiatives in recent years include the Partnership on AI, an organisation originally formed by Amazon, IBM and Microsoft but which now encompasses many other parties, including Chinese tech giant Baidu, as well as NGOs and international organisations.
One problem with this mushrooming of industry-led AI policies, says Jacob, is that these companies are operating in a vacuum.
“Right now we are in the Wild West when it comes to AI regulation. Many companies are doing their best to ensure that AI is safe and ethical, but in the absence of guidance it can be difficult for them to know what is appropriate.
This is especially difficult where other companies may want to avoid any limitation on their activities and profits, gaining a competitive advantage (at least in the short term) by sidestepping regulation.”
Jacob is also sceptical of industry-led regulation, noting that the principles firms propose are often deliberately vague, affording them beneficial leeway. Such policies also lack an effective police force to ensure their enforcement.
AI is already proving lucrative for FAANG (Facebook, Apple, Amazon, Netflix and Google), innovative startups, as well as legacy firms exploring the tech in the name of ‘digital transformation’. But for a balance to be struck between economic growth and public benefit, firms must be free to pursue profits while complying with rules and regulations.
For Jacob, we currently only have the profit-making side of that equation.
“The language used by the tobacco companies in the 1950s sounds a lot like the language which companies are now using when they talk about self-regulation and appointing AI expert advisory panels,” he says.
The effects of AI will partly depend on the industry in which it is deployed, but there are also more general concerns – most notoriously bias and discrimination – that are not unique to any particular industry.
Jacob advocates a combination of industry-centric and overarching regulatory principles to ensure no externality is overlooked.
But given that many predict AI will evolve from a “narrow” performer of single tasks to a more “general” intelligence capable of performing a range of tasks, it makes sense to prioritise overarching principles, especially considering that the biggest developments in AI have a habit of taking experts by surprise.
“Clearly [AI] is a long way from the versatility of a human brain, but all the same it may make little sense to have a completely separate set of regulations for navigating cars and navigating drones when ultimately the two tasks might be performed by a single AI system.”
Nevertheless, Jacob argues it is vital for stakeholders to take the opportunity to develop rules that are consistent across industries, to avoid a fragmented, Balkanised system of contradictory rules which he says will ultimately lead to greater costs and reduced consumer welfare.
“It will also lead to wasteful disputes over “edge cases” where it is unclear which regulatory basket a particular product or system falls into, and companies then need to spend time and money on regulatory compliance and lawyers,” he says.
Threat to innovation
There is an inherent reluctance among those at the forefront of AI to err on the side of caution when it comes to regulation – a stance that collides with the EU bloc’s “precautionary principle”. Many in Silicon Valley view regulations as inconvenient rules, set out by ignorant ministers, that ultimately stifle beneficial innovation.
In Robot Rules Jacob advocates government-led policies developed in consultation with AI’s industry torchbearers, to avoid a fate like the EU’s rejection of GM foods.
GM foods – which use fewer fertilisers and less water – were widely rejected in Europe, even though the scientific evidence that they pose any danger is negligible.
“It is absolutely essential that regulators work closely with the developers of AI and other experts in order to ensure that the proposed rules are not just ethically sound but also technically achievable.
Rule-makers should try to make sure they are well-informed about how the technology works, just as they would hopefully do when seeking to set rules for any other complicated industry, such as nuclear power or chemical production.”
It is not just FAANG that should be consulted when forming regulation. As AI will impact all industries – driving, medicine, healthcare and law, to name a few – governments should look cross-industry, consulting not just with larger companies but with firms of all sizes.
“Large-scale consultations are important, as is speaking to industry bodies whose members include SMEs, such as the CBI in the UK and other chambers of industry and commerce worldwide.
A further source for consultations should be standard-setting organisations such as the British Standards Institution, the International Organization for Standardization, and more specialist bodies like the Institute of Electrical and Electronics Engineers.
Each has independently been developing some very advanced standards for AI in the past two years or more, and it will be important to ensure that there is collaboration between the different bodies to ensure maximum global coordination.”
Jacob is also quick to emphasise that the level playing field created by regulation is beneficial for business.
“Clear and consistent standards promote certainty and allow entrepreneurs and investors to understand what they can and cannot do.
Good quality regulation and innovation are not enemies. Instead, they can be mutually supportive. The key to supporting regulation and innovation is to make sure that rules are set in a flexible and responsive manner.”
Jacob points to the UK Financial Conduct Authority’s Regulatory Sandbox as an exemplar of technology governance. The sandbox initiative allows technology firms to test products and systems in a closed and safe environment, and to see how they interact with FCA rules and the activities of other market players. The FCA can also simulate potential new rules to examine how these would affect market functioning.
“Large-scale virtual reality simulators are becoming increasingly powerful at modelling the effect of small changes on a complex “real world” system.
The FCA’s evaluation of the Sandbox found that it has been particularly helpful for small companies which might otherwise struggle with navigating the regulatory environment when bringing new products to market.”
Complicating multilateral cooperation are the strong economic and geopolitical incentives for states to pursue innovation. Indeed, many political analysts phrase the problem in terms of a “winner takes all” dynamic.
I ask Jacob how international bodies can feasibly hope to enforce putative regulation in the presence of these strong incentives.
One of the great lessons of the 20th century, he says, is that states actually have strong economic and political incentives to participate in international regimes. Those who take the lead in regulation will have the biggest say in how it plays out. Conversely, resisting regulatory impulses from the international core can lead to undesirable alienation on the world stage. The geopolitical and economic incentive might therefore be to cooperate and develop standards for AI, rather than to act unilaterally.
“It’s sometimes said that ‘if you’re not at the table, you’re on the menu’, and this applies as much to forming rules for AI as it does to any other field.”
One country making an early bid for AI dominance is China. Despite perceptions in the West that China is opposed to international law, Jacob says it is under no illusions about the enormous power that can accrue to states at the heart of regulatory efforts.
“In a 2017 Government paper, and another in January 2018, China signalled that its long-term aim was to achieve dominance in AI not just in terms of developing the technology but also in terms of writing the rules which govern it.
Just look at the US, which dominated economically in the second half of the 20th century – in part because of its central role in setting the international monetary rules following the Bretton Woods conference.
I think China sees a similar opportunity to help set the rules for AI in the 21st century, particularly at a time when the US seems to be taking a step back from multilateralism.”
This all raises the question: which regional or international body should formulate the regulation? There are no straightforward answers, given how internationally distributed the AI industry is.
Jacob has no strong feelings about which specific body should formulate regulations for AI, saying it might come from existing power structures, such as the UN, or from an entirely new body created specifically for the purpose and established by international treaty, as in the case of the Paris climate agreement.
“What I can say for certain is that whichever body forms the regulations should be international and should consult widely – not just by listening to international elites. Otherwise there is a danger that even a safe and beneficial new technology may be rejected if the public are not sufficiently involved in making the rules.”