
CDO Interview: Do AI unto others as you would have AI done unto you

Wed 29 Jan 2020 | Juergen Rahmel

HSBC’s Dr Juergen Rahmel believes we need an ethical framework for the coming era of AI – what might this look like?

Like many new technologies, Artificial Intelligence (AI) brings us countless potential benefits while introducing a whole raft of new dangers. Dr Juergen Rahmel, Chief Digital Officer at HSBC Germany and an AI researcher, argues that a coherent approach to ethics is needed in this fast-expanding field. “It comes down to the decision: of all that we could do, what is it that we should do?”

At root, implementing ethical governance of AI is no different to businesses promising not to pollute rivers or use child labour. It is about “conducting business not only in an individually profitable manner, but also following a holistic approach which serves the collective.” Rahmel, who will be speaking at Big Data & AI World in London this March, focuses on ethical governance for AI at HSBC. What has he learnt?

An ethical Wild West

While AI and machine learning now appear almost ubiquitous, until relatively recently they were pursuits carried out mainly in academia. As a growing list of companies begins using AI-powered tools to crunch their data, a number of ethical problems have emerged. “Naturally businesses operate with the intention to maximize their profits, which might lead to morally debatable results,” points out Rahmel.

Widely cited examples include applications that reinforce existing social injustices in policing, housing and job applications. Rahmel explains that “not every application of AI is accompanied by a good framework of ‘acceptable use’”. While AI is still relatively new in the business world, it is catching on rapidly, and widely available AI tools are putting the technology within reach of businesses of all stripes. Until now, however, there has been relatively little clear guidance on the ethical limits and governance of AI.

Why do we need ethical governance for AI?

Notions of ethics and AI may at first appear to be more within the remit of philosophers than corporations. But for Rahmel, there is a pressing business need: “I strongly believe that the full potential of AI can only be realised within a framework that supports trust and operates in a scope that benefits each stakeholder.” If companies fail to use AI in an ethical way, customers will refuse to hand data over to them and begin seeking alternatives – as can be seen in the backlash against the world’s tech giants in recent years.

Join Juergen at Big Data & AI World, 11-12 March, ExCeL London

Ethical governance of AI
11 March 2020, 10:15 – 10:55
Big Data & AI Keynote

Rahmel, who says HSBC has already developed its own principles of ethical AI governance, explains what such a framework looks like. “Key pillars are firstly to develop proper AI systems that can be tested and verified to do what we expect them to do. Secondly, to develop this framework of ethics and moral acceptance of the functions such an AI system could execute. Thirdly, to define the influence that regional differences of mentality and jurisdiction will have on the judgement of acceptability.”

Creating such a framework isn’t exactly straightforward. “By definition I think of artificial intelligence partially as systems that we are not able to understand 100%,” Rahmel explains. This situation is often referred to as the ‘black box’ problem: the people who build an AI system understand its inputs and outputs, yet cannot readily explain how it reached its conclusions. Rahmel argues that “we need to develop a framework for verifying systems by their results, impacts and interactions.”

You might, for instance, build an AI system that helps detect fraudulent claims. If you don’t know how the system reaches its conclusions, however, it could be relying on morally questionable judgements. For Rahmel, all of these factors need to be laid out in an open and transparent way.

A universal ethics?

One of the most common criticisms of AI is that it is being created by a minuscule portion of the human population, yet is being applied to the digital lives and behaviours of people the world over. Can an AI trained by European or North American data scientists truly correspond with the needs of communities elsewhere in the world? By the same token, can ethical governance of AI be developed in a way that is acceptable to people from different cultural traditions?

Take Germany, where Rahmel is from. The country has perhaps the world’s strongest laws around personal data privacy, a notion that is tied closely to its history. In other places, notions of acceptable privacy differ wildly; who is to say that AI governance ethics developed in one place will correspond neatly with ethics elsewhere? Rahmel acknowledges this, saying that the issues of there being “different viewpoints of ethics will remain and need to be included into the plans for development and rollout of capabilities across the globe.”

He describes one potential way of overcoming this issue which would involve “thinking about Ethics for AI in a similar way as thinking about a country’s constitution.” Just as most nations have developed their own constitutions where the basic rights and duties of citizens are outlined, he suggests countries could develop a similar constitution for the ways AI can be applied within those nations.

“We know what basic topics to expect to read in a nation’s constitution, and the differences between those documents around the globe,” he explains. Fundamentally, every constitution makes “transparent statements on locally different views and approaches” about what is and isn’t acceptable behaviour. Creating something similar for AI would involve protecting the wishes and desires of different countries and the extent to which they’re comfortable with AI invading their lives.

AI as a means to an end

AI has enormous potential for advancing human happiness, comfort and welfare. However, it also introduces enormous risks, be that invading privacy, making society less fair or even threatening lives. As the technology becomes increasingly ubiquitous, it is good to see global businesses like HSBC leading the discussion on what kind of AI we want.

Experts featured:

Juergen Rahmel

Chief Digital Officer
HSBC Germany

Tags:

AI, Big Data World, CDO, Ethics, HSBC