
Use of AI in the public sector requires greater transparency and diversity, report suggests

Tue 11 Feb 2020


The Committee on Standards in Public Life says greater transparency in the use of AI is needed to ensure public trust

Rules around the use of artificial intelligence (AI) in the public sector remain a “work in progress” with “notable deficiencies”, according to a new report.

The Committee on Standards in Public Life (CSPL) said greater transparency is needed around when AI is used, as is work to prevent the impact of data bias.

Artificial intelligence and machine learning are seen as key technologies going forward, and could be used to help with decision-making in areas such as healthcare, policing, education and social care.

The report said a new AI regulator was not necessary for the UK, but that existing data watchdogs and regulators must quickly adapt to the challenges and pace of innovation in the sector.

It warned that the public also needed greater reassurance on how AI will be used.

CSPL chairman Lord Evans said: “Artificial intelligence, and in particular, machine learning, will transform the way public sector organisations make decisions and deliver public services.

“Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector.

“Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.

“Explanations for decisions made by machine learning are important for public accountability.

“Explainable AI is a realistic and attainable goal for the public sector – so long as public sector organisations and private companies prioritise public standards when they are designing and building AI systems.”

The committee’s report recommends the Government and regulators establish a set of ethical principles about the use of AI and make its guidance easier to use.

Lord Evans also warned of “serious concern” around data bias – the idea that if data used in AI algorithms is not fully representative of all its subjects, the resulting calculations can be biased.

Among its recommendations to the government, the CSPL report called for guidance to be developed for public bodies on how to improve diversity and therefore avoid data bias.

The recommendations also include asking public sector organisations to publish statements on how their use of AI complies with relevant laws before they begin to use the technology publicly. The report further recommends establishing a regulatory assurance body, creating procurement rules and processes to ensure AI solutions meet public standards, setting up a digital marketplace to help public bodies find AI products, and carrying out regular impact assessments of AI use.

“Our message to government is that the UK’s regulatory and governance framework for AI in the public sector remains a work in progress and deficiencies are notable,” Lord Evans said.

“The work of the Office for AI, the Alan Turing Institute, the Centre for Data Ethics and Innovation (CDEI), and the Information Commissioner’s Office (ICO) are all commendable. But on transparency and data bias in particular, there is an urgent need for practical guidance and enforceable regulation.

“This new technology is a fast-moving field, so government and regulators will need to act swiftly to keep up with the pace of innovation.

“By ensuring that AI is subject to appropriate safeguards and regulations, the public can have confidence that new technologies will be used in a way that upholds the seven principles of public life as the public sector transitions into a new AI-enabled age.”


