Should AI algorithms be regulated?
Fri 14 May 2021 | Nicole Cappella
Can governments keep up with the fast-changing world of AI, or should managing algorithms be the responsibility of the companies that use them?
Recently, both the EU and the U.S. have issued proposals and guidelines on the use of artificial intelligence (AI), seeking to limit the harm that a badly-conceived or poorly-managed algorithm can cause.
AI touches so many different parts of people’s lives: from Amazon purchasing recommendations and Facebook feeds; to healthcare, manufacturing, and military industries; to credit card fraud prevention and facial recognition programs. The concern is that the use of AI is spreading without oversight, or recourse for members of the public who come to harm as a result.
Not all companies are opposed to such oversight. Just last year, Google CEO Sundar Pichai wrote an article for the Financial Times stating that Google actually welcomed input from regulators in ensuring that technology was used appropriately, rather than letting the market dictate use. He said, “Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent upon us to make sure that technology is harnessed for good and available to everyone.”
While Per Overgaard, executive director of Lenovo Data Centre Group EMEA, agrees that governmental regulation is important to AI use, he stressed the need for governments to have technical advisors to help them understand the complex technology involved.
As he said, “governments have to understand the potential of AI, and how could they possibly do that without someone by their sides with insights about where the business is going?”
Not only must governments manage the rapid evolution in AI technology and use cases, they must also consider how these regulations affect global partners and markets. At this time, governments walk a fine line between protecting the public and unnecessarily inhibiting business and competition, both domestically and internationally.
Recent guidelines issued by the U.S. Federal Trade Commission (FTC) are concerned with the potential for bias that is introduced inadvertently into algorithms, either at the algorithm-building stage or through the data used to train the algorithm.
In the healthcare arena, AI has been useful in resource allocation models to help medical care providers cope with increased service demand due to COVID-19. However, a study found that the data used to train the algorithm reflected existing racial bias – which in practice could actually worsen the gap between supply and demand for healthcare resources among people of color.
The FTC strategy is to make companies accountable for managing AI under existing trade regulations. This means extending the ban on unfair or deceptive practices to cover biased algorithms, as well as extending fair-credit and equal-credit protections to ensure that AI does not play a part in denying marginalized groups access to housing, credit, or insurance.
Rather than attempting to create a catch-all like the FTC, proposed EU regulations put the focus on ‘high-risk industries’. High-risk industries are those that have been identified as ones in which AI can affect a person’s safety or fundamental human rights, such as criminal justice or law enforcement. The EU would also place limits on how AI can be used for self-driving cars, loan applications, and schools, while forbidding some uses like facial recognition altogether (with some exceptions for the military).
How to proceed?
The FTC offered broad instructions on how businesses should manage their AI algorithms to prevent harmful outcomes. These include making sure that marginalized groups are not excluded from data sets, ongoing testing for bias in practice, and independent review of results.
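What a data-set assessment along those lines might look like in practice can be sketched briefly. The example below is a hypothetical illustration, not anything the FTC prescribes: it assumes each training record carries a demographic group label and that reference population shares are known, and the function name `representation_gaps` is invented for this sketch.

```python
# Hypothetical sketch of a training-set representation check.
# Assumes labelled records and known reference population shares.
from collections import Counter

def representation_gaps(group_labels, population_shares):
    """Compare each group's share of the data set with its population share.

    Returns {group: data_share - population_share}; large negative values
    flag groups that are under-represented in the training data.
    """
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - share
            for g, share in population_shares.items()}

# Toy data set: 70 records from group A, 25 from B, only 5 from C.
labels = ["A"] * 70 + ["B"] * 25 + ["C"] * 5
gaps = representation_gaps(labels, {"A": 0.6, "B": 0.3, "C": 0.1})
print(gaps)  # group C falls five percentage points short of its population share
```

A real assessment would of course go further, looking at intersections of attributes and at how representation interacts with the outcomes the model predicts, but even a simple share comparison like this can surface an excluded group before training begins.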
The EU regulations take this one step further, requiring that assessments and reviews be made available to regulators, who will help to ensure that AI is being used properly. Moreover, if the proposed regulations become policy, companies that violate them could be fined up to six percent of total global turnover. This makes the stakes even higher than the precedent-setting fines of up to four percent of global turnover that can be levied for violating the GDPR.
Margrethe Vestager, executive vice president of digital policy for the European Commission, outlined the reasons for an EU policy regulating AI. “On artificial intelligence, trust is a must, not a nice-to-have. With these landmark rules, the E.U. is spearheading the development of new global norms to make sure A.I. can be trusted.”
With all of the regulatory activity taking place in the EU and the U.S., it appears that it is no longer a question of whether governments should become more involved in regulating AI, but when. What will that regulation look like, how will it be enforced, and what can companies do now to prepare for a regulated future?
The answer appears to be in following the three main trends that are common across different regulatory requirements. These are:
1. Conduct assessments of the data sets that are used to train an AI algorithm, to ensure that no groups are excluded or poorly represented. Additionally, algorithms should be assessed for impact, ensuring that the results generated by AI algorithms do not have a disparate impact on a single group.
2. Prepare a process for independent review of AI at different stages of the process: when it is created, tested, and deployed. This probably means designating a third-party partner or agency to review AI on behalf of the business.
3. Test algorithms over time, to ensure that no new issues arise while an algorithm is in use. AI systems use data as it is generated to improve toward their purpose – meaning that a gap between intent and result can grow more apparent the more an algorithm is used. Continuous testing will help to ensure that these issues are noted and resolved before they make a larger impact (or generate an enormous fine).
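The ongoing disparate-impact testing described above can be sketched in a few lines. The example below is a hypothetical illustration under simple assumptions: a deployed model whose binary decisions are logged along with the group membership of each subject, and the widely cited ‘four-fifths rule’ as a rough red-flag threshold. The function names are invented for this sketch and do not come from any particular regulation or library.

```python
# Hypothetical sketch of an ongoing disparate-impact check on logged decisions.
from collections import defaultdict

def approval_rates(records):
    """Compute per-group approval rates from (group, approved) records."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.

    Values below roughly 0.8 are a common red flag (the 'four-fifths rule'),
    prompting a closer review of the algorithm and its training data.
    """
    return min(rates.values()) / max(rates.values())

# Toy decision log: (group label, approved?) pairs collected in production.
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]
rates = approval_rates(log)
ratio = disparate_impact_ratio(rates)
print(rates, ratio)  # group B is approved at a third of group A's rate
```

Running a check like this on a schedule, rather than once before launch, is what catches the slow drift the article describes: each new batch of logged decisions updates the rates, and a falling ratio flags the problem before it compounds.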