How can we overcome bias in algorithms?
Mon 21 Mar 2022 | Finbarr Toesland
The consumer of today encounters artificial intelligence (AI) technologies many times a day, often without even realising it. The driver of their food delivery service uses AI-enabled route planning, deeply targeted adverts appear every time they browse the internet, and even responses from their smart assistant have been improved with AI.
Yet, even as AI has become commonplace in society, complex issues remain around its ability to perpetuate societal bias around race, gender, age and sexuality. Countless examples exist of AI solutions reflecting the biases in the data fed into their systems.
Not even major tech companies like Twitter are immune to algorithmic bias. Users of the social media platform began to notice that its image-cropping algorithm would automatically focus on white faces instead of black faces. While the company said the AI had been tested for bias before launch, that testing clearly didn't go far enough.
AI-backed facial recognition solutions have also faced intense criticism, with the 'Gender Shades' project finding that, while facial recognition algorithms achieve high overall classification accuracy, subjects who are female, black and aged between 18 and 30 face higher error rates than other groups.
The prevalence of AI bias is now well known to developers and businesses alike. Technology consulting firm Gartner predicted in 2018 that 85 per cent of AI projects would deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them.
For Ivana Bartoletti, a privacy and ethics expert, the power that AI has to exacerbate existing inequities is vast and more attention needs to be paid to how AI bias can be combated.
“We have internalised the idea that there is nothing more objective, neutral, informative and more efficient than data. This is misleading. When an algorithm is fed data a decision has already been made. Someone has already decided that some data should be chosen and other data should not. And if data is, in reality, people then some of us are being selected while others are being silenced,” said Bartoletti in her book, An Artificial Revolution: On Power, Politics and AI.
Perhaps the largest challenge for businesses is first to identify how pervasive biases have already entered the data they hold, and then to stop these human-created biases from being fed into AI systems.
Due to the complex nature of AI systems, it is especially difficult to uncover potential biases that may appear during use. For example, if the data sets fed into an AI system already contain biases introduced by its human developers, the AI will build on them and produce biased results.
Leading tech organisations have released toolkits that help developers identify and mitigate biases found within machine learning models. The IBM Watson OpenScale service gives developers access to real-time bias detection and mitigation and helps explain how the AI arrives at its results, increasing trust and transparency.
Google, too, has launched its What-If Tool, which offers a detailed visualisation of machine learning model behaviour and lets developers test models against fairness benchmarks to find and address bias.
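Tool specifics aside, the core idea behind this kind of bias check can be sketched in a few lines. The example below is illustrative only (the data, group labels and 0.8 threshold are hypothetical, not taken from OpenScale or the What-If Tool): it compares how often a model's positive outcome is given to two demographic groups, a common fairness metric known as the disparate-impact ratio.

```python
# Illustrative sketch: a simple fairness check on toy model outputs.
# Groups "A"/"B", the predictions and the 0.8 threshold are all hypothetical.

def selection_rate(predictions, groups, group):
    """Fraction of positive (1) predictions given to members of one group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

# Toy model outputs (1 = approved, 0 = rejected) and the group of each subject.
predictions = [1, 0, 1, 1, 1, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = selection_rate(predictions, groups, "A")  # 4/5 = 0.8
rate_b = selection_rate(predictions, groups, "B")  # 2/5 = 0.4

# Disparate-impact ratio: a common rule of thumb flags values below 0.8
# as evidence that the model treats the groups very differently.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
```

Here the ratio is 0.5, well under the 0.8 rule of thumb, so this toy model would be flagged for further investigation. Real toolkits compute many such metrics across many groups and outcomes, but the principle is the same.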
There’s no question that companies will have to spend more time rooting out biases in their AI systems; organisations that fail to deliver fair AI stand to suffer major reputational damage and lose the trust of customers.