In a few short years, artificial intelligence (AI) has developed from a burgeoning technology into an everyday tool for millions of people. Yet the growing popularity of these tools has been accompanied by rising concern about AI ethics.
AI shows no signs of slowing down: chatbot tools such as ChatGPT now attract more than 1 billion monthly page visits, and businesses across all industries are increasingly embracing AI-backed tools and solutions.
Despite this, according to the Accenture Tech Vision report, just 35% of consumers trust how AI is being deployed by businesses. At present, more than three-quarters (77%) of consumers believe companies that misuse AI should be held accountable.
What are AI ethics and responsible AI?
At a time when many organisations are preparing to scale up their usage of AI, it is essential that AI ethics and governance are seriously considered.
Many different ethical principles should be examined before embarking on a major AI expansion, including bias, fairness, and accountability.
Tackling AI bias
Firstly, all AI systems should be unbiased and fair to all users. In practice, this goal is difficult to fully achieve due to the well-reported biases found within AI datasets.
According to recent research from the University of Southern California, more than a third (38%) of the “facts” used by AI were biased. Continually auditing the data fed into AI systems for bias can reduce these problems considerably.
As the inner workings of AI solutions are often poorly understood, being as transparent as possible about how customer data is used by AI can help build trust and encourage adoption.
Ensuring accountability in AI
Naming individuals or teams as being in charge of different stages of AI processes will help make sure human oversight is available at every stage of development.
At every level of the design and implementation process, safety controls should be put in place to mitigate challenges if mistakes or issues arise. Even in the best-designed AI ecosystems, errors can occur, but creating solutions to these potential issues before they happen will support ethical and accountable AI practices.
Proposed regulations and governance
With AI regulations being introduced around the world, businesses would benefit from establishing AI governance and ethics initiatives early to avoid potential regulatory pitfalls in the future.
The EU’s AI Act
If passed, the European Union’s AI Act will be the world’s first comprehensive set of rules for artificial intelligence, with the potential to become a global standard. The proposed rules follow a risk-based approach, with obligations scaled to the level of risk an AI system could create.
Any system deemed to pose an unacceptable level of risk, such as those used for social scoring, will be strictly prohibited. This prohibition also extends to AI systems that may be used to discriminate, as well as those that use biometric classification to identify sensitive characteristics such as gender or race.
Under the proposal, high-risk applications will have to meet specific legal requirements and regulations.
Members of the European Parliament also included obligations for providers of foundation models to assess and mitigate risks, in addition to complying with design, environmental and information requirements. Providers must also register in the EU database for monitoring purposes.
Whilst the AI Act could be considered a positive step towards AI ethics and governance, multiple organisations have offered recommendations for improvement.
The Future of Life Institute (FLI) stated that ‘the Act should ensure that AI providers consider the impact of their applications on society at large, not just the individual’.
The University of Cambridge’s Leverhulme Centre for the Future of Intelligence and Centre for the Study of Existential Risk are generally in favour of the AI Act, praising its risk-based approach and implementation of AI ethics.
However, researchers within these centres offered feedback on the AI Act, expressing concerns about how quickly it can adapt to the rapid development of AI, how ‘high-risk’ systems are defined, and how those systems will be regulated.
The UK’s AI White Paper
The UK government recently introduced its pro-innovation White Paper on AI Regulation. This White Paper, spearheaded by the Secretary of State for Science, Innovation and Technology, Michelle Donelan MP, has sparked mixed reactions from various stakeholders.
The White Paper outlines five key principles for AI: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
These principles aim to guide the UK’s approach to AI, with a focus on creating a conducive environment for AI development and addressing potential risks. These risks can range from physical harm and national security threats to mental health issues.
The Competition and Markets Authority (CMA) has expressed support for the Government’s non-statutory approach, which aims to leverage and build on existing regulatory regimes.
BCS, The Chartered Institute for IT (formerly the British Computer Society), also spoke out in favour of the AI White Paper, welcoming the Government’s commitment to helping UK companies become global leaders in AI while developing it within responsible principles.
However, the AI White Paper has also faced criticism. The Equality and Human Rights Commission (EHRC) has argued that the safeguards included in the paper are ‘inadequate’ and ‘fall short of what is needed to tackle the risks to human rights’.
Implementing AI ethics and governance
As we embrace the dawn of the AI era, the importance of AI ethics, principles, and governance cannot be overstated.
The rapid advancement and pervasive integration of AI systems for consumers and businesses pose both opportunities and challenges. While AI offers tools of convenience and efficiency, it also brings forth issues of trust, bias, accountability, and fairness that must be addressed.
Regulations like the proposed AI Act from the European Union and the UK’s White Paper on AI Regulation represent a crucial step in establishing a comprehensive framework for AI governance.
The ethical use and governance of AI is a collective responsibility that extends beyond technology companies and regulators. To effectively achieve ethical AI use, all consumers, organisations, and businesses must be involved to shape a future with AI that is not only efficient and innovative, but also fair, accountable, and ethical.