The State of Global AI Regulation and Strategy
Fri 25 Aug 2023
As artificial intelligence (AI) continues to gain momentum, so does the need for effective regulation and strategic supervision. Businesses will face the challenging task of fostering innovation and digital transformation while safeguarding against ethical dilemmas and unintended consequences of AI implementation.
In response to these challenges, many governments are proposing and enforcing regulations to promote responsible use of AI.
The UK’s pro-innovation stance seeks to balance technological advancement with ensuring ethical AI use. The EU is seeking to address risks through the AI Act. And the US has published a Blueprint for an AI Bill of Rights. Together, these approaches reflect the complexities of AI governance.
Meanwhile, France’s AI Strategy aims to collaborate on global regulation. Similarly, Germany has stressed the need for trustworthiness and regulatory oversight.
In contrast, Singapore is currently taking a more relaxed stance, offering voluntary guidelines that encourage responsible AI adoption.
What is the UK’s AI strategy?
The National AI Strategy, published September 2021, outlined the Government’s intention to capitalise on the UK’s strengths, focusing on access to talent, data, compute, and finance. Ensuring broad benefits for the economy through widespread adoption of AI across all sectors and regions is stated as a priority.
The strategy also stressed the importance of adapting governance and regulatory regimes to the changing demands of AI, promoting innovation, investment, and protecting citizens’ rights. Public trust and diverse societal involvement are crucial elements in achieving these goals.
Since then, the UK has invested £100 million in a Foundation Model Taskforce, established in June 2023. Led by Ian Hogarth, the Taskforce will contribute to the creation of shared AI safety standards by consolidating expertise from government, industry, and academia.
In August 2023, it was also revealed that the UK intends to spend a separate £100 million of taxpayer money to produce computer chips used for AI.
UK White Paper on AI Innovation
The UK released its White Paper on AI Innovation on 29 March 2023. It outlined five key principles for AI: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
These principles are intended to guide the UK’s approach to AI, with a focus on fostering an environment for AI development whilst addressing potential hazards. Cited risks included physical harm, mental harm and national security threats.
The UK Government stressed that unless trust is built with the public, the country will miss out on the potential benefits of AI.
As AI evolves, the Government’s regulatory approach may also require adjustment. The framework is said to be ‘agile, iterative, and non-statutory initially’, allowing for collaboration with existing regulators. The UK is also set to work with international partners to ensure ‘international compatibility’ for AI regulation.
“To ensure our regulatory framework is effective, we will leverage the expertise of our world class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI.
“By underpinning the framework with a set of principles, we will drive consistency across regulators while also providing them with the flexibility needed,” said Michelle Donelan, the UK’s Secretary of State for Science, Innovation and Technology.
The Competition and Markets Authority (CMA) expressed support for the White Paper. The competition regulator began examining how it might provide guidance on interpreting these principles whilst emphasising clarity and consistency.
“We support government’s approach of leveraging and building on existing regulatory regimes whilst also establishing a central coordination function for monitoring and support. We think this will achieve the context-specific approach to regulation that government is aiming for,” said the CMA.
However, the Equality and Human Rights Commission (EHRC) was less complimentary. It stated that the framework ‘falls short’ of what is required to tackle the risks to human rights and equality, and insisted that more funding for regulators is urgently needed to manage the potential of this rapidly advancing technology.
“To rise to this challenge, we need to boost our capability and scale up our operation as a regulator of equality and human rights. We cannot do that without government funding,” said Baroness Kishwer Falkner, Chairwoman of the EHRC.
> Read more: UK Government’s AI White Paper sparks mixed reactions
What is the EU’s AI strategy?
The EU’s strategy involves fostering excellence in AI by enabling its development and adoption, ensuring the EU is a healthy environment for AI, unlocking its benefits to society, and building leadership in high-impact industries.
The EU has invested in AI research and development by aligning priorities with the global AI landscape whilst coordinating investments from both public and private sectors. Access to high-quality data is crucial, as the EU looks to establish the necessary infrastructure for building robust AI systems.
Trust is of high priority for the EU. Its AI Act attempts to address fundamental rights and safety risks specific to AI systems. The end goal is to create a safe and innovation-friendly environment for AI users, developers, and deployers.
The EU AI Act
The EU AI Act outlines rules for AI following a risk-based approach, with obligations scaled to the level of risk an AI system could create.
The Act proposes to strictly prohibit AI systems that present unacceptable levels of risk, including those used for discriminatory purposes and social scoring. Specific legal requirements and regulations would be enforced for high-risk applications.
EU lawmakers have included obligations for providers of foundation models to assess and mitigate risks, comply with requirements, and register in the EU database.
Generative AI, like ChatGPT, would have to comply with transparency requirements, including disclosing that content is AI-generated, designing models to prevent the generation of illegal content, and publishing summaries of copyrighted data used for training.
In June 2023, the Act was passed by European Parliament lawmakers, setting out the Parliament’s position ahead of negotiations with member states.
While the EU AI Act is generally viewed as a positive step in AI ethics, some organisations have suggested improvements to address broader societal impacts, enhance adaptability, and clarify regulations for high-risk systems.
Opposition to the Act suggested that it has loopholes and exceptions, limiting its effectiveness in ensuring AI’s positive impact. For instance, while facial recognition by the police is banned, exceptions exist for delayed image capture and finding missing children. Some also argued that the Act lacks flexibility to label new dangerous AI applications as ‘high-risk’ in the future.
Professor Lilian Edwards, Expert Legal Adviser at the Ada Lovelace Institute, said the EU AI Act is a strong foundation for comprehensive AI regulation, but remains flawed.
“The AI Act is itself an excellent starting point for a holistic approach to AI regulation. However, there is no reason why the rest of the globe should unquestioningly follow an ambitious, yet flawed, regime.
“The AI Act is not ambitious enough at assessing and seeing off the risks caused by AI,” said Edwards.
The Future of Life Institute offered recommendations to improve the proposal. They said that the Act should ‘ensure that AI providers consider the impact of their applications on society at large’ rather than just the individual.
Research centres at The University of Cambridge were generally more positive about the proposal. They supported the risk-based approach towards AI systems in different contexts, sharing optimism that the regulation can help set international standards for AI.
> Read more: What are AI Ethics, Principles and Governance?
What is France’s AI strategy?
France called for global AI regulation after the EU AI Act was proposed.
With the country keen to be regarded as a world-class AI hub, regulations are an attractive option for leaders as they compete for dominance in the AI landscape.
“What we want is a regulation that offers both protection for users … and that establishes trust, but is also flexible enough to allow for the [future] development in France and Europe,” said Digital Minister Jean-Noel Barrot.
Barrot has been critical of the EU AI Act in the past.
“My worry is that in the recent past few weeks, the EU Parliament … has taken a very sort of strong stance on AI regulation, using in some sense this AI Act as a way to try and solve too many problems at once,” said Barrot to CNBC.
Nonetheless, France is eager to establish global regulations on AI. President Emmanuel Macron proposed that the G7 and the Organisation for Economic Co-operation and Development (OECD) are ‘good platforms’ to launch regulation.
Finance Minister Bruno Le Maire also expressed a desire to collaborate with the US on legislation.
“On regulation as well, I think this is absolutely vital to have an in-depth discussion with the American authorities on the best way of regulating artificial intelligence,” said Le Maire.
France has previously set out a strategy aimed at becoming a world leader in the AI arena. Back in March 2018, Macron presented his vision and five-year national AI strategy to that end.
The strategy, titled AI for Humanity, focused on excellence and trust in AI. The main objectives included improving AI education, establishing an open data policy, and developing an ethical framework for transparent and fair AI use.
To achieve these goals, the French government allocated £1.2 billion (€1.5 billion) for AI development by the end of 2022, with £604 million (€700 million) dedicated to research.
The strategy emphasised promoting human capital by providing financial incentives to higher education and research institutions, increasing diversity in AI, and facilitating vocational training and lifelong learning.
Regarding infrastructure, the strategy focused on data policy initiatives, such as high-performance computing and data sharing platforms.
The strategy also addressed AI’s application to societal challenges, including climate and environmental issues, and the response to the COVID-19 pandemic.
France’s strategy has undergone two phases, with the second phase in 2021 and 2022 prioritising education, embedded AI, trustworthy AI, and AI’s role in the ecological transition of companies.
The country also established the Comité National Pilote d’Éthique du Numérique (CNPEN), France’s Pilot National Digital Ethics Committee, in 2020. It is responsible for addressing ethical AI matters, regulation, and the transparent use of AI technologies in France. The CNPEN also guides policies on AI that take into account human rights, inclusion, diversity, innovation, and economic growth.
What is Germany’s AI strategy?
While Germany’s AI strategy is still in its early stages, the Government is committed to increasing its investment in AI, with a total planned expenditure of £4.3 billion (€5 billion) by 2025.
The strategy, known as ‘AI Made in Germany’, was introduced in 2017 as a national policy initiative. It aimed to position Germany as a leading player in AI, capable of acting independently of the European level.
The strategy focused on formal education and training reforms to develop AI-related skills in the workforce. Initiatives included expanding learning platforms, creating AI professorships, and promoting STEM subjects among students.
The Government supported research and innovation in AI through funding schemes and initiatives. This included creating competence centres and reality labs, and funding start-ups and research projects. Collaborations between academia and businesses were emphasised through initiatives like R&D networks, AI platforms and centres of expertise.
Regarding regulation, the Government addressed AI-related issues such as competition law, data protection, and data use. Ethical guidelines for AI development were also encouraged.
Digital infrastructure to support AI development was highlighted as a necessity. Data sharing facilities, a national research data infrastructure, and the GAIA-X project were set up.
The Government also called for funding of AI applications that benefit the environment and address climate change challenges.
Like many nations, the strategy focused on international cooperation and partnerships to ensure responsible AI development aligned with global goals and standards.
“Politics must ensure that a technology that is significant for everyone, but controlled by only a few, is supervised by a regulatory authority and proven trustworthy before its implementation,” said Petra Sitte, a German politician representing Die Linke.
In recent years, Germany has made few shifts in its stance on AI and has issued few policy updates.
What is Singapore’s AI strategy?
Singapore has been actively investing in AI to drive its digital economy. The Government launched a National AI Strategy in 2019, with the goal of making Singapore a leader in scalable AI solutions by 2030.
Key initiatives include AI Singapore, the Advisory Council on Ethical Use of AI and Data, as well as grants and incentives for AI adoption.
As for AI regulation, Singapore has not enacted any binding AI rules.
“We are currently not looking at regulating AI,” Lee Wan Sie, Director for Trusted AI and Data at Singapore’s Infocomm Media Development Authority, told CNBC in June 2023.
Instead, the country has put forth ‘voluntary guidelines for individuals and businesses’ in the Model AI Framework. Policy papers from various regulatory agencies also provide guidance on AI governance and ethics.
Singapore also launched AI Verify in May 2022, a self-assessment AI governance testing framework and toolkit for organisations to ensure fairness, explainability, and safety in their AI systems.
Data used for AI remains regulated by Singapore’s Personal Data Protection Act (PDPA), which requires consent for the collection, use, and disclosure of personal data. Recent amendments allow for deemed consent and exceptions under certain circumstances.
Organisations using large datasets are strongly recommended to be cautious about assuming data is anonymised, as advancements in technology could potentially deanonymise data, making it subject to the PDPA.
Other legislation includes the Cybersecurity Act, which oversees national digital security and imposes duties on critical information infrastructure. The Protection from Online Falsehoods and Manipulation Act 2019 (POFMA) addresses fake news and misinformation, criminalising the use of bots to communicate false statements.
What is the USA’s AI strategy?
The topic of AI in the United States has been a growing concern in recent years. Lawmakers have recognised the need for a regulatory framework to address its potential risks.
At present, regulation in the United States is still in its infancy, with no comprehensive federal legislation dedicated solely to AI. However, steps have been taken to introduce legislation and guidance.
The Biden-Harris Administration secured voluntary commitments from seven leading AI companies in July 2023 to promote the safe, secure, and transparent development of AI technology. These companies include Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
This initiative is part of the Administration’s broader commitment to responsible AI development and protecting the public from AI-related risks and discrimination.
“To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation does not come at the expense of Americans’ rights and safety,” said the White House.
The commitments are based on three key principles: safety, security, and trust. This is to ensure that AI innovation does not compromise Americans’ rights and safety.
The companies have pledged to conduct security testing before releasing AI systems, share information on managing AI risks, and invest in cybersecurity measures. They will also develop mechanisms to inform users when content is AI-generated, report on AI capabilities and limitations, and prioritise research to mitigate societal risks of AI like bias and discrimination.
The Blueprint for an AI Bill of Rights
The Blueprint for an AI Bill of Rights, introduced by the White House, was published in October 2022 with the aim of ensuring the ethical and responsible development of AI.
The Blueprint emphasised principles like privacy protection, transparency, fairness, and accountability. At its core, the Blueprint sought to safeguard individual rights and prevent AI discrimination.
By championing public access to information and promoting education about AI, the Blueprint aimed to foster a knowledgeable and informed society. It envisioned a future where AI is inclusive, beneficial, and respectful of human values through collaborative efforts among stakeholders.
Proponents of the Blueprint saw it as a necessity, advocating that AI-related issues should be treated as civil rights concerns deserving new protections.
“Although this Blueprint does not have the force of law, the choice of language and framing clearly positions it as a framework for understanding AI governance broadly as a civil-rights issue, one that deserves new and expanded protections under American law,” said Janet Haven, the Executive Director for the Data & Society Research Institute.
The Blueprint faced criticism from those who believed it should do more to address the rapidly evolving AI landscape.
“What I would like to see in addition to the Bill of Rights are executive actions and more congressional hearings and legislation to address the rapidly escalating challenges of AI as identified in the Bill of Rights,” said Russell Wald, Director of Policy for Stanford’s Institute for Human-Centered Artificial Intelligence.
Daniel Castro, Director of the Center for Data Innovation, saw the Blueprint as a method for businesses or organisations to evade accountability for potential harms caused by AI technology.
“The AI Bill of Rights is an insult to both AI and the Bill of Rights. Americans do not need a new set of laws, regulations, or guidelines focused exclusively on protecting their civil liberties from algorithms. Using AI does not give businesses a ‘get out of jail free’ card,” said Castro.
Others believed the Blueprint did not go far enough.
“We don’t need a blueprint, we need bans,” said Albert Fox Cahn, Founder and Executive Director at the Surveillance Technology Oversight Project.
What does the future hold for AI regulation?
AI technologies will continue to advance and permeate various sectors, making the need for effective regulation and strategic oversight imperative.
Different countries are taking varied approaches to the challenges posed by AI, each seeking to foster innovation while safeguarding against ethical dilemmas. International cooperation and regulatory harmonisation are also key focuses for governments.
But as AI becomes increasingly pervasive in various aspects of society, regulating its development and use poses unique challenges.
Striking the right balance between fostering innovation and ensuring ethical, transparent, and safe AI practices is essential.
Missteps in regulation could stifle technological advancement, limit economic growth, and hinder the potential benefits AI can bring to society. Meanwhile, lax or inadequate regulation may lead to AI systems with biases, discriminatory outcomes, and potential risks to individual rights and safety.
Effective and comprehensive AI regulation will be instrumental in building public trust, encouraging responsible AI development, and maximising the societal benefits while minimising the potential pitfalls of this transformative technology.