
Meta announces measures against misuse of AI in European elections

Mon 26 Feb 2024

Facebook owner Meta said it will create a team to combat disinformation and the misuse of generative artificial intelligence (AI) in the run-up to the European Parliament elections.

The news has arrived amid concerns about interference in elections and misleading AI-generated content. The European Parliament elections are scheduled for 6 to 9 June. Its 720 lawmakers, in conjunction with EU governments, enact new EU policies and laws.

Head of EU Affairs, Marco Pancini, said that while each election is unique, Meta’s dedicated team has been drawing on key lessons learnt from more than 200 elections around the world since 2016, as well as the regulatory framework set out under the Digital Services Act and the company’s commitments under the EU Code of Practice on Disinformation.

“We will also activate an EU-specific Elections Operations Center … and put specific mitigations in place across our apps and technologies in real-time,” said Pancini.

The Elections Operations Center will bring together experts from Meta’s intelligence, data science, engineering, research, operations, content policy, and legal teams to identify potential threats.

Meta to Counter GenAI Risks

Meta said its Community Standards and Ad Standards apply to all content, including AI-generated material, and that it takes action against policy violations.

To enhance transparency, Meta labels photorealistic images created with Meta AI and is developing tools to identify AI-generated content on Facebook, Instagram, and Threads from external sources such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.

Meta stressed that users will soon be required to disclose when they share AI-generated video or audio, and may face penalties if they fail to do so. In cases where digitally altered content poses a high risk of materially deceiving the public on significant issues, a more prominent label is applied to provide context.

Advertisers must disclose the use of photorealistic or digitally altered imagery in certain political or social ads. The company’s ad transparency efforts include verification processes, ‘Paid for by’ disclaimers, and the Ad Library. Between July and December 2023, Meta removed 430,000 ads in the EU for lacking the required disclaimers.

“This work is bigger than any one company and will require a huge effort across industry, government, and civil society,” said Pancini.

Meta to Tackle the Spread of Misinformation

Meta said it will combat misinformation by removing content that could promote ‘imminent violence or physical harm, or that is intended to suppress voting’.

Pancini said Meta plans to include three new partners in Bulgaria, France, and Slovakia. The company currently collaborates with 26 independent fact-checking organisations across the European Union covering 22 languages.

“When content is debunked by these fact-checkers, we attach warning labels to the content and reduce its distribution in Feed so people are less likely to see it,” said Pancini.

Ahead of the election, Meta said it will make it easier for all its fact-checking partners across the EU to find and rate content related to the elections. The company said it will use keyword detection to group related content in one place, making it easier for fact-checkers to find.

Meta Tackles Influence Operations

Meta defined influence operations as coordinated efforts to manipulate or corrupt public debate for a strategic goal, which may or may not include misinformation as a tactic.

Meta has built specialised global teams to stop coordinated inauthentic behaviour and has investigated and taken down over 200 of these adversarial networks since 2017.

“This is a highly adversarial space where deceptive campaigns we take down continue to try to come back and evade detection by us and other platforms, which is why we continuously take action as we find further violating activity,” said Pancini. 

Pancini said Meta also labels state-controlled media on Facebook, Instagram, and Threads so people know when content is from a publication that may be under the editorial control of a government.

“After we applied new and stronger enforcement to Russian state-controlled media, including blocking them in the EU and globally demoting their posts, the most recent research by Graphika shows posting volumes on their pages went down 55% and engagement levels were down 94% compared to pre-war levels,” added Pancini.

According to the data, more than half of all Russian state media assets have stopped posting altogether.

Tech Companies Commit to AI Safety

Earlier this month, tech firms vowed to combat deceptive AI content ahead of global elections involving over four billion voters across more than 40 countries.

The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.

The agreement commits to developing tools for detecting misleading AI-generated images, videos, and audio, along with launching public awareness campaigns to educate voters on deceptive content and acting on such content on their platforms.

According to the companies, technology options for identifying AI-generated content or certifying its origin may involve watermarking or embedding metadata. However, the Accord lacks a specified timeline for meeting these commitments or details on how each company plans to implement them.

The signatories of the Tech Accord to date are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.


