EU’s draft election security guidelines for tech giants take aim at political deepfakes

Share via:


The European Union has launched a consultation on draft election security mitigations aimed at larger online platforms, such as Facebook, Google, TikTok and X (Twitter), that includes a set of recommendations it hopes will shrink democratic risks from generative AI and deepfakes — in addition to covering off more well-trodden ground such as content moderation resourcing and service integrity; political ads transparency; and media literacy. The overall goal for the guidance is to ensure tech giants take due care and attention to a full sweep of election-related risks that might bubble up on their platforms, including as a result of easier access to powerful AI tools.

The EU is aiming the election security guidelines at the nearly two dozen platform giants and search engines that are currently designated under its rebooted ecommerce rules, aka the Digital Services Act (DSA).

Concerns that advanced AI systems like large language models (LLMs) which are capable of outputting highly plausible sounding text and/or realistic imagery, audio or video have been riding high since last year’s viral boom in generative AI — which saw tools like OpenAI’s AI chatbot, ChatGPT, becoming household names. Since then scores of generative AIs have been launched, including a range of models and tools developed by long established tech giants, like Meta and Google, whose platforms and services routinely reach billions of web users.

“Recent technological developments in generative AI have enabled the creation and widespread use of artificial intelligence capable of generating text, images, videos, or other synthetic content. While such developments may bring many new opportunities, they may lead to specific risks in the context of elections,” text the EU is consulting on warns. “[G]enerative AI can notably be used to mislead voters or to manipulate electoral processes by creating and disseminating inauthentic, misleading synthetic content regarding political actors, false depiction of events, election polls, contexts or narratives. Generative AI systems can also produce incorrect, incoherent, or fabricated information, so called ‘hallucinations’, that misrepresent the reality, and which can potentially mislead voters.”

Of course it doesn’t take a staggering amount of compute power and cutting edge AI systems to mislead voters. Some politicians are experts in producing ‘fake news’ just using their own vocal chords, after all. And even on the tech tool front malicious agents don’t need fancy GenAIs to execute a crudely suggestive edit of a video (or manipulate digital media in other, even more basic ways) in order to create potentially misleading political messaging that can quickly be tossed onto the outrage fire of social media to be fanned by willingly triggered users (and/or amplified by bots) until the divisive flames start to self-spread (driving whatever political agenda lurks behind the fake).

See, for a recent example, a (critical) decision by Meta’s Oversight Board of how the social media giant handled an edited video of US president Biden, which called on the parent company to rewrite “incoherent” rules around fake videos since, currently, such content may be treated differently by Meta’s moderators — depending on whether it’s been AI generated or edited in a more basic way.

Notably — but unsurprisingly — then, the EU’s guidance on election security doesn’t limit itself to AI-generated fakes either.

While, on GenAI, the bloc is putting a sensible emphasis on the need for platforms to tackle dissemination (not just creation) risks too.

Best practices

One suggestion the EU is consulting on in the draft guidelines is that the labelling of GenAI, deepfakes and/or other “media manipulations” by in-scope platforms should be both clear (“prominent” and “efficient”) and persistent (i.e. travels with content if/when it’s reshared) — where the content in question “appreciably resemble existing persons, objects, places, entities, events, or depict events as real that did not happen or misrepresent them”, as it puts it.

There’s also a further recommendation platforms provide users with accessible tools so they can add labels to AI generated content.

The draft guidance goes on to suggest “best practices” to inform risk mitigation measures may be drawn from the EU’s (recently agreed legislative proposal) AI Act and its companion (but non-legally binding) AI Pact, adding: “Particularly relevant in this context are the obligations envisaged in the AI Act for providers of general-purpose AI models, including generative AI, requirements for labelling of ‘deep fakes’ and for providers of generative AI systems to use technical state-of-the-art solutions to ensure that content created by generative AI is marked as such, which will enable its detection by providers of [in-scope platforms].”

The draft election security guidelines, which are under public consultation in the EU until March 7, include the overarching recommendation that tech giants put in place “reasonable, proportionate, and effective” mitigation measures tailored to risks related to (both) the creation and “potential large-scale dissemination” of AI-generated fakes.

The use of watermarking, including via metadata, to distinguish AI generated content is specifically recommended — in order that such content is “clearly distinguishable” for users. But the draft says “other types of synthetic and manipulated media” should get the same treatment too.

“This is particularly important for any generative AI content involving candidates, politicians, or political parties,” the consultation observes. “Watermarks may also apply to content that is based on real footage (such as videos, images or audio) that has been altered through the use of generative AI.”

Platforms are urged to adapt their content moderation systems and processes so they’re able to detect watermarks and other “content provenance indicators”, per the draft text, which also suggests they “cooperate with providers of generative AI systems and follow leading state of the art measures to ensure that such watermarks and indicators are detected in a reliable and effective manner”; and asks them to “support new technology innovations to improve the effectiveness and interoperability of such tools”.

The bulk of the DSA, the EU’s content moderation and governance regulation, applies to a broad sweep of digital businesses from later this month — but already (since the end of August) the regime applies for almost two dozen (larger) platforms, with 45M+ monthly active users in the region. More than 20 so-called very large online platforms (VLOPs) and very large online search engines (VLOSEs) have been designated under the DSA so far, including the likes of Facebook, Instagram, Google Search, TikTok and YouTube.

Extra obligations these larger platforms face (i.e. compared to non-VLOPs/VLOSEs) include requirements to mitigate systemic risks arising from how they operate their platforms and algorithms in areas such as democratic processes. So this means that — for example — Meta could, in the near future, be forced into adopting a less incoherent position on what to do about political fakes on Facebook and Instagram — or, well, at least in the EU, where the DSA applies to its business. (NB: Penalties for breaching the regime can scale up to 6% of global annual turnover.)

Other draft recommendations aimed at DSA platform giants vis-a-vis election security include a suggestion they make “reasonable efforts” to ensure information provided using generative AI “relies to the extent possible on reliable sources in the electoral context, such as official information on the electoral process from relevant electoral authorities”, as the current text has it; and that “any quotes or references made by the system to external sources are accurate and do not misrepresent the cited content” — which the bloc anticipates will work to “limit… the effects of ‘hallucinations’”.

Users should also be warned by in-scope platforms of potential errors in content created by GenAI; and pointed towards authoritative sources of information, while the tech giants should also put in place “safeguards” to prevent the creation of “false content that may have a strong potential to influence user behaviour”, per the draft.

Among the safety techniques platforms could be urged to adopt is “red teaming” — or the practice of proactively hunting for and testing potential security issues. “Conduct and document red-teaming exercises with a particular focus on electoral processes, with both internal teams and external experts, before releasing generative AI systems to the public and follow a staggered release approach when doing so to better control unintended consequences,” it currently suggests.

GenAI deployers in-scope of the DSA’s requirement to mitigate system risk should also set “appropriate performance metrics”, in areas like safety and factual accuracy of answers given to questions on electoral content, per the current text; and “continually monitor the performance of generative AI systems, and take appropriate actions when needed”.

Safety features that seek to prevent the misuse of the generative AI systems “for illegal, manipulative and disinformation purposes in the context of electoral processes” should also be integrated into AI systems, per the draft — which gives examples such as prompt classifiers, content moderation and other types of filters — in order for platforms to proactively detect and prevent prompts that go against their terms of service related to elections.

On AI generated text, the current recommendation is for VLOPs/VLOSEs to “indicate, where possible, in the outputs generated the concrete sources of the information used as input data to enable users to verify the reliability and further contextualise the information” — suggesting the EU is leaning towards a preference for footnote-style indicators (such as AI search engine You.com typically displays) for accompanying generative AI responses in risky contexts like elections.

Support for external researchers is another key plank of the draft recommendations — and, indeed, of the DSA generally, which puts obligations on platform and search giants to enable researchers’ data access for the study of systemic risk. (Which has been an early area of focus for the Commission’s oversight of platforms.)

“As AI generated content bears specific risks, it should be specifically scrutinised, also through the development of ad hoc tools to perform research aimed at identifying and understanding specific risks related to electoral processes,” the draft guidance suggests. “Providers of online platforms and search engines are encouraged to consider setting up dedicated tools for researchers to get access to and specifically identify and analyse AI generated content that is known as such, in line with the obligation under Article 40.12 for providers of VLOPs and VLOSEs in the DSA.”

The current draft also touches on the use of generative AI in ads, suggesting platforms adapt their ad systems to consider potential risks here too — such as by providing advertisers with ways to clearly label GenAI content that’s been used in ads or promoted posts; and to require in their ad policies that the label be used when the advertisement includes generative AI content.

The exact steerage the EU will push on platform and search giants when it comes to election integrity will have to wait for the final guidelines to be produced in the coming months. But the current draft suggests the bloc intends to produce a comprehensive set of recommendations and best practices.

Platforms will be able to choose not to follow the guidelines but they will need to comply with the legally binding DSA — so any deviations from the recommendations could encourage added scrutiny of alternative choices (hi Elon Musk!). And platforms will need to be prepared to defend their approaches to the Commission, which is both producing guidelines and enforcing the DSA rulebook.

The EU confirmed today that the election security guidelines are the first set in the works under the VLOPs/VLOSEs-focused Article 35 (“Mitigation of risks”) provision, saying the aim is to provide platforms with “best practices and possible measures to mitigate systemic risks on their platforms that may threaten the integrity of democratic electoral processes”.

Elections are clearly front of mind for the bloc, with a once-in-five-year vote to elect a new European Parliament set to take place in early June. And there the draft guidelines even includes targeted recommendations related to the European Parliament elections — setting an expectation platforms put in place “robust preparations” for what’s couched in the text as “a crucial test case for the resilience of our democratic processes”. So we can assume the final guidelines will be made available long before the summer.

Commenting in a statement, Thierry Breton, the EU’s commissioner for internal market, added:

With the Digital Services Act, Europe is the first continent with a law to address systemic risks on online platforms that can have real-world negative effects on our democratic societies. 2024 is a significant year for elections. That is why we are making full use of all the tools offered by the DSA to ensure platforms comply with their obligations and are not misused to manipulate our elections, while safeguarding freedom of expression.



Source link

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

Popular

More Like this

EU’s draft election security guidelines for tech giants take aim at political deepfakes


The European Union has launched a consultation on draft election security mitigations aimed at larger online platforms, such as Facebook, Google, TikTok and X (Twitter), that includes a set of recommendations it hopes will shrink democratic risks from generative AI and deepfakes — in addition to covering off more well-trodden ground such as content moderation resourcing and service integrity; political ads transparency; and media literacy. The overall goal for the guidance is to ensure tech giants take due care and attention to a full sweep of election-related risks that might bubble up on their platforms, including as a result of easier access to powerful AI tools.

The EU is aiming the election security guidelines at the nearly two dozen platform giants and search engines that are currently designated under its rebooted ecommerce rules, aka the Digital Services Act (DSA).

Concerns that advanced AI systems like large language models (LLMs) which are capable of outputting highly plausible sounding text and/or realistic imagery, audio or video have been riding high since last year’s viral boom in generative AI — which saw tools like OpenAI’s AI chatbot, ChatGPT, becoming household names. Since then scores of generative AIs have been launched, including a range of models and tools developed by long established tech giants, like Meta and Google, whose platforms and services routinely reach billions of web users.

“Recent technological developments in generative AI have enabled the creation and widespread use of artificial intelligence capable of generating text, images, videos, or other synthetic content. While such developments may bring many new opportunities, they may lead to specific risks in the context of elections,” text the EU is consulting on warns. “[G]enerative AI can notably be used to mislead voters or to manipulate electoral processes by creating and disseminating inauthentic, misleading synthetic content regarding political actors, false depiction of events, election polls, contexts or narratives. Generative AI systems can also produce incorrect, incoherent, or fabricated information, so called ‘hallucinations’, that misrepresent the reality, and which can potentially mislead voters.”

Of course it doesn’t take a staggering amount of compute power and cutting edge AI systems to mislead voters. Some politicians are experts in producing ‘fake news’ just using their own vocal chords, after all. And even on the tech tool front malicious agents don’t need fancy GenAIs to execute a crudely suggestive edit of a video (or manipulate digital media in other, even more basic ways) in order to create potentially misleading political messaging that can quickly be tossed onto the outrage fire of social media to be fanned by willingly triggered users (and/or amplified by bots) until the divisive flames start to self-spread (driving whatever political agenda lurks behind the fake).

See, for a recent example, a (critical) decision by Meta’s Oversight Board of how the social media giant handled an edited video of US president Biden, which called on the parent company to rewrite “incoherent” rules around fake videos since, currently, such content may be treated differently by Meta’s moderators — depending on whether it’s been AI generated or edited in a more basic way.

Notably — but unsurprisingly — then, the EU’s guidance on election security doesn’t limit itself to AI-generated fakes either.

While, on GenAI, the bloc is putting a sensible emphasis on the need for platforms to tackle dissemination (not just creation) risks too.

Best practices

One suggestion the EU is consulting on in the draft guidelines is that the labelling of GenAI, deepfakes and/or other “media manipulations” by in-scope platforms should be both clear (“prominent” and “efficient”) and persistent (i.e. travels with content if/when it’s reshared) — where the content in question “appreciably resemble existing persons, objects, places, entities, events, or depict events as real that did not happen or misrepresent them”, as it puts it.

There’s also a further recommendation platforms provide users with accessible tools so they can add labels to AI generated content.

The draft guidance goes on to suggest “best practices” to inform risk mitigation measures may be drawn from the EU’s (recently agreed legislative proposal) AI Act and its companion (but non-legally binding) AI Pact, adding: “Particularly relevant in this context are the obligations envisaged in the AI Act for providers of general-purpose AI models, including generative AI, requirements for labelling of ‘deep fakes’ and for providers of generative AI systems to use technical state-of-the-art solutions to ensure that content created by generative AI is marked as such, which will enable its detection by providers of [in-scope platforms].”

The draft election security guidelines, which are under public consultation in the EU until March 7, include the overarching recommendation that tech giants put in place “reasonable, proportionate, and effective” mitigation measures tailored to risks related to (both) the creation and “potential large-scale dissemination” of AI-generated fakes.

The use of watermarking, including via metadata, to distinguish AI generated content is specifically recommended — in order that such content is “clearly distinguishable” for users. But the draft says “other types of synthetic and manipulated media” should get the same treatment too.

“This is particularly important for any generative AI content involving candidates, politicians, or political parties,” the consultation observes. “Watermarks may also apply to content that is based on real footage (such as videos, images or audio) that has been altered through the use of generative AI.”

Platforms are urged to adapt their content moderation systems and processes so they’re able to detect watermarks and other “content provenance indicators”, per the draft text, which also suggests they “cooperate with providers of generative AI systems and follow leading state of the art measures to ensure that such watermarks and indicators are detected in a reliable and effective manner”; and asks them to “support new technology innovations to improve the effectiveness and interoperability of such tools”.

The bulk of the DSA, the EU’s content moderation and governance regulation, applies to a broad sweep of digital businesses from later this month — but already (since the end of August) the regime applies for almost two dozen (larger) platforms, with 45M+ monthly active users in the region. More than 20 so-called very large online platforms (VLOPs) and very large online search engines (VLOSEs) have been designated under the DSA so far, including the likes of Facebook, Instagram, Google Search, TikTok and YouTube.

Extra obligations these larger platforms face (i.e. compared to non-VLOPs/VLOSEs) include requirements to mitigate systemic risks arising from how they operate their platforms and algorithms in areas such as democratic processes. So this means that — for example — Meta could, in the near future, be forced into adopting a less incoherent position on what to do about political fakes on Facebook and Instagram — or, well, at least in the EU, where the DSA applies to its business. (NB: Penalties for breaching the regime can scale up to 6% of global annual turnover.)

Other draft recommendations aimed at DSA platform giants vis-a-vis election security include a suggestion they make “reasonable efforts” to ensure information provided using generative AI “relies to the extent possible on reliable sources in the electoral context, such as official information on the electoral process from relevant electoral authorities”, as the current text has it; and that “any quotes or references made by the system to external sources are accurate and do not misrepresent the cited content” — which the bloc anticipates will work to “limit… the effects of ‘hallucinations’”.

Users should also be warned by in-scope platforms of potential errors in content created by GenAI; and pointed towards authoritative sources of information, while the tech giants should also put in place “safeguards” to prevent the creation of “false content that may have a strong potential to influence user behaviour”, per the draft.

Among the safety techniques platforms could be urged to adopt is “red teaming” — or the practice of proactively hunting for and testing potential security issues. “Conduct and document red-teaming exercises with a particular focus on electoral processes, with both internal teams and external experts, before releasing generative AI systems to the public and follow a staggered release approach when doing so to better control unintended consequences,” it currently suggests.

GenAI deployers in-scope of the DSA’s requirement to mitigate system risk should also set “appropriate performance metrics”, in areas like safety and factual accuracy of answers given to questions on electoral content, per the current text; and “continually monitor the performance of generative AI systems, and take appropriate actions when needed”.

Safety features that seek to prevent the misuse of the generative AI systems “for illegal, manipulative and disinformation purposes in the context of electoral processes” should also be integrated into AI systems, per the draft — which gives examples such as prompt classifiers, content moderation and other types of filters — in order for platforms to proactively detect and prevent prompts that go against their terms of service related to elections.

On AI generated text, the current recommendation is for VLOPs/VLOSEs to “indicate, where possible, in the outputs generated the concrete sources of the information used as input data to enable users to verify the reliability and further contextualise the information” — suggesting the EU is leaning towards a preference for footnote-style indicators (such as AI search engine You.com typically displays) for accompanying generative AI responses in risky contexts like elections.

Support for external researchers is another key plank of the draft recommendations — and, indeed, of the DSA generally, which puts obligations on platform and search giants to enable researchers’ data access for the study of systemic risk. (Which has been an early area of focus for the Commission’s oversight of platforms.)

“As AI generated content bears specific risks, it should be specifically scrutinised, also through the development of ad hoc tools to perform research aimed at identifying and understanding specific risks related to electoral processes,” the draft guidance suggests. “Providers of online platforms and search engines are encouraged to consider setting up dedicated tools for researchers to get access to and specifically identify and analyse AI generated content that is known as such, in line with the obligation under Article 40.12 for providers of VLOPs and VLOSEs in the DSA.”

The current draft also touches on the use of generative AI in ads, suggesting platforms adapt their ad systems to consider potential risks here too — such as by providing advertisers with ways to clearly label GenAI content that’s been used in ads or promoted posts; and to require in their ad policies that the label be used when the advertisement includes generative AI content.

The exact steerage the EU will push on platform and search giants when it comes to election integrity will have to wait for the final guidelines to be produced in the coming months. But the current draft suggests the bloc intends to produce a comprehensive set of recommendations and best practices.

Platforms will be able to choose not to follow the guidelines but they will need to comply with the legally binding DSA — so any deviations from the recommendations could encourage added scrutiny of alternative choices (hi Elon Musk!). And platforms will need to be prepared to defend their approaches to the Commission, which is both producing guidelines and enforcing the DSA rulebook.

The EU confirmed today that the election security guidelines are the first set in the works under the VLOPs/VLOSEs-focused Article 35 (“Mitigation of risks”) provision, saying the aim is to provide platforms with “best practices and possible measures to mitigate systemic risks on their platforms that may threaten the integrity of democratic electoral processes”.

Elections are clearly front of mind for the bloc, with a once-in-five-year vote to elect a new European Parliament set to take place in early June. And there the draft guidelines even includes targeted recommendations related to the European Parliament elections — setting an expectation platforms put in place “robust preparations” for what’s couched in the text as “a crucial test case for the resilience of our democratic processes”. So we can assume the final guidelines will be made available long before the summer.

Commenting in a statement, Thierry Breton, the EU’s commissioner for internal market, added:

With the Digital Services Act, Europe is the first continent with a law to address systemic risks on online platforms that can have real-world negative effects on our democratic societies. 2024 is a significant year for elections. That is why we are making full use of all the tools offered by the DSA to ensure platforms comply with their obligations and are not misused to manipulate our elections, while safeguarding freedom of expression.



Source link

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

Website Upgradation is going on for any glitch kindly connect at office@startupnews.fyi

More like this

Solana-based DApps rake in record fees as memecoin frenzy...

Five of the top 10 crypto protocols by...

Apple may soon be discontinuing the Lightning to Headphone...

Originally introduced in 2016, it now seems like...

An Ohio man guilty of Bitcoin laundering must forfeit...

An Ohio man named Larry Dean Harmon will...

Popular

Upcoming Events

Startup Information that matters. Get in your inbox Daily!