EU publishes election security guidance for social media giants and others in scope of DSA

The European Union published draft election security guidelines Tuesday aimed at the roughly two dozen larger platforms, those with more than 45 million regional monthly active users, that are regulated under the Digital Services Act (DSA) and, consequently, have a legal duty to mitigate systemic risks such as political deepfakes while safeguarding fundamental rights like freedom of expression and privacy.

In-scope platforms include the likes of Facebook, Google Search, Instagram, LinkedIn, TikTok, YouTube and X.

The Commission has named elections as one of a handful of priority areas for its enforcement of the DSA on so-called very large online platforms (VLOPs) and very large online search engines (VLOSEs). This subset of DSA-regulated companies is required to identify and mitigate systemic risks, such as information manipulation targeting democratic processes in the region, in addition to complying with the full online governance regime.

Per the EU’s election security guidance, the bloc expects regulated tech giants to up their game on protecting democratic votes and deploy capable content moderation resources in the multiple official languages spoken across the bloc — ensuring they have enough staff on hand to respond effectively to risks arising from the flow of information on their platforms and to act on reports by third-party fact-checkers — with the risk of big fines for dropping the ball.

This will require platforms to pull off a precision balancing act on political content moderation: they will need to reliably distinguish between, for example, political satire, which should remain online as protected free speech, and malicious political disinformation, whose creators could be hoping to influence voters and skew elections.

In the latter case the content falls under the DSA categorization of systemic risk that platforms are expected to swiftly spot and mitigate. The EU standard here requires that they put in place “reasonable, proportionate, and effective” mitigation measures for risks related to electoral processes, as well as respecting other relevant provisions of the wide-ranging content moderation and governance regulation.

The Commission has been working on the election guidelines at pace, launching a consultation on a draft version just last month. The sense of urgency in Brussels flows from upcoming European Parliament elections in June. Officials have said they will stress test platforms’ preparedness next month. So the EU doesn’t appear ready to leave platforms’ compliance to chance, even with a hard law in place that means tech giants are risking big fines if they fail to meet Commission expectations this time around.

User controls for algorithmic feeds

Key among the EU’s recommendations for mainstream social media firms and other major platforms is that they should give their users a meaningful choice over algorithmic and AI-powered recommender systems, so people are able to exert some control over the kind of content they see.

“Recommender systems can play a significant role in shaping the information landscape and public opinion,” the guidance notes. “To mitigate the risk that such systems may pose in relation to electoral processes, [platform] providers… should consider: (i.) Ensuring that recommender systems are designed and adjusted in a way that gives users meaningful choices and controls over their feeds, with due regard to media diversity and pluralism.”
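To ground what “meaningful choices and controls” could look like in practice: the DSA already requires very large platforms to offer at least one recommender option that is not based on profiling, and a chronological feed is the most common way to satisfy that. Below is a minimal sketch of a user-selectable feed; all of the names here (Post, rank_feed, the mode strings) are invented for illustration and are not any platform’s actual API.

```python
# Sketch of a user-selectable feed (illustrative only, not a real platform API).
# A chronological mode provides a non-profiling alternative alongside the
# engagement-ranked default, giving users control over what they see.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    id: str
    created_at: datetime
    engagement_score: float  # platform-computed relevance signal

def rank_feed(posts: list[Post], mode: str = "recommended") -> list[Post]:
    """Order the feed according to the mode the user picked."""
    if mode == "chronological":  # non-profiling option
        return sorted(posts, key=lambda p: p.created_at, reverse=True)
    if mode == "recommended":    # engagement-based default
        return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
    raise ValueError(f"unknown feed mode: {mode}")
```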

Platforms’ recommender systems should also include measures to downrank disinformation targeting elections, based on what the guidance describes as “clear and transparent methods”, such as demoting deceptive content that has been fact-checked as false and/or posts from accounts repeatedly found to spread disinformation.
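To make “clear and transparent methods” concrete, here is a hedged sketch of what a simple, documented downranking rule could look like. The signals and penalty weights below are assumptions for illustration only; the guidance does not prescribe any particular formula.

```python
# Illustrative downranking rule (assumed signals and weights, not anything
# prescribed by the EU guidance): demote content fact-checked as false and
# content from accounts repeatedly found to spread disinformation.
def adjusted_score(base_score: float,
                   fact_checked_false: bool,
                   repeat_offender_account: bool) -> float:
    """Apply transparent, documented penalties to a ranking score."""
    penalty = 1.0
    if fact_checked_false:
        penalty *= 0.25  # strong demotion for fact-checked falsehoods
    if repeat_offender_account:
        penalty *= 0.5   # further demotion for repeat spreaders
    return base_score * penalty

# A post scoring 80 that was fact-checked as false ranks as if it scored 20.
assert adjusted_score(80.0, True, False) == 20.0
```

The virtue of an explicit rule like this is that it can be published, audited and contested, which is presumably what the “transparent” in the guidance is driving at.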

Platforms must also deploy mitigations to avoid the risk of their recommender systems spreading generative AI-based disinformation (aka political deepfakes). They should also be proactively assessing their recommender engines for risks related to electoral processes and rolling out updates to shrink those risks. The EU also recommends transparency around the design and functioning of AI-driven feeds, and urges platforms to engage in adversarial testing, red-teaming and the like to sharpen their ability to spot and quash risks.

On GenAI the EU’s advice also urges watermarking of synthetic media — while noting the limits of technical feasibility here.
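The feasibility caveat is worth unpacking with a toy example. The least-significant-bit (LSB) watermark sketched below, using NumPy purely for illustration, is erased by any lossy step such as JPEG compression, resizing or a screenshot, which is exactly why naive watermarking alone is considered insufficient and signed provenance metadata (for example, the C2PA approach) is also in the mix. This is a toy, not how production watermarking schemes work.

```python
# Toy least-significant-bit watermark, to illustrate why the guidance hedges
# on technical feasibility. The payload lives in the lowest bit of the first
# eight pixel values, so any lossy transform destroys it.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # toy 8-bit payload

def embed(pixels: np.ndarray) -> np.ndarray:
    """Write the payload into the LSBs of the first 8 pixel values."""
    out = pixels.copy().ravel()
    out[:8] = (out[:8] & 0xFE) | MARK  # clear each LSB, then set payload bit
    return out.reshape(pixels.shape)

def detect(pixels: np.ndarray) -> bool:
    """Check whether the payload is still present in the LSBs."""
    return bool(np.array_equal(pixels.ravel()[:8] & 1, MARK))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
assert detect(embed(img))
# Any lossy re-encode (JPEG, resize, screenshot) scrambles the LSBs and the
# mark vanishes: the "limits of technical feasibility" the guidance notes.
```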

The recommended mitigation measures and best practices in the 25 pages of draft guidance published today also lay out an expectation that larger platforms will dial up internal resourcing to focus on specific election threats, such as upcoming election events, and put in place processes for sharing relevant information and risk analysis.

Resourcing should include local expertise

The guidance emphasizes the need for analysis of “local context-specific risks”, as well as Member State-specific, national and regional information gathering to feed the work of the entities responsible for designing and calibrating risk mitigation measures. It also calls for “adequate content moderation resources”, with local language capacity and knowledge of national and/or regional contexts and specificities — a long-running gripe of the EU when it comes to platforms’ efforts to shrink disinformation risks.

Another recommendation is for platforms to reinforce internal processes and resources around each election event by setting up “a dedicated, clearly identifiable internal team” ahead of the electoral period, with resourcing proportionate to the risks identified for the election in question.

The EU guidance also explicitly recommends hiring staffers with local expertise, including language knowledge. Platforms, by contrast, have often sought to repurpose centralized resources without always seeking out dedicated local expertise.

“The team should cover all relevant expertise including in areas such as content moderation, fact-checking, threat disruption, hybrid threats, cybersecurity, disinformation and FIMI [foreign information manipulation and interference], fundamental rights and public participation and cooperate with relevant external experts, for example with the European Digital Media Observatory (EDMO) hubs and independent factchecking organisations,” the EU also writes.

The guidance envisages platforms ramping up resourcing around particular election events and demobilizing teams once a vote is over.

It notes that the periods when extra risk mitigation measures may be needed are likely to vary, depending on the level of risk and on any Member State-specific rules around elections. But the Commission recommends that platforms have mitigations deployed and running at least one to six months before an electoral period, and keep them in place for at least one month after the elections.

Unsurprisingly, the greatest intensity for mitigations is expected in the period prior to the date of elections, to address risks like disinformation targeting voting procedures.

Hate speech in the frame

The EU is generally advising platforms to draw on other existing guidelines, including the Code of Practice on Disinformation and the Code of Conduct on Countering Illegal Hate Speech Online, to identify best practices for mitigation measures. But it stipulates they must ensure users are provided with access to official information on electoral processes, such as banners, links and pop-ups designed to steer users to authoritative information sources for elections.

“When mitigating systemic risks for electoral integrity, the Commission recommends that due regard is also given to the impact of measures to tackle illegal content such as public incitement to violence and hatred to the extent that such illegal content may inhibit or silence voices in the democratic debate, in particular those representing vulnerable groups or minorities,” the Commission writes.

“For example, forms of racism, or gendered disinformation and gender-based violence online including in the context of violent extremist or terrorist ideology or FIMI targeting the LGBTIQ+ community can undermine open, democratic dialogue and debate, and further increase social division and polarization. In this respect, the Code of conduct on countering illegal hate speech online can be used as inspiration when considering appropriate action.”

It also recommends they run media literacy campaigns and deploy measures aimed at providing users with more contextual information, such as fact-checking labels; prompts and nudges; clear indications of official accounts; clear and non-deceptive labelling of accounts run by Member States, third countries and entities controlled or financed by third countries; tools and information to help users assess the trustworthiness of information sources; tools to assess provenance; and processes to counter misuse of any of these procedures and tools. It reads like a list of features Elon Musk has dismantled since taking over Twitter (now X).

Notably, Musk has also been accused of letting hate speech flourish on the platform on his watch. And at the time of writing X remains under investigation by the EU for a range of suspected DSA breaches, including in relation to content moderation requirements.

Transparency to amp up accountability

On political advertising the guidance points platforms to incoming transparency rules in this area — advising they prepare for the legally binding regulation by taking steps to align themselves with the requirements now. (For example, by clearly labelling political ads, providing information on the sponsor behind these paid political messages, maintaining a public repository of political ads, and having systems in place to verify the identity of political advertisers.)
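As an illustration of the kind of record such a public repository could hold, here is a hypothetical sketch loosely modelled on the transparency elements the guidance describes (labelling, sponsor identity, verification, a public archive). The field names are assumptions, not the regulation’s actual schema.

```python
# Hypothetical shape of a public political-ad repository entry; field names
# are illustrative, not the legal schema of the EU regulation.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class PoliticalAdRecord:
    ad_id: str
    sponsor_name: str        # who paid for the message
    sponsor_verified: bool   # outcome of advertiser identity verification
    amount_spent_eur: float
    run_start: date
    run_end: date
    targeting_summary: str   # human-readable targeting description
    election_context: str    # the vote the ad relates to

record = PoliticalAdRecord(
    ad_id="ad-0001",
    sponsor_name="Example Party e.V.",
    sponsor_verified=True,
    amount_spent_eur=2500.0,
    run_start=date(2024, 5, 1),
    run_end=date(2024, 6, 6),
    targeting_summary="adults 18+, Germany",
    election_context="European Parliament 2024",
)
print(json.dumps(asdict(record), default=str, indent=2))  # publishable entry
```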

Elsewhere, the guidance also sets out how to deal with election risks related to influencers.

Platforms should also have systems in place enabling them to demonetize disinformation, per the guidance, and are urged to provide “stable and reliable” data access to third parties undertaking scrutiny and research of election risks. Data access for studying election risks should also be provided for free, the advice stipulates.
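What “stable and reliable” access might mean in engineering terms is versioned snapshots and deterministic pagination, so that a researcher’s queries stay reproducible over time. A minimal sketch follows; every detail (the page size, field names and in-memory stand-in data) is an assumption for illustration.

```python
# Minimal sketch of stable, free research data access: a pinned dataset
# version plus deterministic pagination keeps results reproducible.
DATASET_VERSION = "2024-03-01"  # snapshot identifier researchers can cite
PAGE_SIZE = 100

posts = [{"id": i, "text": f"post {i}"} for i in range(1, 1001)]  # stand-in data

def get_page(page: int) -> dict:
    """Return one stable page of the versioned dataset."""
    start = (page - 1) * PAGE_SIZE
    return {
        "dataset_version": DATASET_VERSION,
        "page": page,
        "items": posts[start:start + PAGE_SIZE],
        "has_more": start + PAGE_SIZE < len(posts),
    }

first = get_page(1)
assert len(first["items"]) == PAGE_SIZE and first["has_more"]
```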

More generally the guidance encourages platforms to cooperate with oversight bodies, civil society experts and each other when it comes to sharing information about election security risks — urging them to establish comms channels for tips and risk reporting during elections.

For handling high-risk incidents, the advice recommends platforms establish an internal incident response mechanism that involves senior leadership and maps other relevant stakeholders within the organization, to drive accountability around their election event responses and avoid the risk of buck-passing.

Post-election, the EU suggests platforms conduct and publish a review of how they fared, factoring in third-party assessments (i.e. rather than just marking their own homework, as they have historically preferred, by putting a PR gloss on ongoing platform manipulation risks).

The election security guidelines aren’t mandatory as such, but if platforms opt for an approach other than the one recommended for tackling threats in this area, they must be able to demonstrate that their alternative meets the bloc’s standard, per the Commission.

If they fail to do that they’re risking being found in breach of the DSA, which allows for penalties of up to 6% of global annual turnover for confirmed violations. So there’s an incentive for platforms to get with the bloc’s program on ramping up resources to address political disinformation and other info risks to elections as a way to shrink their regulatory risk. But they will still need to execute on the advice.

Further specific recommendations for the upcoming European Parliament elections, which will run June 6-9, are also set out in the EU guidance.

On a technical note, the election security guidelines remain in draft at this stage. But the Commission said formal adoption is expected in April once all language versions of the guidance are available.


