EU grills Elon Musk’s X about content moderation and deepfake risks



The European Union has deepened the investigation into Elon Musk-owned social network X that it opened back in December under the bloc’s online governance and content moderation rulebook, the Digital Services Act (DSA). Confirmed breaches of the regime could prove expensive for Musk, as enforcers are empowered to issue fines of up to 6% of global annual turnover.

On Wednesday the Commission said it has sent X a formal request for information (RFI) under the DSA seeking more details about aspects of the ongoing probe — which is looking into concerns about illegal content risks, manipulative design, shortcomings in ads transparency and platform data access for researchers.

The RFI also targets some fresh concerns. The EU says it is asking X about its content moderation activities and resources in light of the platform’s latest Transparency report (another DSA requirement), which reveals X has cut the headcount of its content moderation team by roughly a fifth (20%) since the previous report in October 2023.

The report also revealed X has reduced the linguistic coverage of its content moderation within the EU from 11 official languages to seven, a particular bugbear for the Commission, which has raised the issue before as part of its longer-standing efforts to press platforms to tackle content harms.

Another fresh EU concern relates to X’s approach to generative AI. The Commission said it’s seeking further details on “risk assessments and mitigation measures linked to the impact of generative AI tools on electoral processes, dissemination of illegal content, and protection of fundamental rights”.

X is regulated as a so-called very large online platform (VLOP) under the DSA, which means it is subject to an additional layer of rules, overseen by the Commission itself, requiring it to assess and mitigate systemic risks in areas such as disinformation.

“The request for information sent today is a further step in an ongoing investigation,” the EU said in a press release. “It builds upon the evidence gathering and analysis conducted so far, including in relation to X’s Transparency report published in March 2024 and X’s replies to previous requests for information, which addressed, among others, mitigation measures for risks linked to generative AI.”

Back in March the Commission sent a flurry of RFIs to several VLOPs, including X, asking for more info on their approach to handling risks related to the use of generative AI. The EU is concerned about the role political deepfakes could play in upcoming elections to the European Parliament next month.

The latest RFI gives X until May 17 to respond to the questions about content moderation resources and generative AI, and until May 27 to supply the other requested information to the Commission.

X was contacted for comment on the development but had not responded by press time.

During a briefing with journalists last month, a senior Commission official declined to offer a full update on the investigation into X but characterized contacts with the company as “quite intense”.

The official also confirmed that one active topic of discussion relates to X’s Community Notes feature, which crowdsources additional context to display on disputed posts and which X under Musk has framed as its main approach to content moderation. The official added that it is unclear whether this approach is sufficiently robust to respond to election risks.


