Meta says it’s cracking down on violent content following Hamas attacks

Image: Nick Barclay / The Verge

In the three days following the terrorist attacks carried out by Hamas against Israel on October 7th, Meta says it removed “seven times as many pieces of content on a daily basis” for violating its Dangerous Organizations and Individuals policy in Hebrew and Arabic versus the two months prior. The disclosure came as part of a blog post in which the social media company outlined its moderation efforts during the ongoing war in Israel.

Although it doesn’t mention the EU or its Digital Services Act, Meta’s blog post was published days after European Commissioner Thierry Breton wrote an open letter to Meta reminding the company of its obligations to limit disinformation and illegal content on its platforms. Breton wrote that the Commission is “seeing a surge of illegal content and disinformation being disseminated in the EU via certain platforms” and “urgently” asked Meta CEO Mark Zuckerberg to “ensure that your systems are effective.” The commissioner has also written similar letters to X, the company formerly known as Twitter, as well as TikTok.

Almost 800,000 pieces of content “removed or marked as disturbing”

Meta says it “removed or marked as disturbing” over 795,000 pieces of content in the three days following October 7th for violating its policies in Hebrew and Arabic, and notes that Hamas is banned from its platforms. The company is also taking additional temporary measures, such as blocking hashtags and prioritizing Facebook and Instagram Live reports relating to the crisis, and it’s allowing content to be removed without disabling the associated accounts, since the higher volume of removals means some content may be taken down in error.

The operator of Instagram and Facebook adds that it’s established a “special operations center” staffed with experts including fluent Hebrew and Arabic speakers to respond to the situation. That’s notable, given one of the major things Meta (then known as Facebook) did after being criticized for its response to genocidal violence in Myanmar was to build up a team of native Myanmar language speakers.

Even more recently, Meta’s track record on moderation has not been perfect. Members of its Trusted Partner program, which is supposed to allow expert organizations to raise concerns about content on Facebook and Instagram with the company, have complained of slow responses, and it’s faced criticism for shifting moderation policies surrounding the Russia-Ukraine war.

X’s outline of its moderation around the conflict does not mention the languages spoken by its response team. The European Commission has since formally sent X a request for information under its Digital Services Act, citing the “alleged spreading of illegal content and disinformation,” including “the spreading of terrorist and violent content and hate speech.”

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

