The Meta Oversight Board said removing the content could impact “freedom of expression and access to information” in the war.
Meta’s Oversight Board has criticized the company’s automated moderation tools as too aggressive after two videos depicting hostages, injured civilians, and possible casualties in the Israel-Hamas war were — it says — unfairly removed from Facebook and Instagram. In a report published on Tuesday, the external review panel determined that the posts should have remained live and that removing them carried a high cost to “freedom of expression and access to information” in the war. (A warning for our readers: the following descriptions of the content may be disturbing.)
One of the removed videos, posted to Facebook, shows an Israeli woman being taken hostage during Hamas’s October 7th attack on Israel, pleading with her kidnappers not to kill her. The other video, published on Instagram, shows what appears to be the aftermath of an Israeli strike on or near al-Shifa Hospital in Gaza City; it contains footage of killed or injured Palestinians, including children.
The board says that, in the case of the latter video, both the removal and the rejection of the user’s appeal to restore the footage were carried out by Meta’s automated moderation tools without any human review. The board reviewed the decision on an “accelerated timeline of 12 days,” and once it took the case, the videos were restored with a content warning screen.
In its report, the board found that moderation thresholds, which had been lowered after the October 7th attack to more easily catch violating content, “also increased the likelihood of Meta mistakenly removing non-violating content related to the conflict.” It says the absence of human-led moderation during these kinds of crises can lead to the “incorrect removal of speech that may be of significant public interest,” and that Meta should have been quicker to allow content “shared for the purposes of condemning, awareness-raising, news reporting or calling for release” with a warning screen applied.
The board also criticized Meta for demoting the two posts once warning screens were applied, excluding them from recommendations to other Facebook and Instagram users even though the company acknowledged the posts were intended to raise awareness. Meta has since responded to the board’s decision overturning the removals, saying that because the panel issued no recommendations, it will provide no further updates on the case.
Meta is hardly the only social media giant under scrutiny for its handling of content about the Israel-Hamas war. Verified users on X (formerly Twitter) have been accused of being “misinformation super-spreaders” by the watchdog organization NewsGuard. TikTok and YouTube are also being examined under the EU’s Digital Services Act following a reported surge of illegal content and disinformation on the platforms, and the EU has opened a formal investigation into X. The Oversight Board case, by contrast, highlights the risks of overmoderation — and the tricky line platforms have to walk.