Shortcuts and Shortfalls in Meta’s Content Moderation Practices
A Glimpse From Its Oversight Board’s First Year of Operation
DOI: https://doi.org/10.15168/cll.v1i2.2365
Keywords: content moderation, social media, oversight board, Meta, context
Abstract
Social media companies regulate more speech than any government does, yet how they moderate content on their platforms receives little public scrutiny. Two years ago, Meta (formerly Facebook) set up an oversight body, called the Oversight Board, that handles final appeals of content moderation decisions and issues policy recommendations. This article examines Meta’s approach to content moderation and the role of the Board in steering changes, as revealed by the first 20 decisions that the Board published during its first year of operation. The study identifies interpretive shortcuts that Meta’s content moderators frequently deployed, which led to pragmatic deficiencies in their decisions. These interpretive shortcuts are discussed under the notions of decontextualisation, literalisation, and monomodal orientation. Further analysis reveals that these shortcuts are design features rather than bugs in a content moderation system geared toward efficiency and scalability. The article concludes by discussing the challenge of adopting a universal approach to analysing speaker intentionality, warning against a technochauvinistic approach to content moderation, and urging the expansion of the Board’s power to focus not only on outcomes but also on processes.
Published
Versions
- 2023-01-10 (2)
- 2022-12-21 (1)
License
© Comparative Law and Language 2022

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.