Shortcuts and Shortfalls in Meta’s Content Moderation Practices
A Glimpse From Its Oversight Board’s First Year of Operation
DOI: https://doi.org/10.15168/cll.v1i2.2365

Keywords: content moderation, social media, Meta, Oversight Board, context

Abstract
Social media companies regulate more speech than any government does, yet how they moderate content on their platforms receives little public scrutiny. Two years ago, Meta (formerly Facebook) set up an oversight body, called the Oversight Board, that handles final appeals of content moderation decisions and issues policy recommendations. This article examines Meta’s approach to content moderation and the Board’s role in steering changes, as revealed by the first 20 decisions that the Board published during its first year of operation. The study identifies interpretive shortcuts that Meta’s content moderators frequently deployed, which led to pragmatic deficiencies in their decisions. These interpretive shortcuts are discussed under the notions of decontextualisation, literalisation, and monomodal orientation. Further analysis reveals that these shortcuts are design features rather than bugs in the content moderation system, which is geared toward efficiency and scalability. The article concludes by discussing the challenge of adopting a universal approach to analysing speaker intentionality, warning against a technochauvinistic approach to content moderation, and urging the expansion of the Board’s power to focus not only on outcomes but also on processes.
Published
2022-12-21

Versions
- 2023-01-10 (2)
- 2022-12-21 (1)
License
Copyright (c) 2022 Comparative Law and Language

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.