Shortcuts and Shortfalls in Meta’s Content Moderation Practices
A Glimpse from its Oversight Board’s First Year of Operation
Keywords: content moderation, social media, oversight board, Meta, context
Social media companies regulate more speech than any government does, yet how they moderate content on their platforms receives little public scrutiny. Two years ago, Meta (formerly Facebook) set up an oversight body, called the Oversight Board, that operates like a Supreme Court for the company's content moderation practices, handling final appeals and issuing policy recommendations. This article examines Meta's approach to content moderation and the role of the Board in steering changes, as revealed by the first 20 decisions that the Board published during its first year of operation. The study identifies interpretive shortcuts that Meta's content moderators frequently deployed, which led to pragmatic deficiencies in their decisions. These interpretive shortcuts are discussed under the notions of decontextualisation, literalisation, and monomodal orientation. Further analysis reveals that these shortcuts are design features rather than bugs in the content moderation system, which is geared toward efficiency and scalability. The article concludes by discussing the challenge of adopting a universal approach to analysing speaker intentionality and warning against a technochauvinistic approach to content moderation.