Content moderation is, inherently, a subjective practice.

Despite some people's desire to have content moderation be more scientific and objective, that's impossible.

By definition, content moderation is always going to rely on judgment calls, and many of those judgment calls will end up in gray areas where lots of people's opinions may differ greatly.

Indeed, one of the problems of content moderation that we've highlighted over the years is that to make good decisions you often need a tremendous amount of #context, and there's simply no way to adequately provide that at scale in a manner that actually works.

That is, when doing content moderation at scale, you need to set rules, but rules leave little to no room for understanding context and applying it appropriately. And thus, you get lots of crazy edge cases that end up looking bad.

We've seen this directly. Last year, when we turned an entire conference of "content moderation" specialists into content moderators for an hour, we found that in none of the eight cases we presented could we get all attendees to agree on what should be done.

Further, people truly underestimate the impact that "#scale" has on this equation.

Getting 99.9% of content moderation decisions to an "acceptable" level probably works fine when you're dealing with 1,000 moderation decisions per day, but large platforms are dealing with far more than that.

If you assume 1 million decisions are made every day, then even with 99.9% "accuracy" (and, remember, there's no such thing, given the points above), you're still going to "miss" 1,000 calls.

But 1 million is nothing. On Facebook alone, a recent report noted, there are 350 million photos uploaded every single day. And that's just photos. At a 99.9% accuracy rate, that still means "mistakes" on 350,000 images. Every. Single. Day.

So, add another 350,000 mistakes the next day. And the next. And the next. And so on. (A quick numeric sketch of this arithmetic follows at the end of this post.)

And even if you could achieve such high "accuracy," with so many mistakes it wouldn't be difficult for, say, a journalist to go searching, find a bunch of those mistakes, and point them out. This will often come attached to a line like "well, if a reporter can find those bad calls, why can't Facebook?", which leaves out that Facebook DID find the other 99.9%.

Obviously, these numbers are just illustrative, but the point stands: when you're doing content moderation at scale, the scale part means that even if you're very, very, very, very good, you will still make a ridiculous number of mistakes in absolute terms every single day.

So while I'm all for exploring different approaches to content moderation, and see no issue with people calling out failures when they (frequently) occur, it's important to recognize that there is no perfect solution to content moderation, and any company, no matter how thoughtful, deliberate, and careful, is going to make mistakes.
Because that's #Masnick's #Impossibility #Theorem. And unless you can disprove it, we're going to assume it's true.

https://www.techdirt.com/2019/11/20/masnicks-impossibility-theorem-content-moderation-scale-is-impossible-to-do-well/
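For the curious, here's a minimal back-of-envelope sketch of the scale arithmetic above, in Python. The 99.9% accuracy figure and the 350 million daily photo uploads come straight from the post; the 30-day horizon is purely an illustrative assumption.

```python
# Back-of-envelope: absolute mistakes implied by a given accuracy at scale.
# Figures from the post: 350M photos/day on Facebook, 99.9% "accuracy".
# The 30-day horizon is an illustrative assumption, not from the post.

DAILY_ITEMS = 350_000_000  # photos uploaded per day (per the post)
ACCURACY = 0.999           # hypothetical "very, very good" moderation accuracy
DAYS = 30                  # illustrative horizon

daily_mistakes = DAILY_ITEMS * (1 - ACCURACY)
total_mistakes = daily_mistakes * DAYS

print(f"Mistakes per day:     {daily_mistakes:>12,.0f}")  # 350,000
print(f"Mistakes in {DAYS} days:  {total_mistakes:>12,.0f}")  # 10,500,000
```

Even a tiny error rate, applied to hundreds of millions of items, compounds into millions of bad calls within a month, which is exactly the point of the theorem.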