Here’s How Facebook Decides What You Can And Can’t See
The Guardian published the presentation as a series of slideshows organized by topic: sadism, violence, child abuse…
One of the slides outlining how the company handles depictions of graphic violence appeared with the following editor’s note:
“Some use language we would not usually publish, but to understand Facebook’s content policies, we decided to include it. See for yourself how Facebook polices what users post.”
It’s important to remember that Facebook’s moderators remove content “on report only,” meaning that millions of Facebook users could see a graphic image or video – such as a beheading – before it’s removed.
As one report notes, the guidelines “may also alarm free speech advocates concerned about Facebook’s de facto role as the world’s largest censor. Both sides are likely to demand greater transparency.”
Facebook currently employs about 4,500 “content moderators” and recently announced plans to hire 3,000 more, the Guardian reported.
Here are some notable excerpts highlighted by the Guardian:
- Remarks such as “Someone shoot Trump” should be deleted, because as a head of state he is in a protected category. But it can be permissible to say: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat”, or “fuck off and die” because they are not regarded as credible threats.
- Videos of violent deaths, while marked as disturbing, do not always have to be deleted because they can help create awareness of issues such as mental illness.