Scrolling through News Feed, it can be hard to judge what stories are true or false. Should you trust everything that you see, even if it comes from someone whom you don’t know well? If you’ve never heard of an article’s publisher, but the headline looks legitimate, is it?
Misinformation comes in many forms and covers a wide range of topics. It can take the shape of memes, links, and more, and it can range from pop culture to politics, or even to critical information during a natural disaster or humanitarian crisis.
We know that the vast majority of people don’t want to share “false news.” We also know people want to see accurate information on Facebook but don’t always know which information or sources to trust, which makes it difficult to discern what’s true and what’s false.
After a year of testing and learning, we’re making a change to how we alert people when they see false news on Facebook. As the designer, researcher, and content strategist driving this work, we want to share how we got here and the challenges that come with designing against misinformation.
What We Learned
In December of last year, we launched a series of changes to identify and reduce the spread of false news in News Feed:
- We made it easier for people to report stories they think are false news
- We partnered with independent fact-checking organizations that review articles that might be false
- We reduced the distribution of articles disputed by fact-checkers
- We launched a collection of features to alert people when fact-checkers have disputed an article, and to let people know if they have shared, or are about to share, false news