How Facebook Has Discouraged Fake News Since the 2016 Election

Facebook has made it harder for the people who make money circulating fabricated stories. Has it done enough, though?

Transgender activists protest Facebook’s real names policy during a parade in 2015. Max Whittaker/Getty Images

Facebook has put its foot on the neck of fake news publishers, announcing Monday that pages that repeatedly post fake news will lose the privilege of advertising on the platform. Denying the distributors of phony reporting the right to give the company money feels like real commitment. Not that Facebook can’t afford to walk away from a few dollars, but no publicly traded company rushes to turn its back on business.


Facebook is one of the chief drivers of traffic on the internet. Web analytics firm Parse.ly keeps a running tally based on its many customers across the web, and Facebook and Google regularly trade places as the top traffic referrers. As of this writing, the dominant search engine and the dominant social network each account for about 38 percent of referral traffic. Among all other referrers, only Twitter and Yahoo have managed to break the two percent mark recently.

How crazy is that?

So anything Facebook does to deny traffic to phony stories is powerful, and we wanted to run down the history of everything the site has done so far in light of this latest announcement.

First, though, let’s clarify what we’re talking about. Fake news is an easy concept to understand. It’s not inadequate reporting, biased reporting, reporting with mistakes or even reporting based on a hoax. Those are all examples of bad work, but they aren’t fake news. Fake news is the cynical fabrication of events that never took place, usually with the objective of generating profit.

If you had a crazy uncle who periodically told you during last year’s election that the Clintons have a history of murdering opponents and you couldn’t figure out where on earth he was getting that nonsensical story, he was reading fake news.

An anti-government protester in Egypt in 2011 holds a sign praising Facebook. The site was key to organizing mass protests there. John Moore/Getty Images

So here’s the history of Facebook’s efforts to undermine the spread of demonstrably false information on its site:

November 12, 2016. Zuck downplays the problem.

Just ahead of the election, BuzzFeed drew wide attention with its reporting on the many fake news sites operating out of Macedonia. Right after the election, the Facebook CEO wrote a tortured Facebook post in which he said the influence of fake news on the outcome was overblown. Still, he started sketching out some things the site might do to undermine lies.

A few days later, BuzzFeed reported that in the run-up to the election, the top fake news stories on Facebook saw more engagement on the platform than the top stories from real news outlets. A Facebook spokesperson quoted in the story continued to downplay the importance of the finding.

December 15, 2016. Facebook announces fact-checking flags.

Just before Christmas, Facebook announced that community members would be able to more easily flag news as disputed right on the site. If Facebook gathered enough clues that a story might be phony, it would turn the story over to third-party fact-checkers, relying on groups that had signed on to a shared code of fact-checking principles a few months earlier.

This program also had a machine learning component. Facebook said it would look at the behavior of people who actually visited the link in question. If people who clicked through and read a story turned out to be less likely to share it than people who hadn’t read it at all, that was another bad sign.
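To make that concrete, here’s a rough sketch of how such a signal could be computed. To be clear, this is our illustration, not Facebook’s code: the field names and the threshold are made up, and the company has never published the details of its model.

```python
# Illustrative sketch of the "read before sharing" signal described above.
# Every field name and the threshold are hypothetical; Facebook has not
# published how its actual model works.

def share_rate(shares, viewers):
    """Fraction of a group that shared the story, guarding against empty groups."""
    return shares / viewers if viewers else 0.0

def looks_suspicious(story, threshold=0.5):
    # Split the audience into people who clicked through (readers) and
    # people who saw the story in their feed but never opened it.
    reader_rate = share_rate(story["shares_by_readers"], story["readers"])
    blind_rate = share_rate(story["shares_by_non_readers"], story["non_readers"])
    if blind_rate == 0.0:
        return False  # nobody shares it unread, so there's no signal here
    # If readers share far less often than non-readers, the headline may be
    # promising something the article doesn't deliver.
    return reader_rate / blind_rate < threshold

story = {
    "readers": 1_000, "shares_by_readers": 20,
    "non_readers": 5_000, "shares_by_non_readers": 600,
}
print(looks_suspicious(story))  # True: readers share at a sixth the blind rate
```

The intuition is simple: a story people stop sharing once they’ve actually read it is probably promising more than it delivers.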

None of these programs were live yet in December, which is itself noteworthy. Tech companies don’t typically like to talk about products until they are done, but the pressure on Facebook to deal with this problem was such that it needed to say what it was working on.

Facebook users protest the site’s privacy policies in 2010. Justin Sullivan/Getty Images

January 31, 2017. Machine-learning gobbledygook.

Next, Facebook basically posted that it would do machine-learning stuff and, “Trust us, it’ll be good.” For example, if a post is getting hidden a lot, that’s bad. If people you know are engaging with something, then you are more likely to see it. In February, Zuck released a giant letter on building a global community that touched on related points.
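To give a flavor of what those signals could look like in practice, here’s a toy scoring function. The weights and field names are invented for this example and bear no relation to Facebook’s actual ranking model, which is vastly more complicated and, of course, secret.

```python
# A toy combination of the two signals mentioned above. The weights and field
# names are invented for illustration; Facebook's real model is unpublished.

def feed_score(post, viewer_friends):
    score = post["base_relevance"]
    # Signal 1: posts that many viewers hide are probably low quality.
    score -= 5.0 * post["hide_rate"]
    # Signal 2: engagement from people the viewer knows boosts visibility.
    friends_engaged = len(viewer_friends & post["engaged_users"])
    score += 0.5 * friends_engaged
    return score

post = {"base_relevance": 1.0, "hide_rate": 0.2, "engaged_users": {"ann", "bo"}}
print(feed_score(post, viewer_friends={"ann", "cy"}))  # 1.0 - 1.0 + 0.5 = 0.5
```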

March 4, 2017. Users start seeing the “disputed news” flag on the site. 

Mashable reported that people had actually started seeing the disputed news flags previewed in December and that the site had published a help center article explaining the “disputed news” function.

April 6, 2017. Facebook takes more proactive steps externally. 

Facebook announced that it would collaborate more with news organizations and back a consortium of funders working to increase broader trust in journalism.

Facebook’s collaborations with news organizations tend to end badly. It created pages, and publishers put a lot of effort into convincing readers to “like” their pages as a way to drive organic clicks. Then, in 2016, it announced that feeds would favor posts from regular people and deprecate the value of posts made by pages, unless their owners paid to get seen more. In 2015, it had announced Instant Articles, a format that loads content more quickly on the site while still sharing revenue. Shortly thereafter, it announced it would favor video content over text, and publishers have come to regret their investment in Instant Articles. Now the social network is luring publishers into its YouTube competitor, Watch.

Prediction: publishers will regret investing time in this effort too. So it’s unlikely that anything Facebook does will yield a healthier, more robust reporting infrastructure to compete with the scammers.

Still, its philanthropy sounds good. Any investment in helping Americans understand the value that principled journalism brings to a country is welcome.

Pakistani Muslims in 2010 protest Facebook over “sacrilegious” content shared on the site. Arif Ali/AFP/Getty Images

May 10, 2017. Crappy sites lose access to Facebook ads.

Menlo Park sent its bots out to crawl web pages looking for spammy, phony or malicious content. It also looked for pages with “disruptive” ads. Facebook wrote that sites found to host too much garbage and to provide viewers with a poor user experience would lose the right to advertise on Facebook. Again, this post was forward-looking. The program wasn’t fully live when the post went up; it was a warning to bad publishers.

This relates to fake news because low-quality sources could also lose their advertising privileges.
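For a sense of what “too much garbage” might mean in code, here’s a toy page-quality check. The ad markers, the score, and the cutoff are all made up for illustration; Facebook hasn’t published its classifier.

```python
# A toy version of the kind of page-quality check described above. The ad
# markers and the scoring are invented; Facebook's classifier is unpublished.

import re
import urllib.request

DISRUPTIVE_AD_MARKERS = [r"pop[- ]?under", r"autoplay", r"interstitial"]

def garbage_score(url):
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    # Count hypothetical "disruptive ad" patterns per unit of page content.
    hits = sum(len(re.findall(p, html, re.IGNORECASE))
               for p in DISRUPTIVE_AD_MARKERS)
    words = len(re.findall(r"\w+", html))
    return hits / max(words, 1)

# Under the policy, sites scoring above some cutoff would lose ad privileges,
# e.g.: if garbage_score("https://example.com") > 0.01: ...
```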

A lot of Facebook’s winter ideas seem to have come together at the end of this summer; there have been many announcements in August.

August 3, 2017. Related articles may include a fact-checker post.

A fact-checker may well have reviewed a post circulating on Facebook before the site actually marks it as disputed. In the meantime, Facebook announced it would start including posts from fact-checking sites in related articles, at least sometimes. That way, if a user sees something that seems a little crazy, looking right below the post might surface a piece explaining why it is or isn’t.

Nieman Lab also reported that Facebook had started paying fact-checkers, which is nice of them.

Demonstrators protesting a transport fare hike hold a sign that shows supporters how to find them on Facebook. TASSO MARCELO/AFP/Getty Images

August 9, 2017. This one is crazy.

Apparently fake news sites and other malicious publishers have been hurting badly enough that they started pulling a very sneaky trick: coding their links so that one page shows up for people reviewing the site on behalf of Facebook and a different one shows up for a real user. It’s called “cloaking.” It’s nuts. Anyway, Facebook caught on and took steps to detect sites doing it and punish them.
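Here’s a minimal sketch of why cloaking is catchable at all: fetch the same URL twice, once posing as an ordinary browser and once as Facebook’s link crawler, and compare the responses. The “facebookexternalhit” user agent is the one Facebook publicly documents for link fetches; the size-comparison heuristic is our oversimplification, not Facebook’s method.

```python
# A minimal sketch of detecting cloaking: fetch the same URL as an ordinary
# browser and as Facebook's link crawler, then compare what comes back.
# The size-ratio heuristic is a deliberate oversimplification.

import urllib.request

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
CRAWLER_UA = "facebookexternalhit/1.1"  # UA Facebook documents for link fetches

def fetch(url, user_agent):
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

def looks_cloaked(url):
    as_browser = fetch(url, BROWSER_UA)
    as_crawler = fetch(url, CRAWLER_UA)
    # Wildly different responses suggest the reviewer is being shown
    # something other than what real users see.
    smaller = min(len(as_browser), len(as_crawler)) or 1
    larger = max(len(as_browser), len(as_crawler))
    return larger / smaller > 2  # crude; real systems compare actual content

# print(looks_cloaked("https://example.com/article"))
```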

Then Facebook announced it would stop accepting ad money from sites that posted too many misleading stories, as we wrote above.

With that, we’re all caught up.

In a world where it’s easy to pick and choose among sources, people have a strong emotional incentive to find “sources” that buttress their way of thinking. As Brooke Binkowski of the fact-checking site Snopes told the BBC, “A lot of people want proof that their world view is the accurate and appropriate one.”

So the larger debate about Facebook’s responsibility in these matters hasn’t abated. Veteran British broadcaster Jon Snow took the occasion of a prestigious lecture to call Facebook’s failure to move swiftly to combat fake news a “threat to democracy.” Adrian Chen (one of the best reporters on internet culture in the world) published a piece in The New Yorker arguing that we tend to see drama like this with every new medium that grabs the public imagination, and it tends to get worked out with time.

There’s no question that Facebook has done a lot of work on this issue. Whether it will ultimately pay off, however, remains to be seen.
