Facebook (FB) has put its foot on the neck of fake news publishers, announcing Monday that pages that repeatedly post fake news will lose the privilege of advertising on the platform. Denying the distributors of phony reporting the right to give the company money feels like real commitment. Not that Facebook can’t afford to walk away from a few dollars, but no publicly traded company rushes to turn its back on business.
Facebook is one of the chief drivers of referral traffic on the web. Web analytics firm Parse.ly keeps a running tally based on its many customers across the web, and Facebook and Google regularly trade places as the top traffic drivers. As of this writing, the dominant search engine and the dominant social network each account for about 38 percent of referral traffic to Parse.ly’s network. Of all other referral sources, only Twitter and Yahoo have recently managed to break the two percent mark.
How crazy is that?
So anything Facebook does to deny traffic to phony stories is powerful, and we wanted to run down the history of everything the site has done so far in light of this latest announcement.
First, though, let’s clarify what we’re talking about. Fake news is an easy concept to understand. It’s not inadequate reporting, biased reporting, reporting with mistakes or even reporting based on a hoax. Those are all examples of bad work, but they aren’t fake news. Fake news is the cynical fabrication of events that never took place, usually with the objective of generating profit.
If you had a crazy uncle who periodically told you during last year’s election that the Clintons have a history of murdering opponents and you couldn’t figure out where on earth he was getting that nonsensical story, he was reading fake news.
So here’s the history of Facebook’s efforts to undermine the spread of demonstrably false information on its site:
November 12, 2016. Zuck downplays the problem.
Just ahead of the election, BuzzFeed got a big hit reporting on loads of fake news sites running out of Macedonia. Right after the election, the Facebook CEO wrote a tortured Facebook post in which he said the influence of fake news on the election was overblown. Still, he started sketching out some things the site might do to undermine lies.
A few days later, BuzzFeed reported that the top fake news stories on Facebook saw more engagement on the platform than the top stories from real news outlets in the run-up to the election. A Facebook spokesperson cited in the story continued to downplay the importance of this finding.
December 15, 2016. Facebook announces fact-checking flags.
Just before Christmas, Facebook announced that community members would be able to more easily flag news as disputed right on the site. If Facebook had enough clues that a story might be phony, it would turn it over to third-party fact-checkers. It would rely on organizations that had signed on to Poynter’s fact-checking code of principles a few months prior.
This program also had a machine-learning component. Facebook said it would look at the behavior of people who actually clicked through to the link in question. If people who read an article turned out to be much less likely to share it than people who saw only the headline, that was another bad sign that the headline was misleading.
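The click-versus-share signal described above can be sketched as a tiny function. Everything here is an illustrative assumption — the names, the counts and the flagging threshold — not Facebook’s actual implementation:

```python
# Hedged sketch of a read-vs-share misleading-headline signal.
# All names and the 0.5 threshold are assumptions for illustration,
# not Facebook's real system.

def read_share_signal(shares_by_readers, readers,
                      shares_by_non_readers, non_readers):
    """Return True if people who read the article share it far less
    often than people who saw only the headline -- a possible sign
    the headline oversells the story."""
    reader_rate = shares_by_readers / readers if readers else 0.0
    non_reader_rate = (shares_by_non_readers / non_readers
                       if non_readers else 0.0)
    # Flag the post when readers share at under half the rate of
    # people who never clicked through (threshold is illustrative).
    return reader_rate < 0.5 * non_reader_rate

# A post shared heavily by people who never clicked looks suspicious:
print(read_share_signal(5, 1000, 400, 2000))    # True
# A post readers also share looks fine:
print(read_share_signal(300, 1000, 400, 2000))  # False
```

The appeal of a signal like this is that it needs no understanding of the article’s content, only aggregate behavior.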
None of these programs were live yet in December, which is itself noteworthy. Tech companies don’t typically like to talk about products until they are done, but the pressure on Facebook to deal with this problem was such that it needed to say what it was working on.
January 31, 2017. Machine-learning gobbledygook.
Next, Facebook basically posted that it would do machine-learning stuff and, “Trust us, it’ll be good.” For example, if a post is getting hidden a lot, that’s bad. If people you know are engaging with something, then you are more likely to see it. In February, Zuck released a giant letter on building a global community that touched on related points.
March 4, 2017. Users start seeing the “disputed news” flag on the site.
April 6, 2017. Facebook takes more proactive steps externally.
Facebook announced that it would collaborate more with news organizations and back a consortium of funders working to increase broader trust in journalism.
Facebook’s collaborations with news organizations have a history of ending badly. It created pages, and publishers put a lot of effort into convincing readers to “like” their pages as a way to drive organic clicks. In 2015, it announced Instant Articles, which load content more quickly inside the site while still sharing ad revenue. Then in 2016, it announced feeds would favor posts from regular people and deprecate the value of posts made by pages, unless their owners paid to get seen more. Shortly thereafter, it announced it would favor video content over text, and publishers have come to regret their investment in Instant Articles. Now the social network is luring publishers into its YouTube competitor, Watch.
Prediction: publishers will regret investing time in this effort too. So it’s unlikely that anything Facebook does will yield a healthier, more robust reporting infrastructure to compete with the scammers.
Still, its philanthropy sounds good. Any investment in helping Americans understand the value that principled journalism brings to a country is welcome.
May 10, 2017. Crappy sites lose access to Facebook ads.
Menlo Park sent its bots out to crawl web pages looking for spammy, phony or malicious content. It also looked for pages loaded with “disruptive” ads. Facebook wrote that sites found to carry too much garbage and to provide viewers a poor user experience would lose the right to advertise on Facebook. Again, this post was forward-looking: the program wasn’t fully live when the post went up. It was a warning to bad publishers.
This relates to fake news because low quality sources could also lose their advertising privileges.
It seems like a lot of Facebook’s winter ideas have been coming together at the end of this summer. There have been a lot of announcements in August.
August 3, 2017. Related articles may include a fact-checker post.
There’s a good chance that a fact-checker has already reviewed a post circulating on Facebook before the site actually marks it as disputed. So in the meantime, Facebook announced it would start including posts from fact-checking sites in Related Articles, at least sometimes. That way, if a user sees something that seems a little crazy, looking right below the post might surface an article explaining why it is or is not accurate.
Nieman Lab also reported that Facebook had started paying fact-checkers, which is nice of it.
August 9, 2017. This one is crazy.
Apparently fake news sites and other malicious publishers have been hurting badly enough that they started pulling a very sneaky trick called “cloaking”: serving one version of a page to the people reviewing the site on behalf of Facebook and a completely different one to real users. It’s nuts. Anyway, Facebook caught on and took steps to detect sites doing it and punish them.
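As a rough illustration of how cloaking works: a server can key what it returns off the request’s user agent. The page names below are invented; `facebookexternalhit` and `Facebot` are Facebook’s documented crawler user agents, though real cloakers likely also use IP lists and other signals:

```python
# Hedged sketch of the "cloaking" trick: show reviewers a clean page
# and real users the spammy one. Page names are illustrative; the
# user-agent strings are Facebook's documented crawlers.

REVIEWER_AGENTS = ("facebookexternalhit", "Facebot")

def choose_page(user_agent: str) -> str:
    """Return which page a cloaking server would serve for this request."""
    ua = user_agent.lower()
    if any(bot.lower() in ua for bot in REVIEWER_AGENTS):
        return "clean_landing_page.html"   # shown to Facebook's crawlers
    return "spammy_ad_farm.html"           # shown to everyone else

print(choose_page("facebookexternalhit/1.1"))  # clean_landing_page.html
print(choose_page("Mozilla/5.0"))              # spammy_ad_farm.html
```

This is also why simple user-agent checks cut both ways: they make cloaking easy to do, and checking from an undisclosed, ordinary-looking client makes it easy to catch.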
Then Facebook announced it would stop accepting ad money from sites that posted too many misleading stories, as we wrote above.
With that, we’re all caught up.
In a world where it’s easy to pick and choose among sources, people have a strong emotional incentive to find “sources” that buttress their way of thinking. Brooke Binkowski, managing editor of the fact-checking site Snopes, told the BBC, “A lot of people want proof that their world view is the accurate and appropriate one.”
So the larger debate about Facebook’s responsibility in these matters hasn’t abated. Veteran British broadcaster Jon Snow took the occasion of a prestigious lecture to call Facebook’s failure to move swiftly to combat fake news a “threat to democracy.” Adrian Chen (one of the best reporters on internet culture in the world) published a piece in The New Yorker arguing that we tend to see drama like this with every new medium that grabs the public imagination, and it tends to get worked out with time.
There’s no question that Facebook has done a lot of work on this issue. Whether or not it will ultimately work out, however, remains to be seen.