Facebook Still Has a Content Problem—And It’s a Real Issue for Democracy

Facebook’s ‘War Room’ at its Menlo Park headquarters was launched late last year to serve as the company’s nerve center in the fight against misinformation. NOAH BERGER/AFP/Getty Images

White supremacists sharing content online is how other white supremacists become sufficiently radicalized to murder dozens of worshipers in a black church, a synagogue and a mosque.

The obvious conclusion to draw is that if it were a little harder to use a platform with two billion people on it to spread a demonstrably dangerous ideology, one that consistently and predictably results in violence, perhaps the world would be a better place, rather than the one friendly to white supremacist violence that we currently inhabit.

And yet despite deplatforming’s proven success—turns out if you take away a demagogue’s microphone, it’s harder to hear them!—it took Facebook until Tuesday to connect these glaring dots. As Motherboard first reported, beginning next week, white nationalist and white supremacist content will be banned from both Facebook and Instagram.

Yet according to The Guardian, Facebook took this very modest and long-overdue step under what sounds like duress. The company consulted academics for three months—all of whom would have told Facebook it was still giving white nationalists a platform—but took action only after a white separatist murdered 50 people at two mosques in Christchurch, New Zealand. In a blog post explaining the move, the company appeared to suggest that, actually, it had considered white nationalism and white separatism acceptable views, akin to “American nationalism,” a patently nonsensical position it justified by citing Wikipedia articles.

In other words, Facebook can’t be trusted to police itself. The company’s efforts at self-regulation are neither timely nor adequate. The consequences are deadly, but they are also undemocratic.

Facebook is where 43 percent of Americans say they get their news. Together, Facebook and Google account for 85 percent of the global digital ad market. Media companies are still struggling to figure out how to produce news content and stay viable without the ad revenue those two Silicon Valley monsters have captured. And when accurate, trustworthy reporting dries up, it leaves a gap that an enterprising foreign misinformation campaign can exploit.

This has already happened. Facebook allowed foreign agents bent on influencing the 2016 election to buy political ads. The company says it has finally stopped this practice—now political ads can only be shown in the country of origin—but Facebook is not the only Silicon Valley giant with a laissez-faire attitude undermining liberal representative democracy.

As Michael McFaul, the former U.S. ambassador to Russia, wrote in Foreign Affairs last year, tech companies’ aversion to government regulation and unwillingness to self-regulate make them useful idiots for autocrats bent on disrupting American democracy.

McFaul is particularly worried about Russia—for obvious reasons; Vladimir Putin openly desires to “interrogate” the former ambassador, an unprecedented development that President Donald Trump famously called “an incredible offer”—but the problems McFaul identified could be exploited by any foreign actor.

“Readers must know who created and paid for the articles they read and the videos they watch,” he wrote. He has called for search engines not to over-represent or prioritize information originating from foreign governments, and for Washington policymakers to ensure, “through regulation,” that the internet does not become a dystopian hive. This seems reasonable, but it has not happened.

After watching Facebook stumble and slouch towards removing Turner Diaries fans from its platform and hearing Jack Dorsey mumble and prevaricate about why removing Nazis from Twitter is just too hard—even as YouTube (and parent company Google) finds the time to remove tens of millions of potentially harmful videos—it seems clear that government intervention, or at least the threat of it, is necessary to ensure more damage isn’t done.

How likely is this to happen? Not before 2020. Regulating Silicon Valley in this way will likely be unpopular: the companies are big spenders in elections, and anyone taking them on can expect to draw their ire. And there will be the predictable whining from free-speech absolutists. Nearly all of it will be in bad faith; what isn’t will be ignorant.

Political speech in the United States is strictly regulated. There are rules on campaigning at polling places; there are rules about disclosing who paid for what ad; and there are rules limiting who can buy ads at all, as well as how much they can spend. And while the Supreme Court has ruled that prior restraint is presumptively unconstitutional, the court has also ruled that speakers can be held responsible for the consequences of their speech.

It will be impossible to know for certain whether it was a populist moment, Hillary Clinton’s bizarrely inadequate campaign, James Comey’s inexcusable breach of law enforcement protocol, or Mark Zuckerberg making money off of Russian-planted fake news that swung the 2016 election—most likely it was some combination—but it is beyond debate that misinformation on the internet is causing serious, far-reaching and potentially irreversible problems.

The stock Silicon Valley mindset is like that of a privileged teen who lives a consequence-free life. It is past time to admit that the real world does not work like that and that a blithe attitude carries a bill. Elizabeth Warren’s calls for Silicon Valley giants nearing monopoly status to be broken up for the sake of competition should be the beginning. It’s not only the market that needs saving.
