None of the big tech companies are trying to pretend that fake news isn’t a real problem anymore. Mark Zuckerberg has gone from denying its impact to seeking a new product head to take on the problem. At SXSW, Google invited two fake news creators on stage as an exercise in live research.
The woman who organized that event, Yasmin Green, runs Google’s Jigsaw product, which seeks ways to make the web safer for all users.
“I think that’s a silver lining of everything we’ve seen,” Green said today at the WIRED Business Conference in New York. “They are internet-wide problems.”
“We’ve already had very encouraging early conversations with other platforms,” she continued.
Companies like Facebook and Twitter have been willing to collaborate with Google to find ways to stop the propagation of false narratives masquerading as news. Green just got back from Macedonia, which has been a hotbed for these phony stories. WIRED ran a cover story profiling some of the fake news distributors in that country.
“The part that we’ve really been trying to understand is, what does the dissemination of this look like?” Green said, explaining that it doesn’t matter if someone writes a garbage post if no one reads it. That’s why it’s important to involve the social media platforms in any discussion about curtailing the spread of fake news.
In Macedonia, Green realized that the strength of those creating fake news was not writing bogus stories, but using social media. In fact, many of the people who ran the sites didn’t even speak English. They just cut and pasted other people’s stories, but they knew how to get those stories out.
“I think there’s a bigger challenge for us as a sector in trying to organize information that’s helpful,” Green said.
The Observer just took a look at that today in an article comparing searches around big news topics on Google, Microsoft’s Bing and Qwant, a privacy-oriented search engine from France. Comparing them side by side, Google really seems to have fallen behind in the way it organizes its search results. When it comes to news search, Google seems protected now largely by a global force of habit.
In many ways, information on the internet is self-correcting. When people try to share false information, it’s easy for others to point that out. That only works if people with good intentions are willing to engage in online conversation, though.
“I’m not keen to jump into a conversation about politics or religion or really much of any conversation online,” Green admitted, because the dialogue is so uniformly inflammatory. That hostile environment cedes the floor to trolls.
Green’s team also has a product called Perspective that helps platforms (including news organizations’ websites) flag and filter inflammatory content. Google built it to judge not the substance of a comment but its tone. It focused first on identifying the kinds of comments that tend to make people leave conversations. Thousands of people helped by tagging a dataset of over a million comments, and the team found it was easiest to reach consensus on the tenor of comments that would drive non-trolls away.
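For a sense of how a platform might plug into that scoring, here is a minimal sketch of a moderation check built around Perspective. The endpoint path, the `TOXICITY` attribute name, the request body, and the response shape are assumptions drawn from Perspective's public documentation, not details reported in this article; the threshold value is hypothetical.

```python
import json

# Endpoint path is an assumption based on Perspective's public docs.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text):
    """Build the JSON body asking Perspective to score one comment's toxicity."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def should_flag(response, threshold=0.8):
    """Flag a comment whose toxicity summary score meets the threshold.

    The response shape (attributeScores -> TOXICITY -> summaryScore -> value)
    is an assumption about the API's JSON output; threshold is hypothetical.
    """
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold

# In practice a platform would POST build_request(...) to API_URL with an
# API key; here we parse a hand-written sample response instead of calling out.
sample_response = json.loads(
    '{"attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.91}}}}'
)
print(should_flag(sample_response))  # 0.91 >= 0.8, so this comment gets flagged
```

The point of the design is that Perspective returns a score rather than a verdict, so each platform can pick its own threshold and decide whether to hide, downrank, or merely flag a comment.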
If regular people feel more comfortable joining in, that means more folks will be on hand to call bullshit when someone’s clearly peddling fiction.