In the past decade or so, academic breakthroughs in artificial intelligence, a niche field at the intersection of computer science and statistics, have driven leaps of progress in many consumer products, from Google Translate to Amazon smart speakers to the ranking of posts on your Facebook homepage.
The rapid changes in these everyday work and entertainment tools have fueled rising interest in the underlying technology itself; journalists write about AI tirelessly, and companies, tech or otherwise, brand themselves with AI, machine learning or deep learning whenever they get the chance.
Despite a record volume of AI content, however, the scientists working at this trendy frontier don't necessarily believe the public is better informed.
“I have read many pretty horrible coverages—either because they were making unsubstantiated claims of the technology or its consequences—about machine learning and artificial intelligence,” Kyunghyun Cho, a research scientist at Facebook AI Research and a data science professor at New York University, told Observer.
Earlier this month, Zachary Lipton, an assistant professor in the machine learning department at Carnegie Mellon University, voiced his concerns in a Guardian article about the “AI misinformation epidemic” fueled by the media's voluminous yet shallow reporting on the topic.
For example, last year, Facebook's AI unit published a research paper on how bots can simulate negotiation-like conversations. Left unconstrained, the bots produced erratic, incoherent language. But the researchers found that, by introducing a constraint into the bots' training, they could prevent the bots from generating strange sentences like, “Balls have zero to me to me to me to me to me to me to me to.”
The findings were not a major breakthrough in the field of AI, but a month later the research became the subject of a Fast Company story titled, “AI Is Inventing Language Humans Can’t Understand. Should We Stop It?”
The article, or at least its headline, focused on the wrong side of the story and ignored the research's actual findings. Lipton called the episode a perfect example of a devolution from “interesting-ish research” into “sensationalized crap.”
“The media coverage of AI right now is a retreat from the real science reporting,” Lipton told Observer. “Journalists are not looking at the traditional science journals and academic publications anymore. What’s often happening is that someone comes up with something—that hasn’t been peer-reviewed or published yet—and shares it on their blog sites or through their company’s press release. And that just coincides [with] the huge amount of pressure journalists are under to increase content volume because of the ‘à la carte’ consumption model of online journalism.”
Needless to say, it is a journalist’s job to translate complicated topics like AI into plain, digestible English for average readers. And yet it’s unrealistic to expect journalists, even the best ones on the tech beat, to have the same level of expertise in AI as a professional AI researcher.
As technology essayist Joanne McNeil put it: “If you compare a journalist’s income to an AI researcher’s income, it becomes pretty clear pretty quickly why it is impossible for journalists to produce the type of carefully thought through writing that researchers want done about their work… There are few outlets interested in publishing nuanced pieces and few editors who have the expertise to edit them.”
The same logic applies to many other fields, such as finance, medicine and astronomy. What makes AI special, though, is the hype currently surrounding it. And journalists are not the only ones to blame for flooding the internet with misinformation.
As Lipton pointed out, time-strapped reporters often rely on companies’ press releases or blog posts as primary sources for breaking news. The problem is, these first-hand reports—occasionally written by researchers, more often by communications staff—can be over-embellished to begin with. If it’s an eye-popping story, like Facebook’s robots inventing a new language, more journalists will pick it up, repackage it under an even more attention-grabbing headline and push it out to larger platforms, more often than not without further investigation or fact-checking.
In the meantime, given the public’s rising interest in AI, many tech companies, like Google and Facebook, increasingly use their press pages as a channel to speak directly to consumers rather than to journalists, publishing content about their products much as a news site would.
“Few people have the audacity to run a counter narrative against a popular story,” Lipton observed. “Because by paraphrasing a ‘puff piece,’ you can say, ‘Hey, I’m not the expert. This is just what comes from Google.’ But if you are to run a counter narrative—calling out someone doing junk science, for example—you’d better be sure.”
Another factor accelerating the tech news cycle is the surge in tech and business conferences in recent years.
“I’ve been invited to speak at a bunch of these industry conferences. I’ve also seen a lot of talks that people gave at these events. Most of the time, they are just a ‘buzzword soup’ with no real content. Someone would say, ‘We use a cloud, an internet of things to connect all the devices, using big data to make better models.’ It’s complete bullshit,” Lipton said.
It’s fine if it’s just a few entrepreneurs desperate to promote their businesses by chasing buzzwords; what’s worrisome is that these events are often the place where journalists, politicians and other non-tech people learn about new developments in tech. So, when the messages given on stage are misleading, the impact goes far beyond one conference hall.
“People are afraid about the wrong things,” Lipton told The Guardian. “There are policymakers earnestly having meetings to discuss the rights of robots when they should be talking about discrimination in algorithmic decision-making. But this issue is terrestrial and sober, so not many people take an interest.”
“There is a large degree of freedom in what kind of consequences a general population imagines a new technology, regardless of whether it is groundbreaking or not, would have in the future, and I don’t think it’s a good idea nor possible to regulate this,” Cho said. “I believe, however, it is reasonable to expect news articles to be closer to—but not necessarily only about—the solid facts [rather] than wild speculation. Unfortunately, some news articles seem to have failed to clearly distinguish between the current, realistic capabilities of new technologies and the speculative use/consequence of them.”