A Scientist’s View on Why the AI Apocalypse Isn’t the End of the World

Kyunghyun Cho, assistant professor of computer science and data science at New York University. Kaitlyn Flannagan for Observer

Artificial intelligence is often portrayed by mainstream media as a “black-box” technology: we pose a problem, feed a raw set of data into an algorithmic system and let it figure out a solution by itself—and, over time, as the system learns from its own previous experiences, it gets better at problem-solving.

But no one knows what exactly happens between these steps. 

However, that doesn’t stop Silicon Valley investors from pouring venture capital into startups that build businesses on artificial intelligence and vow to change the world. According to CB Insights data, venture capital funding for AI startups increased more than eightfold between 2012 and 2016.

As the AI buzz gets louder, worries around the so-called “AI apocalypse” have emerged.

People fear that robots will replace humans in most functions in our society. AI robots are already serving as supermarket cashiers, baristas, stock advisors and even pets. A McKinsey study projects that, by 2030, 800 million human jobs will be replaced by robots.

However, inside AI academia, scientists see a slightly different picture.

“Machines still have a long way to go to replace humans,” Kyunghyun Cho, a research scientist at Facebook AI Research and a data science professor at New York University, told Observer in a recent interview. 

Cho is a rising star in machine translation, a subfield of computational linguistics that has seen major breakthroughs in recent years thanks to the application of AI. Cho was named to Bloomberg’s list of “people to watch in 2018.” Geoffrey Hinton, a computer science professor at the University of Toronto who is regarded as “the Godfather of AI,” told Bloomberg that Cho’s work had a huge impact on machine translation.

Machine translation aims to use software to translate text or speech from one language to another. Scientists have been working in this field for decades, but major progress didn’t come until the last five years, when large-scale neural networks were first applied to power the translation process. 

The technology is now widely used in everyday internet tools and home devices like Google Translate, Apple’s Siri and Amazon’s Alexa smart speakers.

Today, in a controlled environment with little noise, software can transcribe speech faster and more accurately than humans can, Cho told Observer. In translation, however, algorithms are only good at processing text in units of words and short sentences. When complex grammar and context are involved, computers are still far from threatening human jobs. 

“The progress we’ve made in machine translation is exciting. But, it’s not that exciting,” Cho said. “Let’s say you want to build sophisticated software that has very complex intelligence like a human does. You need to design a set of algorithms to enable it to see, hear, speak and touch.”

“But all those abilities are just the first step,” he continued. “Once a machine knows how to perceive the world, there are still so many complicated things on top of that. For example, it needs to be able to reason things, to plan and even to imagine. We’ve only just taken a very, very small step.”

The same can be said for the broader scope of AI. 

“We have to be careful with exaggerated claims. We are still far from human-level AI, and those who claim machines with human-like behavior—for example, natural dialogue—are just feeding the hype and hurting the field,” said Yoshua Bengio, a computer science professor at the University of Montreal in Canada. “Science moves by small steps and builds on the work of many others.” 

Nevertheless, Cho believes that everything humans can do is eventually replaceable by computers, as algorithms get “smarter” and the data they learn from becomes sufficient. 

“Our brains are just biologically implemented computers, in some sense,” he said. 

No matter what stage AI research is at right now, practitioners are already using the technology to the fullest extent.

For instance, facial recognition technology is used in military drones that some fear could become “killer robots” in future warfare; China has adopted facial recognition technology nationwide to surveil its citizens; and even in the niche field of machine translation, researchers have found that algorithms can amplify sexism and racism in human language. All of these concerns have raised worries about AI’s potential to rattle the ethical order of human society. 

But Cho doesn’t believe scientists are to blame. 

“The inherent risk of technology is its users, who are going to exploit the technology for their risky behaviors. Inventors cannot be blamed, because, as soon as we solely blame inventors, we effectively kill any kind of innovation,” he said. 

“Machine learning is like computers in the 1970s. Even Bill Gates and IBM didn’t think computers would be as popular as they are today. So, it’s going to be everywhere. The infusion of venture capital is definitely good, and venture capitalists should invest more in this sector,” he added.

Bengio estimates that there are currently thousands of researchers working in AI academia and at least 10 times as many working on the industry side. 

“Research output has accelerated mostly because many more researchers are involved, and we have better funding for this research, but many important challenges remain ahead of us. Downplaying the current successes in order to sell some new direction is not a better approach, either,” he told Observer. 
