AI Chatbots Aren’t as Close to Human Intelligence as You Think

Despite jaw-dropping results, ChatGPT and other bots can only reproduce a small fraction of what makes humans intelligent.


This story is syndicated from the Substack newsletter Big Technology; subscribe for free here.


AI chatbots are getting so good, people are starting to see them as human. Several users have recently called the bots their best friends, others have professed their love, and a Google engineer even helped one hire a lawyer. From a product standpoint, these bots are extraordinary. But from a research perspective, the people dreaming about their human-level intelligence are due for a reality check.

Chatbots today are trained only on text, a debilitating limitation. Ingesting mountains of the written word can produce jaw-dropping results—like rewriting Eminem in Shakespearean style—but it prevents perception of the nonverbal world. Much of human intelligence is never written down. We pick up our innate understanding of physics, craft, and emotion not by reading, but by living. Without written material on these topics to train on, the AI comes up short.

“The understanding these current systems have of the underlying reality that language expresses is extremely shallow,” said Yann LeCun, Meta’s chief AI scientist and a professor of computer science at New York University. “It’s not a particularly big step towards human-level intelligence.”

Holding up a sheet of paper, LeCun demonstrated ChatGPT’s limited understanding of the world in a recent Big Technology Podcast episode. The bot wouldn’t know what would happen if he let go of the paper with one hand, LeCun predicted. When asked, ChatGPT said the paper would “tilt or rotate in the direction of the hand that is no longer holding it.” For a moment—given its presentation and confidence—the answer seemed plausible. But the bot was dead wrong.

LeCun’s paper moved toward the hand still holding it, something humans know instinctively. ChatGPT, however, blanked out because people rarely describe the physics of letting go of a sheet of paper in text. (Perhaps until now.)

“I can come up with a huge stack of similar situations, each one of them will not have been described in any text,” LeCun said. “So then the question you want to ask is, ‘How much of human knowledge is present and described in text?’ And my answer to this is a tiny portion. Most of human knowledge is not actually language-related.”

Without an innate understanding of the world, AI can’t predict. And without prediction, it can’t plan. “Prediction is the essence of intelligence,” said LeCun. This explains, at least in part, why self-driving cars are still bumbling through a world they don’t completely understand. And why chatbot intelligence remains limited—if still powerful—despite the anthropomorphizing.
