Google CEO Sundar Pichai Says A.I.’s Potential Downsides Keep Him Up At Night

Pichai says A.I. is potentially more important than the discovery of fire and electricity.

Google CEO Sundar Pichai. Ron Jenkins/Getty Images

Google (GOOGL) CEO Sundar Pichai believes artificial intelligence is the most profound technology in the history of human civilization—potentially more important than the discovery of fire and electricity—and yet even he doesn’t fully understand how it works, Pichai said in an interview with CBS’s 60 Minutes that aired yesterday (April 16).

“We need to adapt as a society for it… This is going to impact every product across every company,” Pichai said of recent breakthroughs in A.I. in a conversation with CBS journalist Scott Pelley. It’s the Google CEO’s second long-form interview in two weeks, as he apparently embarks on a charm offensive with the press to establish himself and Google as thought leaders in A.I. after the company’s latest A.I. product received mixed reviews.

Google in February introduced Bard, an A.I. chatbot to compete with OpenAI’s ChatGPT and Microsoft’s new Bing, and recently made it available to the public. In an internal letter to Google employees in March, Pichai said the success of Bard will depend on public testing and cautioned that things could go wrong as the chatbot improves itself through interacting with users. He told Pelley that Google intends to deploy A.I. in a beneficial way, but suggested how A.I. develops might be beyond its creators’ control.

So far, Bard’s performance is impressive at times and confusing at others. During a test shown in yesterday’s program, Pelley gave Bard a simple prompt asking it to finish the six-word story famously attributed to Ernest Hemingway, and the result exceeded expectations. “I had a sense that I was meeting an intelligence that I had never conceived of and an intelligence I was sure I would never understand,” Pelley said of the result Bard generated.

A.I. errors known as ‘hallucinations’

In another test, however, Pelley asked Bard about inflation and received a response with suggestions for five books that, when he checked later, didn’t actually exist. In the A.I. industry, this type of error is called a hallucination. And engineers aren’t sure why they happen.

“There is an aspect of this which we call a ‘black box,’” Pichai said. “And you can’t quite tell why it said this, or why it got it wrong. We have some ideas, and our ability to understand this gets better over time. But that’s where the state of the art is.”

Pichai added that this is similar to how little scientists understand about how the human mind works. “We don’t have all the answers there yet, and the technology is moving fast. Does that keep me up at night? Absolutely,” he said.

In late March, Elon Musk and a group of tech leaders signed an open letter calling for a six-month pause in developing A.I. systems that are more powerful than OpenAI’s newly launched GPT-4.

“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” said the letter, authored by the Future of Life Institute, a nonprofit partly funded by Musk’s family foundation.

The open letter has received more than 26,000 signatures so far. Pichai isn’t one of the cosigners.
