Renowned Yale Professor Jeffrey Sonnenfeld Discusses CEOs’ Fear and Confusion of A.I.

In a survey of 119 CEOs, almost 90 percent say the potential opportunity of A.I. is not overstated.

Jeffrey Sonnenfeld’s Chief Executive Leadership Institute at Yale is the world’s first school for CEOs. Noam Galai/Getty Images

Jeffrey Sonnenfeld, the Yale management professor who has been tracking the presence of American businesses in Russia since the beginning of the Ukraine war, recently asked more than 100 CEOs from various industries their thoughts on artificial intelligence’s potential impact on their business. The findings were surprising.


At a virtual event this week hosted by Sonnenfeld’s Chief Executive Leadership Institute, a research and educational group affiliated with Yale University, the professor distributed a survey to 119 CEOs from various industries containing questions like whether A.I. is overhyped and if it’s a potential threat to humanity.

Respondents include Walmart CEO Doug McMillon, Coca-Cola’s James Quincey, Zoom founder and CEO Eric Yuan and business leaders in manufacturing, pharmaceuticals and media.

Most CEOs are excited about A.I., but many fear it may destroy humanity

Over 40 percent of the CEOs polled believe A.I. could destroy humanity within the next decade, Sonnenfeld’s survey found. Over 30 percent of respondents said this could happen in the next 10 years, and 8 percent believe the day could come in five years. Almost 60 percent of respondents said they were “not worried” about A.I. being a threat to humanity. Over 40 percent think the dangers of A.I. are overstated.

When asked about the positive impact of A.I., the CEOs found more consensus: almost 90 percent say the potential opportunity of A.I. is not overstated. However, they are not entirely convinced of A.I.’s business potential. “There is a sense that there’s too much money going into the valuation [of AI companies],” Sonnenfeld told Observer. “And there’s a lot of questioning about the fundamental business models.”

Younger CEOs are more confused about A.I. than older ones

In previous surveys about emerging technologies, such as cryptocurrency, Sonnenfeld had found that younger executives (under 40) tended to be more knowledgeable about the subject than older ones. He was surprised to see almost the opposite with A.I.

“Unlike with cryptocurrency, where the older CEOs are more confused, in this case, a lot of the younger CEOs don’t seem to know what they are talking about—more than the older CEOs that have stronger tech backgrounds,” he said. “They are more eager to speak the language than understand the technology and where it could be most useful.”

From the survey and his conversations with CEOs, Sonnenfeld observed many young CEOs loved talking about the use of A.I. tools in marketing and advertising instead of more impactful areas like healthcare and manufacturing.

A “cautious optimist” of A.I.’s future

On the spectrum from being extremely optimistic about A.I. to strictly against the technology, Sonnenfeld categorizes the CEOs he surveyed into five groups:

  • “Curious creators” argue everything you can do, you should do. (Venture capitalist Marc Andreessen recently expressed a similar view in a blog post about A.I.)
  • “Euphoric true believers” only see the good in technology.
  • “Commercial profiteers” don’t necessarily understand the new technology but are enthusiastically seeking to cash in on the hype.
  • “Alarmist activists” advocate for restricting A.I.
  • “Global governance advocates” support regulation and crackdowns where necessary.

Sonnenfeld sees himself as a “cautious optimist” on the matter, he told Observer. “It’s very similar to what we saw with social media, biotech and nuclear energy,” he said. “As Robert Oppenheimer warned us, it can be very dangerous to think that technology only takes us to the best of the world.”

To minimize the potential harm of A.I., Sonnenfeld suggested establishing legal guidance around the technology, something similar to the Nuclear Non-Proliferation Treaty signed in 1968 to limit the irresponsible spread of nuclear weapons.
