A.I. Leaders Are Divided on Whether and How to Regulate the Tech

What the most influential CEOs, investors and industry veterans have said about regulating A.I.

A.I. leaders haven’t found a consensus on how to regulate the technology. Getty Images

The tech community has long debated the possibility of artificial intelligence outsmarting human beings. An equally contentious question is whether governments should step in to regulate the rapidly advancing technology before it’s too late.

No country has yet established a comprehensive legal framework around A.I. In the U.S., lawmakers are seeking input from industry players by hosting public hearings and private meetings with tech executives like OpenAI CEO Sam Altman and Google CEO Sundar Pichai.

Tech leaders are divided on how A.I. should be regulated, if at all. While some support the idea of a government agency dedicated to licensing new A.I. products, others say the technology shouldn’t be restricted in any way. And those in the middle believe regulation should apply only to large companies, not to startups.

Here is what the most influential CEOs, investors and industry veterans have said about the issue.

OpenAI CEO Sam Altman: “Regulation of A.I. is essential”

The OpenAI CEO is all for regulating A.I. During a Congressional hearing in May, Altman urged lawmakers to police the emerging technology to “mitigate the risks of increasingly powerful models.”

Altman recommended that the government create a new agency to test and license new A.I. models. He proposed establishing a set of safety standards and having independent auditors test new models before they are allowed to be deployed.

However, Altman believes regulation should only apply to large A.I. firms like his own. “We have explicitly said there should be no regulation on smaller companies. The only regulation we have called for is on ourselves and people bigger,” he said at an event in India yesterday (June 7).

Google CEO Sundar Pichai: “A.I. is too important not to regulate.”

While Google (GOOGL) sees OpenAI (and its investor Microsoft (MSFT)) as its largest competitor in A.I., its CEO Sundar Pichai is on the same side as Altman when it comes to regulation. Last month, Pichai and Altman were among the tech executives invited to the White House to discuss A.I. safety with senior officials of the Biden administration.

In an op-ed for the Financial Times on May 22, Pichai called A.I. “the most profound technology humanity is working on today.”

“A.I. needs to be regulated in a way that balances innovation and potential harms,” he wrote. “I still believe A.I. is too important not to regulate, and too important not to regulate well.”

Pichai stressed the importance of international cooperation, adding that the U.S. and Europe must work together on future A.I. regulation.

a16z cofounder Marc Andreessen: Companies should “build A.I. as fast and aggressively as they can.”

In stark contrast to Altman and Pichai, Marc Andreessen, cofounder of the venture capital powerhouse Andreessen Horowitz, believes regulation could eventually harm innovation and competition. In a blog post on June 6, he argued A.I. companies should be allowed to build “as fast and aggressively as they can” without restrictions or assistance from any government.

“This will maximize the technological and societal payoff from the amazing capabilities of these companies, which are jewels of modern capitalism,” Andreessen wrote.

Elon Musk: “A.I. developers must work with policymakers.”

Elon Musk cofounded OpenAI with Sam Altman in 2015. The tech billionaire is also advocating for government intervention before A.I. becomes too advanced to be regulated.

In March, Musk and more than 1,000 tech leaders signed an open letter calling for a six-month pause in A.I. development. “A.I. developers must work with policymakers to dramatically accelerate the development of robust A.I. governance systems,” the letter said. “These should at a minimum include new and capable regulatory authorities dedicated to A.I.”

Microsoft President Brad Smith: A.I. models should be “developed safely” and “protected from security threats.”

Brad Smith doesn’t believe a six-month pause in A.I. development is realistic, but he supports the idea of setting up a government agency to regulate A.I.

“Something that would ensure not only that these models are developed safely, but they’re deployed in large data centers where they can be protected from cybersecurity, physical security and national security threats,” Smith said on CBS’ “Face the Nation” on May 28.

Smith also proposed an executive order requiring the U.S. government to buy A.I. services only from companies with safety protocols in place. Microsoft is the largest corporate backer of OpenAI and incorporates OpenAI’s GPT language model into its Bing search engine.

Bill Gates: “A.I.s have to be tested very carefully and properly regulated.”

The retired Microsoft cofounder recently co-signed a statement by the nonprofit Center for AI Safety warning about the risk of extinction from A.I. if the technology is not properly regulated.

In a blog post in March, Gates wrote, “A.I.s have to be tested very carefully and properly regulated,” referring to A.I. applications in high-impact areas like health care, “which means it will take longer for them to be adopted than in other areas.”

In the lengthy post, Gates called A.I. the most revolutionary technology he has seen since the personal computer breakthroughs of the 1980s.
