How OpenAI’s Sam Altman And Nvidia’s Jensen Huang Think A.I. Should Be Regulated

Sam Altman proposed a regulatory sandbox approach while Jensen Huang emphasized "sovereign A.I."

OpenAI CEO Sam Altman and Nvidia CEO Jensen Huang. Getty Images

At the center of today’s buzzing artificial intelligence scene are two companies: OpenAI, the maker of ChatGPT, and Nvidia (NVDA), which supplies the vast majority of the chips powering A.I. applications built by developers like OpenAI. While both companies benefit hugely from the A.I. boom, their CEOs advocate global governance and guardrails for the rapidly advancing technology.


At the World Government Summit in Dubai this week, OpenAI CEO Sam Altman proposed a regulatory sandbox approach that encourages experimentation with A.I. so regulators can “see what makes sense and what doesn’t and then write the regulation around that,” Altman said in an onstage interview with H.E. Omar Al Olama, the UAE Minister for A.I., on Feb. 13.

In the past year, OpenAI has become synonymous with the A.I. industry as a whole, as its flagship products ChatGPT, DALL-E and Sora, a text-to-video generator unveiled yesterday (Feb. 15), have shown the world A.I.’s advanced capabilities.

Altman said every country is looking to create its own regulatory environment and that, at some point, a single global framework will be required to ensure countries are on the same page about how A.I. is deployed and used. As a precedent, he pointed to the International Atomic Energy Agency, the kind of body needed “to decide what will happen with the most powerful of these systems,” Altman said.

Altman hinted at the possibility of open-sourcing previous iterations of OpenAI’s models, such as GPT-3, as OpenAI has already done with GPT-2, to foster collaborative innovation.

Countries need to establish “sovereign A.I.”

Also speaking at the event was Nvidia CEO Jensen Huang, whose company’s market cap exceeded $1 trillion last year. He called on countries to develop what he described as “sovereign A.I.”

“This is the beginning of a new industrial revolution,” Huang said in an onstage fireside chat with Minister Al Olama on Feb. 12. “Every country needs to own the production of their intelligence.” Emphasizing the importance of each country owning its data, Huang explained: “[Your data] codifies your culture, your society’s intelligence, your history. Therefore, you must take your data, refine it, and create your own national intelligence.”

The Nvidia CEO emphasized the diverse applications of A.I. across multiple domains, from biology and physics to robotics, highlighting the need for comprehensive regulatory frameworks to ensure safety and ethical standards in A.I.-driven industries. “We have to develop the technology safely, we have to apply the technology safely, and we have to help people use the technology safely,” he said. “Whether it’s the plane that I came in, cars or medicine, all of these industries are heavily regulated today.”

As countries around the world navigate using A.I. to empower their growth, Altman and Huang emphasized the role of regulation in shaping the industry’s future.

Already, China and Brazil have proposed drafts to regulate A.I. in specific use cases, while Israel and Japan have issued clarifications of existing laws to protect consumer data. Italy, which imposed a temporary ChatGPT ban over data concerns last March, is also exploring further regulations to protect consumer privacy. In the U.S., 52 percent of Americans report feeling more concerned than excited about A.I. Much of the discussion has centered on protecting artists and writers whose work is used to train A.I. models, which can let consumers unknowingly use copyrighted work without crediting its original creators. Tech companies have also come under scrutiny over the rise of deepfakes and the misinformation A.I. could fuel in the coming election season.

While regulation is important, Huang and Altman both believe democratizing A.I. will open a new chapter in human creativity. For example, anyone can interact with A.I. today without the barrier of learning how to code, because the language of computers has “become human,” Huang said.

Altman echoed the sentiment: “If you think of everybody on Earth getting the resources of a company of hundreds of thousands of really competent people—an A.I. programmer, an A.I. lawyer, an A.I. marketer, an A.I. strategist, and not just one of those but many of each, and you get to decide how you want to use that to create whatever you want to create, the creative power of humanity with tools like that should be remarkable.”
