The advancement of artificial intelligence could lead to the end of human existence as we know it, according to an extremely brief open letter released yesterday (May 30) and signed by big tech executives, AI scientists, academics and, somewhat incongruously, the musician Grimes.
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the short statement reads. It was published by the Center for A.I. Safety (CAIS), a non-profit focused on the safe development and deployment of the technology.
The brief warning, which did not include contextual information regarding its grave supposition, follows two recent open letters alluding to potential dangers posed by A.I. research and implementation.
In March, more than 1,000 industry experts including Elon Musk and Apple co-founder Steve Wozniak signed a letter calling for a six-month pause on A.I. development to study the risks and develop safety protocols. The following month, another letter was published by members of the Association for the Advancement of Artificial Intelligence, including Microsoft (MSFT)’s chief scientific officer Eric Horvitz, warning of A.I.’s ability to empower bad actors.
The urgency of the two previous messages pales in comparison to that of CAIS’s succinct statement, which claims its goal is to “create common knowledge of the growing number of experts and public figures who also take some of advanced A.I.’s most severe risks seriously.”
Beyond Grimes, other unexpected supporters of the letter include podcaster Sam Harris, TED head Chris Anderson and Kersti Kaljulaid, the former president of the Republic of Estonia. The signatories who stand out, however, aren’t those who seem out of place but rather those who have built careers on the very technology they are now warning could cause extinction.
Sam Altman, CEO of OpenAI
Sam Altman, head of ChatGPT-maker OpenAI, signed the letter alongside co-founders Ilya Sutskever, John Schulman and three other OpenAI executives.
First launched in 2015 as a non-profit aiming to develop A.I. in a safe manner, the company pivoted to a hybrid “capped-profit” model in 2019. In November, it released A.I. chatbot ChatGPT, which quickly went viral and sparked concerns regarding its potential to write school assignments and replace the workforce.
OpenAI’s market valuation was between $27 billion and $29 billion as of April.
Despite his support for the letter’s urgent warning, Altman has previously praised the rise of increasingly sophisticated A.I. systems. “This revolution will create phenomenal wealth,” he said in a 2021 blog post. “The price of many kinds of labor (which drives the costs of goods and services) will fall toward zero once sufficiently powerful AI ‘joins the workforce.’”
Geoffrey Hinton, A.I. pioneer
Two members of a trio of computer scientists known as the “godfathers of A.I.” also pledged their support to the recent letter.
Signatories Geoffrey Hinton and Yoshua Bengio have worked in the A.I. field for decades and, with Yann LeCun, helped lay the foundation for the technology’s development. The three were given the Turing Award in 2018.
Hinton and Bengio, currently professors at the University of Toronto and the Université de Montréal respectively, have both expressed concerns over A.I.’s advancements.
Earlier this month, Hinton told the New York Times he quit his job at Google (GOOGL) to speak out about the risks of A.I., adding that he now regrets his life’s work. “It is hard to see how you can prevent the bad actors from using it for bad things,” he said.
Bengio expressed similar sentiments to the BBC today (May 31), revealing that A.I.’s threat has made him question his years of research. “You could say I feel lost,” he said.
Meanwhile LeCun, chief A.I. scientist at Meta, opted not to sign the letter.
Demis Hassabis, CEO of Google DeepMind
Several Google employees, including more than ten working for its A.I. subsidiary DeepMind, are signatories of the statement. The list includes DeepMind’s CEO and co-founder Demis Hassabis, who formerly worked as an A.I. programmer on video games at Lionhead and Elixir.
DeepMind was purchased by Google in 2014 for around $500 million. Hassabis has urged the company to proceed with caution and claims that artificial general intelligence (A.G.I.), a term referring to computers with cognitive abilities comparable to those of human beings, could emerge within the next decade.
“I would advocate not moving fast and breaking things,” Hassabis told Time in January, adding that he believes profits from A.G.I. should be redistributed evenly.
However, Hassabis’s division has been picking up steam in the past few months. Google DeepMind was formed in April, after Google combined its A.I.-focused divisions DeepMind and Brain to “accelerate progress towards a world where AI can help solve the biggest challenges facing humanity.” And earlier this year, Google announced in its Q4 2022 earnings report that DeepMind’s financial results will now be reported under Alphabet’s corporate costs as the company makes increasing use of the unit.
Grimes, musician and producer
Grimes, the eclectic musician and former romantic partner of OpenAI co-founder Musk, has experimented with the technology throughout her career, notably using it to create an A.I. lullaby for her son and an A.I. chatbot trained off her own mannerisms.
“I think A.I. is great,” she told the New York Times. “I just feel like, creatively, I think A.I. can replace humans.”
The most recent A.I. project from Grimes, whose real name is Claire Boucher, was released this month in the form of Elf.Tech, an A.I. software program that allows fans to duplicate her vocals to create their own songs. “I like the idea of open sourcing all art and killing copyright,” tweeted the artist, who is offering a 50 percent royalty split for successful songs stemming from the A.I. program.
Dario Amodei, CEO of Anthropic
Anthropic’s Dario Amodei supported the statement alongside co-founders Jared Kaplan and Chris Olah. His sister Daniela Amodei, Anthropic’s president, also signed the letter.
Amodei previously worked at Google, Baidu and OpenAI before leaving the latter company in 2020 to head his own A.I. startup, which was valued at $4.1 billion in March.
The Google-backed startup also raised $450 million in a funding round this month, the largest A.I. funding round of 2023 apart from Microsoft’s OpenAI investment earlier this year.
Anthropic’s primary product is Claude, a chatbot it began testing in January with a select group of users. Claude was trained with a set of moral guidelines, drawn from the U.N. Universal Declaration of Human Rights and Apple’s data privacy rules, in order to avoid offensive responses. The startup laid out its process of creating a “harmless A.I. assistant” in a December white paper.
Earlier this month, alongside other A.I. CEOs like Altman, Amodei met with President Biden to discuss regulation of the A.I. industry and how to mitigate its risks.
Kevin Scott, CTO at Microsoft
Executives from A.I. powerhouse Microsoft also signed the letter, including the company’s chief technology officer Kevin Scott.
Microsoft is an investor in OpenAI, a partnership it extended in January with a multi-billion-dollar deal, and the company powers its Bing search engine with ChatGPT.
Scott, who said 2023 will be “the most exciting year that the A.I. community has ever had,” claims Microsoft is improving its responsible A.I. processes designed to mitigate the technology’s harm.
But despite signing a statement arguing that A.I. has the potential to destroy human civilization, Scott authored the 2020 book Reprogramming the American Dream, which argues that A.I. can revolutionize entrepreneurship in rural towns. Microsoft has also heavily focused on A.I. in recent months, claiming the push will result in increased profits for the company over time.
What will yet another open letter accomplish?
The fact that so many signatories have financial ties to A.I. research and development companies has led some to wonder whether the recent calls for caution are a PR stunt or perhaps a way to gain a competitive advantage in an increasingly crowded industry.
Earlier this month, the Microsoft CTO was asked about the balance between regulations that favor big A.I. companies and a regulatory landscape that leaves room for smaller, competitive start-ups. “I think we should have both,” Scott told The Verge. “I don’t think there’s an a priori thing where one precludes the other.”
Altman has also called for more regulation of the A.I. industry, expressing concerns that the technology could be harnessed by authoritarian regimes or used for cyberattacks. Yet his desire for regulatory oversight appears selective: Altman previously claimed that OpenAI may leave the E.U. over the region’s potential “over-regulating” of A.I.
Focusing on the upsetting-but-unlikely consequences of A.I. implementation shifts regulators’ attention away from actual risks posed by artificial intelligence, suggested University of Washington law professor Ryan Calo in a series of tweets. “Addressing the immediate impacts of AI on labor, privacy, or the environment is costly. Protecting against AI somehow ‘waking up’ is not.”