Elon Musk has set up a new company called X.AI aimed at creating an artificial intelligence chatbot called TruthGPT to rival OpenAI’s ChatGPT and similar applications developed by Microsoft and Google.
In an interview with Fox News’s Tucker Carlson that aired yesterday (April 17), the Tesla and SpaceX CEO said A.I. development is currently dominated by Google and Microsoft, the latter a major investor in OpenAI, and that the world needs a third option.
“I’m starting very late in the game, of course. But I will create a third option that hopefully does more good than harm. This might be the best path to A.I. safety,” Musk said. He didn’t elaborate on how his version of A.I. would differ from existing products.
Musk recently registered a company named X.AI Corp in Nevada, the Wall Street Journal reported on April 14. He has also been poaching researchers from Google, according to Reuters. X.AI lists Musk as the sole director and Jared Birchall, his personal money manager, as a secretary. Birchall also serves as the director of Neuralink, a biotech company founded by Musk.
X.AI is the sixth company Musk publicly owns. Aside from Tesla, SpaceX, Twitter and Neuralink, he is also the CEO of the Boring Company, a tunneling startup tied to his hyperloop ambitions. In addition, he is associated with at least eight other companies, according to a New York Times report from May 2022.
In the Fox interview, Musk criticized OpenAI for becoming the opposite of what he envisioned when he cofounded the company in 2015 with its current CEO, Sam Altman. Musk said he created OpenAI as a counterweight to Google, whose CEO at the time, Larry Page, didn’t take A.I. safety seriously enough. OpenAI was set up as an open-source nonprofit with a mission to develop A.I. that benefits all of humanity.
“They are now closed-source, for-profit and closely allied with Microsoft,” Musk said. He stepped down from OpenAI’s board in 2018 to focus on Tesla and SpaceX.
Last month, Musk co-signed an open letter calling for a six-month pause on A.I. training in the U.S. and the development of a shared safety protocol within the industry.
“A.I. is more dangerous than, say, mismanaged aircraft design or bad car production. It has the potential of civilizational destruction,” he told Carlson. “What happens when something vastly smarter than the smartest person comes along in silicon form? It’s very difficult to predict what will happen in that circumstance. So I think we should be cautious.”
Google CEO Sundar Pichai, although he does not support a pause in A.I. development, has expressed similar worries that A.I. might be getting too smart too quickly.
“There is an aspect of this which we call a ‘black box.’ And you can’t quite tell why it said this, or why it got it wrong,” Pichai said of Google’s new A.I. chatbot Bard in an interview with CBS this week. “We don’t have all the answers there yet, and the technology is moving fast. Does that keep me up at night? Absolutely.”