Elon Musk believes that, at the current pace of advancement, A.I. will likely surpass human intelligence by 2030 and there’s a real chance of the technology ending humanity. But that doesn’t mean the future is all bleak. Speaking at a Silicon Valley event last week (March 19), the Tesla and SpaceX CEO warned that A.I. is “happening fast” and that “we want to try to steer [A.I.] in the most positive direction possible to increase the probability of a great future.”
Musk spoke during a fireside chat with Peter Diamandis at the Abundance 360 Summit, hosted by Singularity University, a Silicon Valley institution that counsels business leaders on bleeding-edge technologies. Diamandis is the founder of both Singularity University and the XPRIZE Foundation, a nonprofit that hosts science competitions, some of which are sponsored by Musk.
“It’s called singularity for a reason,” Musk said in reference to the event’s host. “When you have the advent of super intelligence, it’s very difficult to predict what will happen next—there’s some chance it will end humanity.” Musk added that he agreed with “A.I. godfather” Geoffrey Hinton that there’s a 10 to 20 percent probability of such an event taking place.
While acknowledging the risks of A.I. surpassing human intelligence, Musk also highlighted the potential for a positive outcome outweighing the negative, pointing to the title of Diamandis’ 2012 book, “Abundance: The Future Is Better Than You Think,” as a desirable result. The book portrays a future where A.I. and robotics drastically drive down the cost of goods and services, to the benefit of human society. Musk also brought up the Culture series by Scottish sci-fi author Iain M. Banks as the best-case scenario: a semi-utopian A.I. future.
Musk likened developing A.I. and artificial general intelligence (A.G.I.) that will have a positive impact on humankind to raising a child. He stressed the importance of fostering a truthful and ethical approach to A.I. development, drawing parallels to Stanley Kubrick’s 1968 film, 2001: A Space Odyssey.
“I think that’s incredibly important for A.I. safety is to have a maximum sort of truth-seeking and curious A.I.,” Musk said, adding that he believed achieving ultimate A.I. safety hinged on never forcing A.I. to lie even when confronted by an unpleasant truth.
A main plot point in 2001: A Space Odyssey was the A.I. being forced to lie, causing it to kill the crew of the spaceship. “So the lesson there is don’t force an A.I. to lie or do things that are axiomatically incompatible, but to do two things that are actually mutually possible,” the SpaceX CEO explained.
However, Musk pointed to various constraints that could slow the expansion of A.I., including the tight supply of A.I. chips seen last year and the growing demand for voltage step-down transformers, which convert high-voltage power to the lower voltages required by devices in homes and businesses. “That is literally the issue this year,” he said.
The discussion at one point touched on the concept of merging the neocortex of the human mind with the cloud. While Musk described the goal of uploading a person’s consciousness and memories to the cloud as a long way off, he touted his brain-computer interface startup Neuralink and its first human patient. A live demo with the patient, who is quadriplegic, was recently carried out in an FDA-approved trial. After receiving a brain implant, the patient was able to control the screen, play video games, download software or do anything else that’s possible when using a mouse just by thinking about it. “It’s going quite well. The first patient is actually able to control their computer just by thinking,” Musk said.
Musk said the expansion of A.I. may remove the barriers to creating a “whole brain interface,” but Neuralink is working toward that goal in the meantime.