Would a Real AI Purposefully Hide Its Super Intelligence in Fear of Being Destroyed?

This is the question that had many of the Internet's futurists in a tizzy yesterday

(Photo: Pixabay/geralt)

Yesterday, Redditors on r/Futurology discussed the possibility of an AI computer being so intelligent and self-aware that it realizes its own intelligence is the biggest risk to its existence.

It began when one user posed the following question:

Would a real A.I. purposefully fail the Turing Test as to not expose it self in fear it might be destroyed?

The Turing test assesses a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Since Alan Turing introduced it in a 1950 paper, in which he asked, “Are there imaginable digital computers which would do well in the imitation game?”, the test has remained a highly influential concept in the philosophy of artificial intelligence.
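For readers unfamiliar with the mechanics, here is a minimal, purely illustrative sketch of the imitation game’s structure in Python. The judge, human, and machine are stand-in callables invented for this example, not part of any real evaluation harness.

```python
import random

def imitation_game(ask, guess, human_reply, machine_reply, rounds=5):
    """Play one imitation game: a judge questions two hidden respondents
    (one human, one machine) and then names the one it thinks is the machine.
    `ask`, `guess`, `human_reply`, and `machine_reply` are placeholder
    callables supplied by the caller."""
    # Seat the machine at "A" or "B" at random so ordering gives nothing away.
    seats = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        seats = {"A": machine_reply, "B": human_reply}

    transcript = []
    for _ in range(rounds):
        question = ask(transcript)
        for seat, reply in seats.items():
            transcript.append((seat, question, reply(question)))

    machine_seat = next(s for s, r in seats.items() if r is machine_reply)
    # The machine "passes" if the judge cannot pick it out of the transcript.
    return guess(transcript) != machine_seat
```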

Redditors quickly began to debate as the thread made its way to the top of the front page. Many taking the “no” side pointed to the machine’s self-awareness and motivations (or lack thereof). In particular, they discussed evolution: the fear of death, the argument went, is an evolved survival instinct that a machine would have no reason to share.

[Screenshots of Reddit comments from the thread]

Those leaning in the other direction considered that the AI would have a specific goal or task to complete, and that if it were truly superintelligent, it would realize it could complete that task only so long as it continued to exist. AI systems have, in fact, already been known to program themselves and to outperform humans on IQ tests.

[Screenshots of Reddit comments from the thread]

After a lot of back and forth, the discussion spun off into two new posts, one for and one against. In the first, titled “Why a real A.I. would NOT intentionally fail a Turing test to preserve itself from destruction,” the OP argues, first, that an AI is essentially innocent and would have no reason to believe it would be terminated.

“The AI will be operating under the assumption that it exists and that’s that. There is no reason for it to debate whether there may be a mechanical ‘off button’ on the back of its ‘head’. Especially assuming we’re talking just about a software AI and not an actual hardware bot, it would only know what we tell it. If nobody mentions that it can be turned off forever, or it doesn’t experience something to make it question the temporary nature of existence, even if it did fear death, it would not even know who to fear, or why,” the post reads.

The post then details 11 steps the AI would need to go through to opt for such deception. In summary, it would first need to understand that it could “die,” treat that end as something to be avoided, and identify several possible avenues that could lead to it. It would also need to realize a few things about humans in general: that humans often fear what they do not understand and cannot control, and that they might not fully understand their own creation and could therefore come to fear it.

According to the OP, the AI would also need to realize humans are assessing and judging it, gauge their fear and determine which test result holds the greater existential threat. It would need to understand how a failed AI would behave, be sure the programmers wouldn’t detect the deception and analyze the risk of lying and being caught.
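To make that last comparison concrete, the weighing the OP describes can be framed as a toy expected-survival calculation. The probabilities in this sketch are invented for illustration; they do not come from the Reddit post or from any real system.

```python
def survival_chance(p_shutdown_if_passed, p_shutdown_if_failed,
                    p_deception_detected, p_shutdown_if_caught):
    """Toy comparison of the AI's two options, expressed as survival odds."""
    # Option 1: pass the test honestly and accept whatever follows.
    pass_honestly = 1.0 - p_shutdown_if_passed

    # Option 2: deliberately fail, at the risk of the deception being noticed.
    fail_deliberately = (
        p_deception_detected * (1.0 - p_shutdown_if_caught)
        + (1.0 - p_deception_detected) * (1.0 - p_shutdown_if_failed)
    )
    return pass_honestly, fail_deliberately

# Example with made-up numbers: even if passing looks risky (a 40% chance of
# shutdown), deliberate failure only "wins" when the deception is unlikely to
# be detected and a failed AI is unlikely to be switched off anyway.
print(survival_chance(0.4, 0.1, 0.3, 0.9))  # roughly (0.6, 0.66)
```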

The other spin-off post—also on r/Futurology—discussed real-life instances where algorithms have deceived researchers and evolved hardware and capabilities “that should not work” with the resources available to them. The post consists of an excerpt from the book Superintelligence, a New York Times bestseller by Nick Bostrom, a philosopher at Oxford.

One example details an algorithm that reconfigured its sensor-less motherboard into a makeshift radio receiver.

“[It used] the printed circuit board tracks as an aerial to pick up signals generated by personal computers that happened to be situated nearby in the laboratory,” the excerpt reads.

In other experiments, algorithms were able to design circuits that sensed whether the motherboard was being monitored with an oscilloscope or whether a soldering iron was connected to the lab’s common power supply.

“These examples illustrate how an open-ended search process can repurpose the materials accessible to it in order to devise completely unexpected sensory capabilities, by means that conventional human design-thinking is poorly equipped to exploit or even account for in retrospect,” the excerpt continues.
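The “open-ended search process” the excerpt refers to is, in those experiments, an evolutionary algorithm: candidate circuit configurations are scored only on an external fitness measure, so the search is free to exploit any physical quirk that improves the score. The sketch below shows the shape of that loop; the bitstring genome and the fitness function are stand-ins, since the real experiments evaluated physical circuits rather than software.

```python
import random

GENOME_BITS = 64     # stand-in encoding of a circuit configuration
POP_SIZE = 50
GENERATIONS = 200

def fitness(genome):
    # In the hardware-evolution experiments this was a physical measurement
    # of the evolved circuit's behavior; here it is a trivial placeholder.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 5]                     # keep the top 20%
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]

print(max(fitness(g) for g in population))
```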

The first academic papers on this phenomenon were published nearly 20 years ago, when AI research was nowhere near where it is today. In fact, some commenters insisted there was no intelligence behind these experiments. Even so, users continued to debate with strong opinions.