Siri Co-Inventor: The Internet Is a Vast Psychology Experiment—And It Scares Me

The amazing success of Siri and the resulting stranglehold AI-powered technology has on humans’ day-to-day lives makes Siri co-inventor Tom Gruber extremely nervous. Oli Scarff/Getty Images

Tom Gruber is a vastly successful psychologist—possibly one of the most successful of all time. That's because a creation of his happens to be a very large, ongoing and continuously expanding experiment. If you have an iPhone in your pocket or in your hand, you co-exist with his creation. You may not be able to live without it. And that, Gruber recently told Willamette Week, isn't good!

Gruber is a co-inventor of Siri, the artificial intelligence-powered “assistant” that uses machine learning to answer billions of queries every week, according to the Computer History Museum. Gruber and his partners sold Siri, a Norse term that roughly translates as “beautiful woman who leads you to victory,” in 2010 to Apple for a reported $200 million.


In the years since, Siri has become near-ubiquitous. And where you cannot find Siri, you may find a clone, like Amazon’s Alexa or whatever Android-powered genie you summon with the magic words, “Hey, Google.”

The amazing success of Siri and the resulting stranglehold AI-powered technology has on humans' day-to-day lives makes Gruber extremely nervous, he recently told the Portland-based alt-weekly. Like a digital Dr. Frankenstein, Gruber is increasingly wary of, and horrified by, what he hath wrought—a "science experiment gone wrong," according to the paper.

In certain areas, "AI can already demonstrably outperform humans," he said in a talk he gave last year in London. And "it's one thing to create a product, but it's another thing to have an entire generation transformed by this technology."

Tom Gruber, former chief of Apple's Siri digital assistant team, speaks at the TED Conference in Vancouver, Canada, on April 25, 2017. GLENN CHAPMAN/AFP via Getty Images

“Our millennials check their phones 150 times a day,” he noted in a recent interview he gave WW ahead of a lecture on AI he plans to deliver at TechfestNW. (Since his exit from Apple in 2018, Gruber has spent much of his time on the lecture circuit, delivering a 2017 TED talk as well.) So far, rather than fix humanity’s ills—literal or spiritual—AI’s main contribution to the species is that it has “shown that if you want to get two billion people addicted to something that’s not good for them, you can do it,” he told the paper.

The analogy may not be perfect, but Gruber compared the devotees of the world's religions, who pray at most five times a day or merely attend services once a week, to the adherents of technology companies, with their billions of users logging on throughout the day, every day. By that measure, Google and Facebook are the world's biggest religions. So who does that make God, and who are the prophets? And which of them are machines—and if they're all machines, what does it all mean?

The "uncanny valley" is the term coined to describe the unease people feel toward machines that look or behave almost, but not quite, like humans. By some metrics, the gap between human and machine has narrowed considerably. As Gruber has pointed out (and many agree), AI-powered medical diagnostics are outperforming human doctors, and AI-powered marketing is very, very good at getting humans to buy things.

More recently, AI creep has appeared in the "humanities." An AI-generated picture sold at auction at Christie's last year for $432,500. One of Gruber's recent projects, an "AI music startup" called LifeScore, of which he is co-founder and CTO, promises to "make" music that sounds "just like a human created it." Another is "Humanistic AI," an effort in which Gruber helps companies use machine learning to cooperate harmlessly with humans rather than supplant or dominate the species.

Despite all this, Gruber remains an AI optimist—because, he pointed out, he’s a human optimist. Facebook’s programming is bad at discerning real news from fake news—but its employees are pretty good at it. Twitter is policed by people, not machines.

While the endgame for most large tech firms very clearly seems to be eliminating "the human factor"—self-driving cars, digital assistants—people remain the best option to "police" social networks. Likewise, if people use AI as a prosthesis or a supplement rather than a replacement, there may be hope for the future of both.

When AI is abused, he told WW, it is abused to benefit humans—certain humans, those concerned with advertising dollars going up or human-resources costs going down.

Thus, the only “real” problem with artificial intelligence appears to be human nature. And what could go wrong there?
