Last year, a New York Times reporter wrote about calling an American Express call center and becoming convinced that he was talking to a robot, rather than a person. It turns out that he was speaking to a real person, but Prof. Brett Frischmann suggests that she might have been a person working from such a sophisticated script that she was working robotically. If a person does work in which their every move is dictated by a machine, aren’t they a little bit machine?
Prof. Frischmann, a member of the faculty at the Cardozo School of Law, wants readers to consider whether automating life is making us more robotic. He writes about it in a paper he’s circulating on the Social Science Research Network, “Human-Focused Turing Tests: A Framework for Judging Nudging and Techno-Social Engineering of Human Beings.” He revisits the Turing test by inverting it: not as a way to discern when machines have become like humans, but as a way to discern when humans have become like machines.
“When technology is bad for us immediately, we pick up on the harms immediately, but when it has subtle effects we often don’t stop to assess,” he told the Observer in an interview at his office.
The Turing test was proposed by Alan Turing in 1950 to determine whether machines have begun to demonstrate intelligence. Basically, if a computer can have a text-based chat with a human and convince the human it is not a computer, then the machine is intelligent (in Turing’s framework). In his paper, Prof. Frischmann writes:
The conventional Turing test concerning artificial intelligence focuses on a machine and asks whether the subject is (in)distinguishable from a human being. In a sense, the Turing test establishes an elusive endpoint to which AI experts and others may strive; it is a finish line. But racing to make intelligent machines is only half of the relevant picture. Another race is occurring, but we don’t pay much attention to it, except in science fiction. It occurs on the other side of what I call the Turing line, the human side.
His paper anticipates Being Human in the 21st Century, a book he’s working on with technology philosopher Evan Selinger, due out in 2017 from Cambridge University Press.
In the paper, the Turing line expresses the idea that there’s a point at which being a machine and being a human meet: a boundary across which machines could become like humans and humans could become like machines.
Some of the most compelling points he makes along these lines concern workplaces, such as the call center mentioned above. As we spoke, he also referenced the stories about warehouses serving companies like Amazon, such as the one that Mac McClelland described in her 2012 Mother Jones story, “I Was a Warehouse Wage Slave.” One warehouse company is even using wearable tracking devices to monitor worker efficiency, as reported by The Irish Independent.
If workers are monitored scrupulously and policed to ensure that they work in a prescribed fashion that maximizes efficiency, how are they really different from robots in any way other than the fact that their innards are guts rather than gears?
As we spoke, though, Prof. Frischmann argued that we should ask the same questions about how we are starting to behave in our day-to-day lives. More and more of our memories, decisions and skills are being outsourced to mobile technology, a point he and his book collaborator recently broke down in The Guardian. So, for example, while wayfinding—the skill of finding one’s way from point A to point B, even in an unfamiliar place—was once a skill any normal adult would have a set of strategies for, today we simply outsource those tasks to GPS.
In an “always on” world of constant connection, we may be limiting our opportunities for learning and personal growth.
“If we’re building a world to minimize transaction costs and control things to minimize the downside and maximize the upside, it becomes something we’ve sort of done to ourselves,” Mr. Frischmann said. “I genuinely think that the big 21st century constitutional issue is the freedom to be off.”
In the paper, he asks readers if they would voluntarily plug into a machine if it would make them objectively happy all the time. It’s an idea philosopher Robert Nozick first raised in 1974, in a thought experiment he called “The Experience Machine.”
Mr. Frischmann and Mr. Selinger’s forthcoming book will open with some version of “Welcome to the Experience Machine 2.0,” he said. “The idea of the ‘Experience Machine 2.0’ is that it’s not a machine one plugs into. It’s environmental.” If one day technology optimizes away all the pain points of living, then what room is left for the failures and synchronicities that make life authentic?
Mr. Frischmann doesn’t want to be a complete technological pessimist. He simply wants people to consent to the ways in which technology is restructuring our choices. Technology has been changing us since the dawn of time, he said. Once upon a time, lots of people knew how to fish with a spear. Now, hardly anyone does, but lots of people fish successfully with a rod and reel. Or a net.
Does that mean that fishing rods have made us less human? Or that we’ve lost the opportunity to grow as people who fish with spears?
Maybe not, but consumers of technology should understand that our devices constantly nudge us in new ways, and some of those nudges take away our autonomy.
For example, have you ever been sitting at a bar with someone when they get a message on their phone, and they apologize as they stop speaking to you to answer an email quickly? That’s a nudge that wasn’t possible in the pre-mobile era.
Imagine the day when commuters are able to ride to work in self-driving cars. Will they use that time to catch up on TV shows or call their moms? Or will technology nudge them to eke out just a bit more work on their way to work?
And will they have a choice in the matter, or will they just think they had one?