AUSTIN, Tex.—In I, Robot, Will Smith saved humanity from an artificial intelligence (AI) that went much too far in order to keep humans safe: a computer that reached the logical conclusion that only violence could protect us. It’s a line of thinking that Dr. Douglas Lenat, founder of Cycorp, might describe as “autistic.”
That was a word Mr. Lenat used several times at one of South by Southwest’s (SXSW) earliest events today to describe artificial intelligence as it exists now. He and three other experts discussed the future of machines that understand during a panel called “Big Data and AI,” sponsored by Umbel, a marketing data company, at Austin City Limits. A consensus formed quickly that artificial intelligence will bring us to a future where we will live as Humans 2.0—that is, people whose decisions are made vastly better with guidance from computers that understand.
One panelist, Kris Hammond of NarrativeScience, said, “If there’s one place where we can look at human beings and say we are magnificent, it’s in the fact that we can understand each other.” Recognizing that a cat is a cat in different contexts is nothing compared to making sense of a paragraph of text, he said. Machines right now, the panel agreed, can find astounding insights across pools of data, but the pieces of that data are just objects to the computers. None of it means anything to them, yet.
That said, Mr. Hammond opened by articulating an optimistic vision where, he said, humans would not need to look at spreadsheets anymore. The machines could do the looking for them and digest it in ways that would give humans the insights they wanted from spreadsheets.
But Dr. Lenat came back quickly and said he profoundly disagreed. A machine, he explained, might be taught that lying is bad, but it wouldn’t understand that lying is better than burning a person, which is better than cutting their hand off. He extended Mr. Hammond’s optimistic view of artificial intelligence to robots in the home to show that in real life they start to go awry.
It’s a very different thing, he said, for a robot to vacuum than it is for a robot to cook, mow or feed the baby. “They’ll just want to mow the baby,” Dr. Lenat said.
“It’s not an accident that human brains have evolved left and right hemispheres,” he explained. Many computer scientists seem to think that AI could be perfect if it only had enough data, but data will never be enough. The human brain has one side that crunches data, the analytical side. The other half sits back and looks at the big picture; it asks philosophical questions and decides whether its conclusions are not only logical but also just.
On the flip side, Dr. Lenat said that humans are also very good at rationalizing. We can construct a story to justify all kinds of bad ideas. Mr. Hammond took that a step further, saying that humans can often be shown that a decision was the wrong one and stand by it anyway, perhaps because of laziness, pride or stubbornness.
This tension helped to form the panel’s consensus around the idea of Humans 2.0, a world in which agile and ubiquitous AI can understand the problems of individuals and groups and provide them with insights to make better decisions faster.
If that sounds hopeful, don’t get too excited; what humans will earn money doing deep in this future is still an open question. “From my perspective, the data scientist is one of the next jobs we will automate,” Mr. Hammond said. Too bad for the folks creating AI, but “I don’t think there is anything a human can do that a machine won’t be able to do.” That includes creativity and intuition (a part of his company’s work is turning data sets into the written word).
The optimist would say that in a future where machines can do so much, it won’t matter that machines do most of the work because people won’t be expected to work as much in an age of plenty.
The pessimist would say that idea sounded nice when Karl Marx said it, too, but history has yet to show an instance of a world so generous.