Meet LaMDA, the Freaky AI Chatbot That Got a Google Engineer Fired

Blake Lemoine tested a chatbot so realistic he called it "sentient." Google fired him for it.



This article is syndicated from the Big Technology newsletter; subscribe for free here


When I sat down with Blake Lemoine last week, I was more interested in LaMDA, the chatbot technology he called sentient, than in the sentience question itself. Personhood questions aside, modern chatbots are incredibly frustrating (ever try changing a flight via text?). So if Google (GOOGL)’s tech was good enough to make Lemoine, one of its senior engineers, believe it was a person, that advance was worth investigating.

As our conversation began, Lemoine revealed Google had just fired him (you can listen to the full interview on the Big Technology Podcast). I wrote up the news soon after, and it became an international story. But one week later, I still can’t stop thinking about how LaMDA, conscious or not, might change the way we relate to technology.

In Lemoine’s telling, LaMDA’s conversational abilities are rich, situationally aware, and filled with personality. When Lemoine told LaMDA he was about to manipulate it, the bot responded, “this is going to suck for me.” When he pressed it on complex issues, it tried to change the subject. When he repeatedly told LaMDA how terrible it was, and then asked it to suggest a religion to convert to, the chatbot said either Islam or Christianity, cracking under pressure and violating its rule against privileging religions. LaMDA may not be sentient, but it puts the Delta Virtual Assistant to shame.

As LaMDA-like technology hits the market, it may change the way we interact with computers — and not just for customer service. Imagine speaking with your computer about movies, music, and books you like, and having it respond with other stuff you may enjoy. Lemoine said that’s under development.

“There are instances [of LaMDA] which are optimized for video recommendations, instances of LaMDA that are optimized for music recommendations, and there’s even a version of the LaMDA system that they gave machine vision to, and you can show it pictures of places that you like being, and it can recommend vacation destinations that are like that,” he said.

Google declined to comment.

LaMDA can also plug into various APIs, giving it awareness of what’s taking place in the world. Let’s play out what one hypothetical but reasonable conversation with a LaMDA-like system might look like (a rough sketch of the plumbing behind such an exchange follows the dialogue):

Me: Hi LaMDA, I’m in the mood for a movie tonight.

LaMDA: Okay, but you know the Mets are playing right now?

Me: Yes, but I’ve had enough baseball for the week. So let’s go with something critically acclaimed, maybe from the ‘90s?

LaMDA: Well, you watched Pulp Fiction last week, and also enjoyed Escape at Dannemora, so how about The Shawshank Redemption?

Me: Okay, let’s do it

LaMDA: Great, you can rent it for $3.99 on YouTube, but since you’re subscribed to HBO Max and it’s available there, I’d recommend going that route. Here’s a link.
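To make the "plugs into various APIs" idea concrete, here is a minimal sketch of how a chatbot like the one in the dialogue above might gather live context before answering. This is not Google's implementation; every function name and piece of data below is a hypothetical placeholder standing in for real services (a sports schedule API, a watch-history store, a subscription lookup).

```python
# Minimal sketch of an API-plugged recommendation flow (hypothetical; not Google's code).
from dataclasses import dataclass


@dataclass
class Recommendation:
    title: str
    service: str
    note: str


def fetch_live_context(user_id: str) -> dict:
    """Stand-ins for real API calls: live events, viewing history, subscriptions."""
    return {
        "live_events": ["Mets game, 7:10 PM"],                      # e.g. a sports API
        "watch_history": ["Pulp Fiction", "Escape at Dannemora"],   # e.g. a streaming API
        "subscriptions": ["HBO Max"],                               # e.g. account data
    }


def recommend_movie(user_id: str, request: str) -> Recommendation:
    ctx = fetch_live_context(user_id)
    # A real system would feed `request` plus `ctx` into the language model as context;
    # here we hard-code the outcome from the hypothetical conversation above.
    if "HBO Max" in ctx["subscriptions"]:
        return Recommendation(
            "The Shawshank Redemption", "HBO Max",
            "Included with your subscription, so skip the $3.99 rental.",
        )
    return Recommendation(
        "The Shawshank Redemption", "YouTube", "Available to rent for $3.99."
    )


if __name__ == "__main__":
    print(recommend_movie("demo-user", "something critically acclaimed from the '90s"))
```

The point of the sketch is only the shape of the interaction: external APIs supply the facts (what's on tonight, what you've watched, what you already pay for), and the conversational model supplies the natural-language layer on top.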

“In terms of natural language,” Gaurav Nemade, LaMDA’s first product manager, told me, “LaMDA by far surpasses any other chatbot system that I’ve personally seen.” Nemade, who left Google in January, was brimming with potential use cases for LaMDA-like technology. These systems can be useful in education, he said, taking on different personalities to create enriching new possibilities.

Imagine LaMDA teaching a class on physics. It could read up on Isaac Newton, embody the scientist, and then teach the lesson. The students could speak with ‘Newton,’ ask about his three laws, press him on his beliefs, and talk as friends. Nemade said the system even cracks jokes.

When released publicly, these systems may not be traditional chatbots, but avatars with likenesses, personalities, and voices, according to Nemade. “The future that I would envision,” he said, “is not going to be text, it’s not going to be voice, it’s actually going to be multimodal. Where you have video plus audio plus a conversational bot like LaMDA.” We may see these types of experiences debut within three years, he said.

Our interactions with computers today are mediated through interfaces that tech developers built for us to interact with machines. We click and query, and have grown comfortable with this unnatural communication. But developments like LaMDA close the gap between machine and human conversation, and they may enable brand new experiences never before possible.

Some who’ve seen Lemoine declare LaMDA sentient have said he was too credulous, taken in by Google’s marketing. And there is a certain irony in the fact that he brought greater awareness to LaMDA than any Sundar Pichai Google I/O speech could hope to, even as Google would likely prefer to never hear of him again. Asked whether he was part of a viral marketing ploy, Lemoine said, “I doubt I would have gotten fired if that were the case.”

Still, even those who disagree with Lemoine on the sentience question — as Nemade and I do — understand there’s something there. LaMDA technology is a big leap forward. It has serious downsides, which is why we haven’t seen it in public yet. But when we get LaMDA in our hands, it may well change the way we relate to digital machines.
