Ex-Google Engineer Warns AI ‘Killer Robots’ Could Cause Catastrophes

Let it be said, we’ve been warned.


Remember back in March of 2018, when a younger Elon Musk addressed a crowd at SXSW and stated, "Mark my words—AI is far more dangerous than nukes."


Yes, a more youthful Musk even went so far as to call those who push back against his warnings "fools," arguing that they lack the insight or oversight to know whether everyone is developing artificial intelligence (AI) safely, particularly when it comes to weapons.

True. We can’t even confirm that a hacked light bulb connected to Wi-Fi won’t lead to your bank account being drained, let alone predict what catastrophic scenarios might unfold with this generation of autonomous weapons that go by the moniker: killer robots.

Yes, killer robots. Just another thing to put on the worry list of your daily existence. Let me once again restate: We now have to worry about killer robots…

So besides Musk’s astute SXSW warning (and yet another viewing of Terminator 2), what other reasons do we actually have to think killer robots could be a very real threat?

Laura Nolan, a former top Google (GOOGL) software engineer, resigned last year over ethical concerns after being assigned to work with the U.S. Department of Defense on Project Maven, which she had joined in 2017. Her team’s goal was to build a system with AI algorithms that could speed up the analysis of vast amounts of captured surveillance footage, distinguishing between people and inanimate objects far faster than human analysts can. Nolan warned that killer robots could cause "mass atrocities" and could even potentially start a war.
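
To get a concrete sense of the kind of task Nolan’s team was automating, here is a minimal, purely illustrative sketch of a person-versus-object check on a single video frame. It uses a generic pretrained detector from torchvision; nothing below is Project Maven’s actual code, and the function name, score threshold, and model choice are all assumptions made for the example.

```python
# Purely illustrative: a toy person-vs-object frame check using a generic
# pretrained detector. This is NOT Project Maven code; the function name,
# threshold, and model choice are assumptions for the example.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]  # COCO label names; includes "person"

def frame_contains_person(frame: torch.Tensor, threshold: float = 0.8) -> bool:
    """Return True if the detector finds a person in one video frame.

    `frame` is a float tensor of shape (3, H, W) with values in [0, 1].
    """
    with torch.no_grad():
        detections = model([frame])[0]  # dict with boxes, labels, scores
    for label, score in zip(detections["labels"], detections["scores"]):
        if categories[label] == "person" and score >= threshold:
            return True
    return False

# Stand-in frame (random noise) just to demonstrate the call shape.
print(frame_contains_person(torch.rand(3, 480, 640)))
```

Even this toy version hints at the speed argument: a pretrained detector can screen frames around the clock, far faster than human analysts, which is precisely what made the project attractive to the Pentagon and alarming to Nolan.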

Here’s where it all goes wrong: Nolan told The Guardian that, unlike a weaponized drone remotely controlled by a military team thousands of miles away, autonomous robots have the dangerous potential to take catastrophic actions they were not originally programmed to take.

To put it into the most simplistic movie terms, a weaponized killer robot might work out well only if it was RoboCop apprehending the dad from That ’70s Show.

But a weaponized killer robot doesn’t work out quite as well if it’s the gunslinger from the original Westworld, blowing away vacationing humans.

And most importantly, this isn’t the movies—but reality.

Nolan stated to The Guardian: “What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed.”

"There could be large-scale accidents," she added, "because these things will start to behave in unexpected ways."

Based on her experience, Nolan believes that killer robots not guided by human remote control should be outlawed by the same type of international treaty that bans chemical weapons.

Musk shares Nolan’s sentiments and has called for regulatory oversight. He has stated that AI is a bigger threat than North Korea in terms of obliterating the United States (or the planet for that matter).

Musk’s statement points to the fact that we have learned nothing from history (or at least movie history). Did we not learn anything from the havoc that occurred when Ferris Bueller played Global Thermonuclear War with a Defense Department computer?

The concern over AI killer robots wreaking havoc on our civilization has gotten to the point where an organization has been formed called the Campaign to Stop Killer Robots. (And no, that’s not also the name of a ’90s indie band.) Nolan has joined the group, whose purpose is to ban fully autonomous weapons. The organization’s site clearly states their (our) concerns: “Such weapons would be able to identify, select and attack without further human intervention.”

Human Rights Watch has also called for a ban on killer robots and serves as global coordinator of the Campaign to Stop Killer Robots.

A final scary takeaway? The only way to test these autonomous weapons is by deploying them in real combat zones. And we are most likely not the only country on the planet developing them. Back in March, Russia unveiled an array of AI killer robot drones that it hopes will revolutionize the battlefield.

As Nolan explained, “What we do know is that at the UN, Russia has opposed any treaty, let alone ban, on these weapons, by the way.”

To restate: Let it be said, we’ve been warned.
