Cybersecurity Expert Eases Elon Musk’s Fears of AI—Or Does He?

Rest assured, Elon—we don’t need to fear AI killer robots, at least for the time being. Mark Brake/Getty Images

At this juncture, some experts believe artificial intelligence (AI) is the panacea for society’s woes. Meanwhile, we all know how fearful Tesla CEO Elon Musk is of AI. Heed his words. To paraphrase, Musk has said that artificial intelligence could result in the end of humanity as we know it. Heavy stuff.

And that would be pretty bad. Let me illustrate the worst-case scenario behind Musk’s point with a scene from Terminator 2.

Like I just said, pretty bad. And this type of destruction could take place, all because of AI. So my suggestion? Take the battery out of your smartphone and smash it with a rock right now; we must stop AI…before it stops us.

SEE ALSO: Elon Musk Says Putting AI Chip in Your Brain Will Be as Simple as Lasik

But there are experts who feel otherwise.

James Litton, CEO and co-founder of identity and access management company Identity Automation, has a more optimistic view. In fact, he feels “the future of AI is bright.”

OK, that’s already a lot more optimistic than Musk. This cybersecurity expert’s gut feeling is that there are still many opportunities involving AI, and that we will simply be amazed by how we can leverage these capabilities in the future.

Particularly, we might be surprised by just how much progress is expected to be made over the next five years.

“It’s possible AI can become dangerous in the future, but I feel that’s a long way off because we are in such early stages,” Litton explained to Observer. (OK, not completely reassuring.)

“AI, today, is very nascent in that AI algorithms are only being leveraged for pinpointed functionality,” he said. Litton pointed out positive AI benefits, such as its use in health care to help identify abnormal X-ray results. This, in turn, can predict specific diseases, such as tuberculosis. (We’re all in agreement that’s a good thing—right?)

Yet, despite these benefits, AI still needs a human hand involved. “While AI helps radiologists prioritize cases, they still need to ultimately interpret the results and confirm the diagnosis,” Litton said. “While it is possible to train AI to do specific tasks, it still needs coaching to make sure the data is interpreted correctly.”

So the idea that AI can be broadly capable, performing the same range of tasks a human can, is not currently a reality.

“We haven’t found a way to overcome biases that we introduce, which can steer algorithms off track,” he added.

Thus, Litton feels that it makes more sense to use AI as a helper, instead of relying on it to make decisions for us.

And why is that?

“We haven’t figured out how to develop algorithms that are ‘smart enough’ to learn on their own,” he said. Even when an algorithm learns, it only learns because it has been programmed to do so. And who’s doing that programming? Humans.

To break this down: Because algorithms aren’t currently intelligent enough to recognize and correct for a programmer’s biases, the results must constantly be reviewed and the algorithms tweaked to correct for errors.

Stupid present-day AI.

Again, Litton says that the technology is just in its infancy (baby AI, if you will), and over time, the need for a human hand will fade, for better or (Musk, are you listening?) worse.

“The idea that AI can reach a mature stage where it can truly ‘think on its own’ is probable,” Litton said. “As we head to a future where we have more and more powerful computing capability, the possibilities are endless, though I don’t have the same cynical view as others about AI taking over the world.”

Litton believes that AI will always be somewhat constrained by the hardware we run it on. Thus, AI will always have some level of containment. Still, we have to consider that AI will improve substantially within the next decade or two as the algorithms are refined.

And there’s another limitation to AI’s capabilities. “There’s an inherent distrust of what is produced by the algorithms,” Litton said. In addition, there’s concern around the reliability of those AI predictions. “This leads companies to say they are leveraging AI in an assistive way, but not necessarily for actual decision making.”

In AI’s current state, it’s ultimately up to the experts to decipher results and take action.

A simple example of this: we use predictive text on our smartphones, but it’s up to us to notice when our phone types “butthole” instead of the “buckle” we meant. Still, these are the areas where we are likely to see the greatest advances in the shortest amount of time.

“There’s an inability for AI to do what the human mind can do right now. Humans have an incredible ability to consider many factors at once,” Litton explained. “In order to get to a place where AI can do that, all of that has to be programmed into the system, and we simply don’t know how to do that yet.”

Intuition is another aspect to consider; humans can glean insights from data using intuition. And intuition is not something that can be taught or programmed… yet.

“In the access management world, it is realistic to leverage the technology for access request certification,” Litton said, as an example. “If we need to review all the access within a particular organization, we can leverage AI to perform grading on that access to provide guidance to the approvers. However, if the algorithm were used to make the actual decisions, the risk would be considerably higher.”
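To make the “grading” idea concrete, here is a minimal sketch of how access certification might be scored, flagging unusual entitlements for a human approver rather than revoking them automatically. All names and the peer-rarity scoring rule are hypothetical illustrations, not Identity Automation’s actual method:

```python
# Hypothetical sketch: grade access entitlements by how rare they are
# among peers in the same role, so human approvers can prioritize the
# unusual grants. Illustrates "AI as helper," not an automated decider.
from collections import Counter

def grade_access(user_entitlements, role):
    """Return (user, entitlement, score) tuples; lower score = rarer = riskier."""
    peers = [u for u, (r, _) in user_entitlements.items() if r == role]
    counts = Counter(e for u in peers for e in user_entitlements[u][1])
    results = []
    for user in peers:
        for ent in user_entitlements[user][1]:
            # Fraction of peers in the same role who hold this entitlement.
            score = counts[ent] / len(peers)
            results.append((user, ent, score))
    return sorted(results, key=lambda t: t[2])  # rarest grants first, for review

users = {
    "alice": ("engineer", ["repo_read", "repo_write"]),
    "bob":   ("engineer", ["repo_read", "repo_write", "prod_db_admin"]),
    "carol": ("engineer", ["repo_read", "repo_write"]),
}
flagged = grade_access(users, "engineer")
print(flagged[0])  # bob's prod_db_admin grant, held by only 1/3 of peers
```

Note that the output is an ordered review queue, not a revocation: the final call stays with the approver, exactly the assistive posture Litton describes.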

Meanwhile, we’re seeing different levels of AI usage in multiple industries, from banking and the legal profession to the potential rise of killer robots in the military. Is the usage similar across the board?

“It’s similar in that AI can analyze large amounts of data and automate repetitive tasks,” said Litton. “For example, AI can analyze logs to look at a post incident situation or be leveraged in real-time to analyze events as they occur. Academia is using AI to process copious amounts of information in order to figure out a variety of interesting problems, such as what’s likely to happen based on historical information or connecting historical dots to answer questions about the past.”

And of course: “AI is also being used in military and government agencies to review copious amounts of call data and look for information regarding terrorist or criminal activity.”

In that sense, these different industries are all using algorithms to comb through massive amounts of data that would be nearly impossible for a human being to analyze in any reasonable amount of time.
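The log-analysis use case Litton mentions can be sketched in a few lines: score each host’s event rate against the group baseline and surface outliers for a human analyst. The host names, counts, and simple z-score rule below are hypothetical, chosen only to illustrate triage at a toy scale:

```python
# Hypothetical sketch of AI-assisted log triage: flag hosts whose event
# rate deviates sharply from the group baseline, surfacing them for a
# human analyst rather than acting on them automatically.
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=2.0):
    """Return hosts whose count sits more than `threshold` std devs above the mean."""
    counts = list(event_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    return [h for h, c in event_counts.items() if sigma and (c - mu) / sigma > threshold]

# Failed logins per hour, per host (made-up numbers).
logs = {
    "web-1": 100, "web-2": 101, "web-3": 99, "web-4": 102,
    "web-5": 98, "web-6": 103, "web-7": 97, "db-1": 997,
}
print(flag_anomalies(logs))  # ['db-1'] stands out for analyst review
```

As with the access example, the output is a shortlist for human review, matching the assistive (not decision-making) role Litton describes.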

“There’s value in AI as it exists today, as long as we are careful in how we use the information,” Litton concluded. “We can use AI as a data point in making decisions, but I don’t feel it’s advanced enough to rely on the data to actually make decisions.”

The final takeaway on Musk’s fears and a potential AI uprising?

“While the use cases discussed all have merit and help these industries produce higher quality work,” Litton stated, “we’re still not anywhere near a place where the AI itself can produce meaningful output that can be fully relied upon without human review.”

Rest assured, Elon: we don’t need to fear AI killer robots, at least until the technology advances to the point where AI can truly think on its own.
