Bad Actors, AI & the Historical Context of Disinformation Campaigns

The worst-case scenario, in the spread of disinformation, is, well, complete global catastrophe… that’s all. Pixabay/Pete Linforth

Disinformation campaigns didn’t start with social media. Before there was Facebook, propaganda was spread via everything from radio and the telegraph to street leaflets. To put it into historical context, disinformation campaigns even played a role in the Roman-Persian wars.

“Funny enough, modern disinformation has its origin in the KGB’s black propaganda department,” said Sean O’Brien, CEO of @Risk Technologies. “The Soviets ran disinformation campaigns to influence the opinion that the U.S. invented AIDS and that the U.S. supported apartheid.”

O’Brien knows his disinformation; he’s a former Department of Defense (DOD) and intelligence officer who has spent his career engaged in warfare. Now, he’s CEO of a cybersecurity start-up fighting the cyber war against bad actors engaged in worldwide disinformation campaigns.

The goal of disinformation is sinisterly simple. “It is the ability to increase trust in one party and erode trust in another,” O’Brien told the Observer. “Diminish the public’s belief in an idea, an institution or principle.”

To further break it down: disinformation is false content spread with the specific intent to deceive, mislead or manipulate a target or opponent. In our social media age, it’s a weapon that baits fear mongering, boosts authoritarian regimes and, of course, sits center stage in U.S. election meddling.

Just last week, Facebook said that it found and took down four state-backed disinformation campaigns. This is merely a small taste of how online foreign interference is increasing ahead of the 2020 presidential election. These state-backed actors disguised themselves on Facebook as average users. Three of the disinformation campaigns originated in Iran, while the other originated from, you guessed it, Russia.

“Someone executing a disinformation effort usually does not care to side with any particular team; it is about achieving a well-planned goal,” clarified O’Brien. “They will target whichever side is the easiest to exploit.”

As O’Brien sees it, today, foreign powers are once again invading the privacy of our homes and businesses to take what they want through cyber attacks.

“The biggest threat we have is a lack of awareness and taking proactive measures to stop it,” said O’Brien. “It ranges from small businesses going bankrupt from ransomware to politically motivated extremist ideas—all of these coming together in a perfect storm, spurring a disenfranchised public to riot, reject civility and, in some cases, go on a shooting spree.”

Typically, disinformation campaigns are aimed at political, social or business ecosystems.

“The most effective campaigns are useful because the progenitor of the campaign has gained a good understanding of the ecosystem and has done their homework,” explained O’Brien. “Let’s use Russian interference with voting as an example. Russian operatives want to erode American trust in the outcome of an election.”

This particular disinformation campaign’s insidious goal was to slow down voting machines, which in turn attacks the entire institution.

“If the amount of time it takes to vote is lengthened, the lines waiting to vote are lengthened too,” he continued. “After voters complain they couldn’t vote, either side will argue foul play.”

Attackers employ both art and science to spread disinformation campaigns based on situational awareness.

“Timeliness, tone and context are critical accelerants to an effective campaign,” O’Brien stated. “Authors of disinformation campaigns must understand the ‘rhythm’ [of] when to say something or when not to.”

For example, we know how false it rings when a telemarketer—who is obviously calling from overseas—provides a phony-sounding American name and tries to make poorly executed local references. (“Hi, this is Scooter. How about that local sports team?”) The same is true with a poorly written phishing email; it’s easily spotted as a fraud. (I’ve had this happen with a faux Craigslist apartment sublet ad.)

On Facebook or Twitter, attempts to build pretext and gain trust often fail because the perpetrator uses a faulty history or an implausible scenario. A poorly written social media post can ruin an attempt at baiting someone into a comment or position, thwarting the campaign. Those who attempt to deceive need to do their deceptive homework first. Without in-depth cultural understanding, attackers will struggle with how to phrase things properly or put them into context.

But to know your enemy is to know yourself.

“Our team has learned that planning and scenario development give attackers the edge in effectively executing a disinformation campaign,” said O’Brien. “Proper planning provides for branches and sequels in knowing timeliness, tone and context.”

O’Brien has leveraged his background as a Ranger-qualified Army officer, along with the knowledge and expertise he acquired at IBM, to gain the experience necessary to combat disinformation campaigns.

“Many of the team we have today were with me then, and they were responsible for running big data analytics for U.S. combatant commands,” he said. “Most military personnel, whether they are in the infantry or transportation, learn about disinformation campaigns in their basic and advanced training as part of counter-insurgency (COIN) training.”

So how can we play defense against bad actors?

To date, the world has spent close to $2 trillion on cybersecurity, and yet a teenager from Moldova can, say, put a 20-year-old urology practice out of business with a piece of ransomware.

O’Brien’s @Risk team has domain expertise in disinformation because it breaks campaigns down and leverages external ecosystems as data sources.

“We use this data to build an analytic zone to forecast how it will influence our customers,” he explained. “If you can’t measure the efficacy of something, you can’t possibly justify a course of action for solving a problem you don’t know exists.”

In constructing the proper disinformation campaign defense, O’Brien employs military science—the study of organizations, systems, processes and behavior, along with the study of past warfare.

“From this, the best military minds have built a theory on how best to apply the right amount of force to win,” he said.

People who’ve worked in the military have said that combat is like a symphony, and O’Brien explained how applying military science breaks down. “An effective combat leader orchestrates and integrates different capabilities to come together to apply the most lethal force at the right place, time and conditions to win,” he said. “There are a lot of proven methods and practices of producing defense in depth (DiD) capabilities using economic, social, operational, technological and tactical [measures] to deliver victory.”

A simple concept for conducting this defense? “Obtaining and maintaining situational awareness,” O’Brien continued. “A defender must know what they are defending and how it is arrayed on their terrain.”

In DiD (a layered defensive cybersecurity approach used to protect data), that means having an accurate asset management inventory of both hardware and software at the endpoint and understanding the network segments and subnet routes between them.

“This pure military science equates to a rudimentary advantage for building a cyber defense in depth,” O’Brien said. “You have to know what you have available to help you win in your message. You also have to know their vulnerabilities and safeguard them. People, process, technology and data assets are just as applicable in politics as in cyber warfare.”

Sure, that defense approach might work against a 16-year-old in Moldova, but what if the bad actor comes in the form of artificial intelligence (AI)? As Elon Musk said, “AI is far more dangerous than nukes.” Further stoking this fear, replicating human thought via AI is now becoming a reality. A Google search yields published concepts describing how micro agents could mimic human thought.

“These agents will analyze the design and implement intelligent control of distributed data processing. They will also collaborate with a network of agents doing the same thing,” said O’Brien.

To counter fears of AI bad actors, O’Brien says we need to “understand what AI is, and what it isn’t, and what it can do today.”

AI, in its simplest form, uses hardware and software to mimic human behavior. To combat AI perpetuating a disinformation campaign, “a question we should constantly be asking is fundamental: What human behavior, or which human, do we want to mimic?” said O’Brien. “AI isn’t efficient if it lacks data, and if the data is limited or of poor quality, the results are less than optimal. Weak data leads to poor analysis, and that cripples the ability to learn new scenarios in different contexts.”

“Because correlation does not build causation, useful data is required to corroborate or build weights of evidence,” he continued. “It is developing and defining weights of evidence with human supervision that will, in the interim, ensure a neural network is training appropriately.”

Unfortunately, we now live in a world where it’s not just AI, teen hackers and state-backed bad actors spreading disinformation campaigns; the term “fake news” is common in our lexicon. How harmful is it when those in the highest positions of power (I’m talking to you, President Trump) deem the mainstream media spreaders of disinformation, a claim that, in turn, spreads disinformation itself?

“What is most harmful is when we, as a society, cease to put value in the freedom of the press and the exchange of ideas,” said O’Brien. “Anytime anyone exerts any effort to limit that freedom, it is a problem. It is a civic responsibility to identify when it is opinion versus fact. Without doing so, the escalation of rhetoric is a breeding ground of disinformation and empowers enemies of free speech to limit our pursuit of liberty and happiness as a society. Any and all attempts to limit free speech or twist it must be quickly identified, exposed and quarantined.”

As a defense solution, O’Brien says that “we need to have trusted checkers. Using specific kinds of AI, we can check the checkers.”

O’Brien’s worst-case scenario, in the spread of disinformation, is, well, complete global catastrophe… that’s all.

“Unfortunately, disinformation will continue to contribute to the loss of life in land, sea and air warfare,” he said, grimly adding: “In the fifth domain, we are due for a cyber 9/11. This size of an attack could have a devastating impact on the global economy. Bad actors will inject post-attack narratives with disinformation, which will ignite a loss in confidence in the government. Social and religious institutions could also be impacted and threaten the very fabric of human society.”

Instead of pulling the blankets over our heads in fear, there are ways we can protect each other—and it goes back to the ethos of the original Minutemen.

“As Big Data monitors threats in real-time, the machine learning gained is synthesized using algorithms that help forecast a cyber attack,” O’Brien concluded. “This ‘one if by land, two if by sea’ early warning system is made up of bold thinkers who join our common cause as digital Minutemen.”
