COVID-19 May Have Rewired People’s Relationships With Artificial Intelligence Forever



Putting aside lovelorn Joaquin Phoenix and the all-knowing voice of Scarlett Johansson in Her, it’s generally rare that people develop romantic feelings for their virtual assistants.


If anything, research has shown that we’re often willing to violate many of the social norms of politeness, kindness and fairness when interacting with machines; after all, voice or no voice, we do not treat computers the same way we treat humans. Yet a study published in the journal iScience last week shows that, during the pandemic, human fondness for and faith in human-like machines have grown, and people deeply affected by COVID-19 are showing more goodwill toward the likes of Alexa and Cortana.

This not only suggests that the pandemic has broken down some of the barriers to collaboration between humans and machines; it also paves the way for improving how we design those interactions, and the machines themselves, in the future.

“People are able to selectively choose who they are willing to cooperate with. So we’ve been asking, what kind of psychological mechanisms apply when collaborating with machines?” Celso de Melo, a computer scientist at the US Army Research Lab and the lead researcher on this paper, tells Observer. “We noticed that people were tending to cooperate more with machines than before.”

You’ve probably gotten frustrated with a GPS that proved to be no brilliant J.A.R.V.I.S., or kicked a vending machine for not serving up your soda or snack. Those reactions are largely normal expressions of human emotion directed at technologies that have no capacity to reciprocate them.

For example, a 2021 study explains that when a person plays rock-paper-scissors with another human, the medial prefrontal cortex, the part of their brain devoted to reflecting and inferring the other’s beliefs, is highly activated; when they play the game with a computer or machine, the activity level is greatly reduced. Humans have a hard time seeing machines as their equals and nurturing human-like relationships with them, because it’s just not wired in our brains to do so.

However, de Melo’s study, conducted with the University of Southern California, George Mason University and the US Department of Defense, shows these diffident relationships might be changing.

In this experiment, 186 participants from over 40 US states (recruited on Amazon Mechanical Turk) were ranked on how much they had been affected by COVID-19, using a standardized clinical post-traumatic stress disorder scale. They then played several rounds of the Dictator Game, a classic social-psychology experiment: given 12 tickets that could win them a $30 lottery, they had to decide how many to give away to either a human or a machine.

Participants had no incentive to share any of the tickets, since they got nothing in return. Yet prior research suggests people are typically willing to give away 10 to 25 percent of their initial lot, sometimes as much as 50 percent, even when there’s no reason to send the money.

Here, the participants most affected by the pandemic were the most generous, and they were generous to humans and machines alike, displaying equal levels of altruism toward both. This builds on other studies showing that traumatic group experiences can make people more compassionate and altruistic.

“Those who weren’t impacted by COVID were making, as we had found before, more favorable decisions to humans than to machines. And this sort of reflects one of the problems with adopting AI. People are still not treating AI in the same way, even though they might treat machines in a social manner to a certain extent, in the sense that they’re not sending zero money during the Dictator Game, it’s still below what they would do with a human in the same situation,” de Melo tells Observer. “But the higher the impact of COVID in that PTSD scale, the higher the offers were towards humans and machines—they were not distinguishing any more between humans and machines.”

The researchers further examined their data and found that two main mechanisms underlie this switch in altruism toward machines: an increase in heuristic thinking and an increase in faith in technology. On one hand, situations of deep distress tend to fuel more intuitive thinking, mental shortcuts, speedy decision-making, also known as “heuristic thinking.”

According to these results, the pandemic may have increased levels of heuristic thinking among the experiment participants most affected, which led people to treat human-like machines more like humans. On the other hand, during the pandemic people grew more dependent on machines to live their lives, from attending school and working from home to buying groceries and communicating with others—and this may have encouraged goodwill toward machines in other ways.

“Now we rely on technology to be able to do things. So this appreciation for technology might be changing the way we think about machines and make decisions with machines,” de Melo suggests. “This is possibly a longer term consequence and the way we think about machines may have changed long-term.”

The study was conducted in May 2020, so the research doesn’t take into consideration phenomena like Zoom fatigue or pandemic fatigue.

Still, “this is a well-designed study because it uses an already validated approach to ask a new question,” Maja Matarić, director of the Interaction Lab at the University of Southern California, who was not involved with the research, tells Observer. “But our society does not yet have pervasive uses of intelligent machines, so the way technology has affected the population in the COVID-19 context has mostly had to do with utilitarian interactions.”

Matarić has been developing socially assistive robotics for the elderly and for children with autism. She thinks that if those personalized supportive machines had been available in time for COVID-19, they could perhaps have made isolated elderly people feel less lonely, helped children achieve better learning outcomes in remote at-home settings, and helped everyone feel less anxious about their lives.

According to Matarić, the results can be an interesting jumping-off point for further research that seeks to understand these dynamics in situations with more realistic stakes. To extend the work, scientists would need to test people who have gone through other kinds of difficult experiences beyond living through COVID-19.

“Some machines bring out our better selves, and some bring out the worst. Some interactions elicit empathy and altruism, while others elicit bullying behaviors and aggression,” says Matarić. “The machines we build are reflections of ourselves and of our values; they highlight who we are, and they can help us to become who we want to be: more compassionate, altruistic, empathetic, and collaborative.”

In fact, there are several ways this could go wrong, à la I, Robot: armed with this data, nefarious programmers could design emotionally responsive AI machines for any number of manipulative or criminal purposes.

“Once we see machines as partners rather than tools, the millennia of our social evolution come into play,” Subbarao Kambhampati, a computer science professor at Arizona State University, who also was not involved in the research, tells Observer. “But while social intelligence on the part of the machines is quite critical for effective human-machine collaboration, it also poses several ethical dilemmas, in as much as it makes humans vulnerable.”

Kambhampati has also researched whether AI bots can lie, and whether people are willing to be told white lies by their disembodied machine assistants, even if only to nudge them toward better behavior.

That will increasingly be the case. Given the direction work on technology and AI is taking, de Melo argues, it’s important that we understand how to promote collaboration between humans and machines. “Simulating this kind of social ability in machines, this kind of social intelligence, not just the functional aspects of the tasks, is important because the success of this technology, this autonomous AI technology, is really dependent on people collaborating and adopting it,” de Melo tells Observer.

“If they don’t do that, we will never benefit from what AI is able to provide us. My expectation is that this attitude is gradually changing.”
