This story is syndicated from the Substack newsletter Big Technology.
Now that AI programs speak with us in natural language, turn our thoughts into illustrations, and emulate our voices, a major conflict over their ethics is on the way.
And if you thought the content moderation fight was intense, just wait for this one.
At stake is how chatbots address political issues, how AI illustrators portray the world, and whether some applications, like voice emulators, should exist at all. Given the scale and power of this blossoming technology, the activists won’t be subtle. They’ve had practice fighting over human speech online, and they’ll bring that experience to this war. It could get messy quickly.
“Everyone’s got their knives sharpened,” said Sam Lessin, a venture capitalist and former Facebook executive. “At least with speech, everyone was a little bit off-kilter and didn’t really get it. This one, they’re like, ‘Oh shit, I’ve seen this game before.’ Every single lobby in the world is ready to write their letters and start their influence campaigns.”
AI’s intelligence may be artificial, but humans encode its values. OpenAI, for instance, effectively decides whether ChatGPT takes stances on the death penalty (no opinion), torture (it’s opposed), and whether a man can get pregnant (it says no). With its AI illustrator DALL-E, the organization influences what type of person the tech portrays when it draws a CEO. In each case, humans behind the scenes make the decisions. And humans are influenceable.
Like content moderation, there will be some obvious, consensus ethical decisions for generative AI (you don’t want chatbots advocating for genocide, for instance) but advocates will stake their ground in the grey. “It’s a very powerful tool, and people are going to want to do a broad range of things with it to meet their own interests,” said Lessin. “If you look at how the free speech stuff played out, it will play out the same way again, just faster.”
The potential conflict areas include how AI addresses race, gender, warfare, and other thorny issues. ChatGPT, in one recent conversation, listed several benefits of Ukraine winning the war against Russia. But asked to list positive outcomes from Russia winning the war, it refused. ChatGPT also moralizes a lot. “War is a grave and devastating event, and should always be avoided if possible,” the bot said, in one typical interaction. “The best outcome is always peace and diplomatic resolution.”
Ethical decisions for generative AI are particularly high stakes because they scale. Encode values into a chatbot, and it can push those values repeatedly, across every conversation it holds. A content moderation decision, by contrast, typically involves just one individual and one piece of content.
The best way to handle this new power is to have the bots play it as evenhandedly as possible, said Dr. Jeffrey Howard, a professor of political science at London’s UCL. “These value judgments are inescapable,” he said. “One of the value judgments could be to build in a certain kind of neutrality and impartiality.”
Ultimately, generative AI’s decentralization may defuse some of the tension. While online speech today runs through a handful of centralized platforms, many developers are working on generative AI. As they build apps encoding their own morals, an all-out war over the central powers’ policies may fade. In the meantime, though, expect plenty of positioning, cajoling, and fighting over the ethics the big players build into their models.