ChatGPT Sparked Transatlantic Regulatory Threats for All Artificial Intelligence

All E.U. countries have the legal basis to follow Italy's lead and block ChatGPT, but the threat to AI may not stop there.

ChatGPT was temporarily banned in Italy. SOPA Images/LightRocket via Getty Images

Threats of bans and legislation are looming over ChatGPT, and it’s just the beginning of the regulatory challenges artificial intelligence (AI) could face around the world.


Italy became the first Western nation to temporarily block ChatGPT on March 31. In addition to concerns about the lack of an age verification mechanism, the country’s data protection authority said there is no legal basis for ChatGPT to collect massive amounts of personal data to train its algorithm. But the threat to AI research goes further than ChatGPT and Italy.

“Italy’s move is an indicator of the potential of legal liability killing AI,” said Eric Goldman, a law professor at Santa Clara University who specializes in technology and privacy.

The entire E.U. bloc operates under the same data protection law, the General Data Protection Regulation (GDPR), meaning any other European country could take similar action against the chatbot, which OpenAI created and Microsoft invested in, said Gabriela Zanfir-Fortuna, a global privacy executive at the nonprofit Future of Privacy Forum.

Germany’s data protection commissioner is considering a similar ban, the commissioner reportedly told a German newspaper. French and Irish regulators are reportedly in talks with Italy, and Canada’s privacy commissioner opened an investigation into OpenAI yesterday (April 4). In the U.S., President Joe Biden and the Federal Trade Commission (FTC) have raised concerns about AI, and an AI policy think tank urged the FTC to investigate OpenAI last month.

The issue will only become bigger, said Zanfir-Fortuna.

Regulation might be too slow in addressing OpenAI’s privacy issues

ChatGPT has swept the internet in recent months due to its ability to engage in human-like conversations and provide information at a level that previous chatbots haven’t reached. The main concern from regulators is that OpenAI isn’t transparent about where the data that is training the algorithm comes from, said Merve Hickok, senior research director of the Center for AI and Digital Policy who co-authored the complaint to the FTC.

On March 20, a bug in ChatGPT resulted in some users being able to view partial conversations and information from other users, including email addresses and the last four digits of credit card numbers. Many tech leaders—including OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak—have called on AI companies to pause training new large language models, like the models that power ChatGPT, until they develop shared safety protocols.

Some regulators are looking to pass laws to address these concerns, but that takes time and the AI race is moving fast, Hickok said. “You can’t wait and hope there is going to be regulation.” Instead, AI ethics experts like Hickok are turning to where enforcement mechanisms already exist, such as the FTC’s authority over deceptive commercial practices.

“It’s not hard for companies to be more transparent,” she said. OpenAI could provide more details on its datasets and safety precautions, as well as conduct independent audits. OpenAI CEO Sam Altman said in February that he is open to independent audits.

Could ChatGPT be banned in the U.S.?

In the U.S., several states have introduced AI-related bills, but they must comply with the First Amendment’s free speech protections, said Goldman. A total ban on the technology is highly unlikely because it would run afoul of this right. “There are thousands of use cases for a service like ChatGPT that aren’t privacy-invasive and are completely protected by the First Amendment,” he said.

While the federal government can pass legislation to regulate the use of large language models, so can individual states. As a result, OpenAI and companies like Google (GOOGL) that are developing similar products could be required to follow dozens of contradictory laws in different territories.

“We are at the same kind of juncture with AI that we were at with the internet circa 1995,” he said. Decades ago, there was considerable legal risk involved in running an internet business, so Congress passed Section 230 of the Communications Decency Act to give internet companies some legal protection, he said. Now highly contested, Section 230 provides that websites aren’t liable for the third-party content they host, and it is currently under review at the Supreme Court. But because Congress passed the law, Web 2.0 boomed, he said.

Rather than pushing regulation on AI companies, Goldman said legislators should give them the legal protection to grow. The legal threats to large language models are overwhelming, he said. “If the legislature doesn’t provide legal protection for those services, we ultimately won’t be able to enjoy the technology long-term.”
