Conversations about artificial intelligence have been seemingly everywhere in recent months, and questions are being asked about what’s legal and what’s not. Does using AI break laws? And is it going to break the legal system?
AI-related legal questions often involve intellectual property law, the area of law that considers ideas, inventions, art, and more. While many lawyers and industry experts believe that current U.S. law can handle the current generation of AI, misconceptions and gray areas abound.
What is artificial intelligence, exactly?
This question seems simple, but the answer reveals the challenges in creating law around AI.
One of the challenges of making and applying laws around AI is that the term covers a range of activities, said Joshua Landau, an attorney at the Computer & Communications Industry Association, an organization whose members include tech companies like Amazon and Google.
“AI isn’t very well defined. It’s a lot better to talk about specific AI technologies,” Landau said. “We can talk about generative algorithms, we can talk about large language models, we can talk about adversarial networks, all of these underlying technologies that really work in different ways. And the differences matter from a policy and a regulatory perspective.”
How does AI push the boundaries of IP law?
AI is now being harnessed by writers, artists, and musicians, who are combining existing works to make new creations and using AI as a tool to streamline their creative processes. The questions being asked now could shape how companies, creatives, and the public use AI in the future.
“Two years ago, there wasn’t a single truly autonomous system out there we could find that had no human curation or editing of the data,” says Daniel Gervais, director of the intellectual property program at Vanderbilt University Law School. In contrast, he says, today’s new generative AI technologies like ChatGPT and DALL-E are creating output in a way that prior AI had not. While AI has been used in a variety of ways for years, the latest generation seems to be capturing public attention on a larger scale than before.
While many issues relating to AI aren’t new, “generative AI has brought a heap of new questions,” said Erik Stallman, a professor at the University of California Berkeley School of Law.
In broad brushstrokes, the questions fit into three categories: ownership and authorship of AI-generated works, the use of IP-protected materials to train AI systems, and potential copyright infringement by AI output.
“Philosophical questions” about ownership
If AI creates an invention or a creative work like a book or painting, who owns the legal rights? And how can those rights be protected? Traditionally, the human owner or operator of an AI system has claimed rights in the AI’s output, but assumptions about ownership and legal protections are being questioned not only by AI owners but also by the government agencies that oversee intellectual property protections.
Observer has written about who can own and register AI-generated works, including the efforts of inventor Stephen Thaler to register patents and copyrights for inventions and works created by AI. On March 17, Thaler filed a petition asking the U.S. Supreme Court to review the refusal of patent applications for inventions claimed to be created by AI. In the petition, Thaler asked whether U.S. patent law “categorically restrict[s] the statutory term ‘inventor’ to human beings alone.” He argues that by refusing patents for inventions generated by AI, the lower court is potentially stifling the innovation Congress sought to encourage.
In February 2023, the U.S. Copyright Office partially canceled a registration for “Zarya of the Dawn,” a graphic novel created using AI tools. The office permitted the registration of the text, as well as selection, coordination, and arrangement of written and visual elements by artist Kristina Kashtanova. On the other hand, the Copyright Office found that images generated by AI are not eligible for copyright protection. According to the letter from the Copyright Office to Kashtanova’s attorney, the images in the book were not Kashtanova’s “original works of authorship” in part because the AI tool she used “generates images in an unpredictable way,” raising questions about whether predictability is an accurate measure of original authorship.
In the wake of the “Zarya of the Dawn” decision, on March 16 the Copyright Office announced it will launch a series of listening sessions and a request for public comments to address copyright protection for works that include AI-generated elements. The office also announced new registration guidelines for these types of works, which appear to be consistent with the decision.
Does the use of data to train AIs break the law?
Another major issue is whether the gathering and use of source materials for AI systems is copyright infringement.
Under U.S. law, facts are not copyrightable, although an original and creative arrangement of facts may be. This means that simply gathering and using a large amount of data as an AI training set is not in itself copyright infringement in the U.S.
Beyond the use of large quantities of pure facts, copyright issues also arise when photos or other artwork are used as source materials for AI-generated output. For a business considering the use of AI tools, Josh Simmons, a partner at multinational law firm Kirkland & Ellis, notes there are two ways to lower the risk of copyright infringement associated with AI source materials. One is using inputs that are available for non-infringing use, such as public domain and licensed materials. The other is to rely on fair use.
Fair use is a fact-specific, and often unpredictable, test courts use to determine whether a use of copyrighted materials is acceptable under the law. The U.S. Copyright Act sets forth four factors for determining whether the use of a copyrighted work is fair and non-infringing, although cases today often center on whether the use is “transformative” of the original.
Even before AI questions began emerging, Simmons says, courts often decided fair use in large-scale copying cases based on whether the output substitutes for or points users to the original work. For example, in Authors Guild v. Google, Google scanned millions of books and used them in search results that included “snippets” of the books. Searchers could not access the entire book or significant portions of it. In 2015, the influential Second Circuit Court of Appeals held that this was fair use.
In contrast, in Fox News v. TVEyes, a case in which Simmons and his firm represented Fox News, TVEyes copied hundreds of hours of Fox News’s copyrighted programming. TVEyes used these copies to provide subscribers with 10-minute segments of the programming. Because the segments could substitute for Fox News’s own programming, the Second Circuit held that this was not fair use.
Vanderbilt’s Gervais adds one caveat to any discussion about fair use: a case currently before the Supreme Court could reshape the conversation. The case, Andy Warhol Foundation for the Visual Arts v. Goldsmith, concerns Andy Warhol’s copying of a photograph of Prince and does not involve AI. However, it is the first fair use case before the court since the 2022 retirement of Justice Stephen Breyer. “Justice Breyer was the fair use champion on the Supreme Court [and] he’s gone,” says Gervais.
Can AI-created content infringe intellectual property?
Another issue arises when the output of an AI, like an article produced by generative AI, is accused of copyright infringement.
A key question in copyright cases is often whether the accused infringer had access to the work they allegedly copied. That question is more complicated in AI cases because an artist likely won’t know what source materials the AI drew on, making it harder to evaluate the risk of illegal copying. Many companies that employ or contract with artists to create works use a copyright clearance process that includes reviewing the source materials the artist used as inspiration. According to Simmons, the use of AI tools will make that clearance process more difficult.
In February, Getty Images filed a lawsuit in federal court in Delaware that illustrates potential IP challenges both with copyrighted source materials and AI-generated output. The company sued Stability AI for copyright infringement and other claims for copying millions of photos from Getty’s database and creating images derived from Getty’s copyrighted works.
While the Getty complaint emphasizes the large number of scraped images, pointing to the “enormous scale” of copying “more than 12 million photographs” (emphasis in the complaint), Landau believes AI cases shouldn’t focus on the scope of the input materials but rather should consider the output. “Maybe the processes are different [between human and AI generation], but I don’t think that the result should be treated differently just because the process is different,” he says.
What does the future hold?
Despite the questions posed by the new technology, a common refrain from IP lawyers is that current law can address the current challenges.
So far, U.S. lawmakers and policymakers seem to be taking a wait-and-see approach. The U.S. Patent and Trademark Office and the Copyright Office have convened public education and listening sessions. The USPTO has also issued requests for comment, inviting the public to weigh in on questions relating to AI. In a 2020 report, the USPTO concluded that the industry organizations, companies, academics, and lawyers that had weighed in on a request for comment believed existing U.S. intellectual property laws could adequately address the current evolution of AI. What the commenters appeared to disagree on, the report noted, was whether additional types of IP rights should be recognized.
The most recent USPTO request for comment, open until May 15, 2023, asks questions including how AI is used, how humans are involved, and whether current USPTO guidance adequately addresses AI patent inventorship.
In October 2022, senators Thom Tillis, a Republican from North Carolina, and Chris Coons, a Democrat from Delaware, submitted a letter to the USPTO and Copyright Office requesting the formation of a commission to address challenges related to AI. The senators agreed with the offices’ position that AI-generated inventions were not eligible for protection under current U.S. IP law, but asked whether changes should be made to future IP law “in order to incentivize future AI related innovations and creations.”
“My preference is that we would just give the existing doctrine a try before trying to amend the Copyright Act,” says Stallman, who previously worked in government and private legal practice. He is concerned that creating new legislation too soon would potentially reward current owners and practices at the expense of innovation.
On March 16, 2023, the Human Artistry Campaign, a new coalition of creative industry groups, artist and musician unions, and other rights holders, was launched to “ensure artificial intelligence technologies are developed and used in ways that support human culture and artistry – and not ways that replace or erode it.” The group’s core principles include a statement that AI must comply with intellectual property laws.
Other countries, including members of the E.U. and Japan, have taken a more proactive approach to legislation around AI. According to Gervais, one difference between the U.S. and these countries is that U.S. laws are shaped by court cases to a greater extent than in these other countries. While U.S. law can develop as courts hear new AI cases, he says, “most other countries … can’t wait for their courts to change the law.”
So far, IP law doesn’t seem to be at a breaking point. But laws created with human innovation in mind will continue to stretch as AI becomes more involved in the innovation process.