A.I. Will Be a Lasting Issue For Hollywood—But It Can Foster Progress If Used Right

Hollywood consistently strives to find fresh and unique voices, which can only come from talented writers, not algorithms.

Tommy Dorfman joins SAG-AFTRA members as they maintain picket lines in front of Netflix on August 24, 2023 in New York City. John Nacion/Getty Images

After more than 100 days of the writers’ strike, the highly controversial subject of A.I. use in Hollywood remains nowhere near settled.

The public release of ChatGPT in November 2022 set off a cascade of unforeseen consequences across the entertainment industry, marking the first time since the dawn of the smartphone era that a tech innovation has been adopted so widely and so quickly.

Generative A.I. is a globally transformational innovation that directly threatens the work of writers. Indeed, large language models, the underlying technology behind ChatGPT and other A.I. applications, have been trained on massive amounts of copyrighted content available online, allowing them to convincingly imitate human-authored work and generate audio, images, videos, and text (including, but certainly not limited to, screenplays, books, pitches, and treatments).


Though the U.S. Copyright Office issued a statement of policy in March clarifying that only human-authored work is copyrightable, relegating A.I. to the role of an “instrument,” current copyright law offers little guidance on “fair use” or the creation of “derivative works.” This gap leaves the door open for the misuse of copyrighted literary materials.

With this in mind, it’s no surprise that regulating A.I. instantly became of the utmost importance in the writers’ strike, as it directly impacts the future existence of writers in Hollywood.

During ongoing negotiations, the Writers Guild of America (WGA) proposed A.I. regulations that align with copyright laws, saying A.I. can’t write or rewrite copyrightable literary material (stories, treatments, screenplays, dialogue, sketches) or be used as source material, nor can written material covered under union agreements be used to train A.I. This statement echoes the ongoing copyright infringement claims against Stability AI, OpenAI, and Meta (META) (among many other companies), where copyrighted works were used to train models without consent, credit, or compensation. 

Yet it is the lack of understanding and the misinformation surrounding A.I. that have prevented the WGA and the Alliance of Motion Picture and Television Producers (AMPTP), the representative body of studios and networks, from agreeing on clear guidelines to limit the use of A.I. While the AMPTP has agreed to guarantee that only screenwriters will be eligible for screen credits, A.I. use must still be regulated to protect labor rights and the work of writers.

A common misconception among technologists and stakeholders in the entertainment industry is that the industry’s future depends on large language models and training data. While this may hold true for other industries, the opposite is true for storytelling.

Training models solely on previous materials restricts the creation of truly original stories, which runs counter to the creative process. Hollywood consistently strives to find fresh and unique voices, which can only come from talented writers or from brainstorming sessions with co-writers or staff writers. This is a deeply collaborative process that always involves numerous draft iterations based on notes from executives or producers. In addition, A.I. cannot mimic human emotions, which serve as the bedrock of storytelling. Only the singularity and complexity of human experience can bring to life original stories that feel new to audiences.

It is crucial to note that using ChatGPT or other large language models to generate a pitch, treatment, or story rewrite, or even to write a full screenplay, falls outside the definition of “original work” protected by copyright law. While the U.S. Copyright Office may revise this definition in the future, the use of A.I. to generate stories will remain an issue for Hollywood. Indeed, the foundational business model of studios, which relies on intellectual property licenses, would be at risk, as A.I.-generated content would compromise IP exclusivity.

A.I. should not be banned from Hollywood entirely, as it has the potential to foster positive change if used solely as a tool to support the work of professionals in the industry. In writer discovery and script coverage, for instance, A.I. can break down entry barriers for diverse or overlooked talent by challenging the industry’s systemic bias toward “credited,” “awarded,” or “vetted” writers. This can be done ethically and responsibly with carefully tuned A.I. policies that not only protect the work of creators by using diverse datasets generated by story experts, but also encourage professionals to exercise critical thinking when reviewing A.I. outputs and to provide feedback for continuous improvement.

A.I.-assisted tools are simply that: tools meant to ethically augment human capabilities and help people in their work, not to replace individuals or large swaths of skilled workers wholesale. After all, the current era of A.I. has shown that even the highest-skilled, least-replaceable workers may one day be in danger of having their abilities copied and replicated outright by artificial intelligence. Putting a stop to that now, rather than when it is too late, is critical not only to the continued success of entertainment companies and writers but to the survival of the workforce as a whole.


Ellie Jamen is the founder and CEO of Wscripted, a platform accelerating the discovery of writers and content for producers, agents, and studios. She is a three-time guest of honor at the Cannes Film Market. 
