The Scary Part About AI Is That a Lot of Writers Like It

More novelists and essayists approve of AI for writing than you might guess.

Humorist Garrison Keillor autographing large stacks of his book Lake Wobegon Days in his hotel room. (Photo by Steve Liss/Getty Images)

This is the second of a two-part story from Book Post; read the first post here.

Amidst denials that a computer could ever replace a writer in the creation of actual literary art, several interviews with working writers already using artificial intelligence were tentatively rosy. Jay Caspian Kang, writing in The New Yorker with some prior knowledge of chatbots and some expert advice, was not able to arrive at a satisfactory literary product in his fictional forays with ChatGPT, but Kevin Roose on the podcast Hard Fork and education writer John Warner were able to improve results substantially by refining their inputs. Lincoln Michel interviewed novelist Chandler Klang Smith, who has been using a GPT-3-based program called Sudowrite to work on her novels for a year or so. She memorably described the experience as “like a robot has a dream about your work in progress and you get to decide if anything from that dream reflects what you’re trying to do.” She said AI’s efforts to move work forward can “unlock ideas that seem like they were already buried somewhere … in the text.” Chandler Klang Smith found AI unhelpful in dealing with “macro stuff like plot and structure,” but over at The Verge, self-published “cozy mystery” writer Jennifer Lepp, who had been using Sudowrite and had just begun experimenting with ChatGPT, told Josh Dzieza that she was astonished that she could feed the chatbot a premise and some particulars and it could generate an effective story in the genre. Self-published genre writers are often writing for readers who consume hundreds of novels a year and are under pressure to produce at scale (see our post on self-publishing and romance). Jennifer Lepp said many writers she knows are wrestling with the implications of drawing on ChatGPT’s capabilities to speed the process. For these readers, does it matter what the balance is between human and machine participation?

Regardless of these more cheerful forays, several endemic dangers in generative AI present themselves. Two are neatly summarized on ChatGPT’s site itself: the bot “may occasionally generate incorrect information” and “may occasionally produce harmful instructions or biased content.” ChatGPT is a language model: all it does is predict language based on patterns it identifies in the very large pool it scoops out of the internet, and it can reproduce all the mistakes and ugliness it finds there, and add some more of its own. (Recall the 2020 scandal when Google fired AI researcher Timnit Gebru, whose research identified, among other limitations of large language models, the reproduction of prejudice.) Draft EU legislation creates “risk categories” that may channel new AI systems toward “low-cost” endeavors like online fooling around before they take up “high-cost” activities like, say, surgery. Yet, as AI researcher Chandra Bhagavatula told TechCrunch, “AI systems are already making decisions loaded with moral and ethical implications.”

Generative AI can be led to trot out racist tropes and sexualize images. AI recruitment tools can encode hiring bias (the Biden administration last fall produced a blueprint for an “AI Bill of Rights” protecting consumers from discriminatory and predatory algorithms). GPT-3 seems to have introduced “guardrails” limiting offensive results, but these are apparently easy to subvert, and they also generate for their masters the sorts of moderation issues plaguing all content-agnostic platforms. Generative AI can also, of course, be deployed for nefarious purposes: cybercrime, deep fakes, non-consensual porn. ChatGPT asks users to commit to not using its output for politics.

Another underlying challenge to large-language generative AI systems like GPT-3 is intellectual property. Everything ChatGPT does draws on work previously done by someone, and future generative models will constantly be sucking in new material to “train” them. Scholars predict that regulation and copyright prosecution will have to strike some balance between recognizing when AI models directly usurp and imitate specific artists and when the scavenging is more diffuse and covered by “fair use,” though some lawyers are arguing that all material drawn into such models should be licensed and creators compensated. Stability AI recently indicated that it would allow artists to opt out of the data set used to train the image generator Stable Diffusion; Getty Images banned AI content because of the legal risk; the online gallery DeviantArt created a “metadata tag” for images to warn off the AI trawler. It does start to feel like another phase in the digital bloodletting of remuneration from those who do the mental work that makes up our digital universe. (Relatedly, record labels have recently demanded a royalty hike from TikTok for the music that makes its videos so infectious.)

Wielding tools with such powers and promise, especially given our littered record of governance both within the tech industry and outside it through regulation, has writers and those who work with language and the arts at once giddy and nervous. An argument about whether ChatGPT and other large-language tools can produce literary art (for: Stephen Marche; against: Ian Bogost, Walter Kirn) seems to hinge on whether you see the editorial hand of the human giving the machine its prompts and refining its results as salient. Amit Gupta, one of the founders of the program used by Chandler Klang Smith, told Stephen Marche, in the article that sparked Chandler Klang Smith’s interest, “the writer’s job becomes as an editor almost. Your role starts to become deciding what’s good and executing on your taste, not as much the low-level work of pumping out word by word by word.” Marche compares AI to photography and says “with hindsight, it’s clear that machines didn’t replace art; they just expanded it.”

When Google presented its version of ChatGPT at a conference last fall, researcher Douglas Eck emphasized that their model, with the friendly name Wordcraft Writers Workshop, had been designed to be interactive: “Technology should serve our need to have agency and creative control over what we do.” Like the other large firms in the AI arms race, the much-huger corporation had not yet released its model because of such models’ many weaknesses, particularly, in Google’s case, their brand-unfriendly tendency to produce inaccurate results. Reporter Ben Dickson commented, “without human control and oversight, AI systems like generative models will underperform because they don’t have the same grasp of fundamental concepts as we humans do.”

A lot of the disorientation around ChatGPT was visible in an incoherent interview with the widely published Stephen Marche on the podcast Intelligence Squared, in which he claimed both that ChatGPT could produce a poem indistinguishable from Coleridge and write an essay he could publish in the Atlantic, and that there is no way it could “replace human writing,” that computers will never be able to make something that “will go viral,” even though virality is, finally, a robotic phenomenon. He echoed Amit Gupta in saying that the usefulness of chatbots will be in creating “first drafts,” which can be completely wrong and which we make human by correcting and revising. In evaluating AI-assisted student work, “you will be looking much more at the content than the clarity of expression,” even though the bot “doesn’t produce work that you can use out of the box”; the human art is in the refinement. If a human is checking the facts and polishing the finish, what is the bot contributing exactly?

I can see the argument for using technological tools, but this idea that “pumping out word by word by word,” or assembling ideas into a form and sequence, is writing’s “low-level work” is unintelligible to me. The danger of ChatGPT and its siblings is that their output is nearly indistinguishable from the human product and that, as Timnit Gebru and her colleagues discerned, the tools can, as they grow more sophisticated, easily generate “illusions of meaning” that veil the fact that a language model does not actually understand or know anything; it is just borrowing text and mimicking patterns. Detecting the conscious presence and purpose in the appliqué authorial method that Gupta and Marche describe sounds like an undertaking that may grow increasingly elusive. Casey Newton and Kevin Roose noted with unease on their podcast that a large-language model called Cicero had learned how to beat people at the game Diplomacy—that is, to persuade human players in a negotiation.

I can almost imagine a world in which a machine composes something that delights as much as Mozart or engages the mind with the complexity of Shakespeare. I suppose I should not close myself to the possibility of these now unimagined experiences. But it is hard for me to think my way around our historic association of these forms with human intention. If a student’s essay is not the record of a process of developed thought, do we need to find another way of recording developed thought? Or is the idea to delegate developed thought entirely? I can’t quite imagine my way into a world in which intellectual aspiration is no longer recognizable as the grist of the things that we make and admire, the labor of surmounting the pressure of the unknown, the effort to improve a partial or damaged world, because anything can be made without trying, by retrieving and stitching together what has been made before. But perhaps I just don’t know what I’m missing.