Whether we realize it or not, a great many writers have been using A.I. daily for decades. Spell check, after all, was born in 1971 at Stanford University’s Artificial Intelligence Laboratory. It became a common tool in Word sometime in the early aughts, and I remember having a sense then that using it was somehow cheating. I wasn’t alone in feeling this way, but at some point, it started feeling lazy not to use spell check. Besides, the artistry isn’t in the ability to spell words correctly but in the ability to synthesize experiences into words people can relate to.
In the wake of powerful new expressions of A.I., like OpenAI’s ChatGPT and a slew of other generative models, the line between the artist and the tool is starting to blur. The implications are massive, both in terms of how people leverage creativity to make a living and in how we decide to define art.
When it comes to our notion of work, the new maxim on the streets bears sober consideration: A.I. isn’t coming for your job; someone using A.I. is coming for your job. A graphic designer leveraging tools like text-to-image, sketch-to-image, generative fill and text effects from Firefly, Adobe (ADBE)’s beta collection of generative models, is likely to outpace the designer who isn’t using them.
It’s rather exciting in the near term, but it raises questions about what could happen when these tools outpace the graphic designer using them. Is that a bad thing? The answer probably depends on your view of capitalism or how intertwined your work is with your identity. I see the glimmer of a possibility that by automating the tedium that fills so much of our lives, A.I. might help us break free from the productivity mindset that requires using quantity as a metric of accomplishment instead of quality.
In this moment, however, the sheer power of these nascent tools prompts another question: Will models this powerful and ubiquitous initiate the ruinous decimation of human creation? An artistic Armageddon?
I turned to Scott Bourne for an answer. An author, publisher, former pro skateboarder and American expat living in Paris, Scott was a fellow contributor to SLAP skateboarding magazine back in the early days of spell check, but his writing always carried the extra weight of having been punched out on a typewriter. There was no autocorrect in Scott’s world, and to this day he carries no pocket computer, sending much of his correspondence out the old-fashioned way, in envelopes, sometimes sealed with discs of melted wax.
When I asked Scott if he’d be interested in collaborating with Bournebot, a personal large language model (LLM) trained on all of his output—decades’ worth of poems, stories, interviews and emails—the answer came quickly: “Not in the slightest.”
“What most interests me in this time of technology is touch: human touch and staying human,” he said. “I think it is understood that time is what makes things great. The longer it takes to create something, the better it is, whether it be wine, whisky or a work of art. Haste makes waste, and I think we are wasting a lot of time on speed. Write, rewrite, write again and again and again. Then revise and write it once more.”
What interested me about his reply was that it echoed an approach to A.I. that I’ve been exploring for the past few years. If technology is going to become more powerful and intertwined with our lives, I’d prefer an outcome that puts a premium on humans’ creativity and problem-solving skills—one where our eyeballs aren’t tethered to glowing screens. Maybe if we’re having conversations with technology instead of staring at it, A.I. can become an invisible ally. Maybe machines can give us the gift of time. Maybe we’ll spend that time having meaningful interactions with other people. Maybe we’ll use it in artistic endeavors: to write and rewrite. Or will it be so easy to point A.I. to the task of writing out our ideas that we won’t bother to take the time?
“No A.I. is ever going to have an experience, so it could never write my poems,” Bourne said. “No A.I. is ever going to feel love or pain or any of the emotions that have created Scott Bourne. I am not just blood and guts; there’s a hard road in there, some hitchhike and fist fight, a little jail time and freight ride. A.I. could only imitate, and all my life I have been avoiding imitations. I think it’s kind of ironic that people now prefer an imitation to the real thing.”
His argument made me think of Oscar Wilde’s assertion that “life imitates art far more than art imitates life.” As I’ve come to understand it, he means that what we experience as life is just what art has taught us exists.

I also spoke with New York Times bestselling author and business leadership coach Charlene Li, who loved the idea of having her own LLM (and was actually in the exploratory stages of crafting one).
“It’s not just another me, it’s a better me,” she said of this potential ally. “Seventy percent of what I do, I can now do with GPT. I can do more of the 30 percent that’s unique to me and keep working with the technology to keep adding on to that, and I can do other things—higher-order, more value-added things.”
So, there’s no one answer, just fronds of perception dangling overhead. The task at hand is to wrap your head around generative A.I. and then decide how you want to incorporate it into—or attempt to eliminate it from—your life.
Generative A.I. has kicked over the barrier of technical training as a means to create art. People who feel compelled to explore artistic expression but who don’t have the natural talent or hard-earned skills are now in a position to share their visions. Generative A.I. has the power to raise marginalized voices and engender widespread empathy. But is it bad for people who have trained extensively as artists? That seems likely in scenarios where technology directly threatens livelihoods or intellectual property, and there are already artists litigating around these points.
Perhaps the root problem is that making money as an artist has always been hard as hell. Most of the artists I know have a stable of side gigs that let them make both art and a living. The lucky ones are able to combine their artistic abilities with something marketable, like graphic design or cosmetology. Getting paid to write short stories, record reviews and band interviews for SLAP felt like a huge break twenty years ago. It was one of many freelance writing gigs I maintained, and I still needed to work odd jobs on the side. From my experience, it’s only gotten harder to find paid work as a writer.
Right now, I feel like I can outperform LLMs. As a trained writer, I don’t have a lot of use for ChatGPT. My favorite thing to do with generative A.I. so far has been feeding Stable Diffusion 2 odd prompts in hopes of getting unsettling results (“Alf eating pasta” continues to deliver).
Rebecca Evanhoe, co-author of a popular book in experience design circles, Conversations With Things: UX Design for Chat and Voice, shared a similar viewpoint on large language models: “I think it’s going to be real hard to get that pen out of my hand. Writing is so essential to my thought process, I can’t really tell you what I think [about something] until I’ve written about it… If I can’t write, I’m not even doing the thinking… But I am a person who has a gift in writing, and I have a lot of training in writing. Writing is very painful for lots of people.”
Writing can be excruciating, and I know it’s only a matter of time before generative A.I. will surpass my writing abilities by certain metrics, most notably speed. In the commercial marketplace, my value as a writer might shrink even further, but I agree with Scott that it will be exceedingly difficult for a machine to replicate my experience.
Our innate human ability to synthesize our experiences for creative problem solving will be hard for technology to encroach upon. That potentially matters less if we’re all adrift in an endless, churning sea of quasi-personalized, auto-generated content that has immense technical prowess but no pulse. That’s why it seems critical for artists and creatives to find ways to use these tools that will augment their abilities, not replace them.
Whether it’s a busy artist using ChatGPT to craft email updates for their mailing list or a designer using generative tools to produce a wider variety of work for clients, technology used responsibly can provide tangible gains in terms of income while also shaping the form these powerful technologies assume. Machines are destined to become more intertwined with our lives. With generative A.I. readily available, there’s an opportunity right now for people to decide what that will look like.
Recently, I was part of a conversation about technology and the nature of art with Laura Herman, a researcher at the University of Oxford. We discussed many aspects of generative A.I. and creativity, but her thoughts on intentionality have been echoing in my head.
“Naturally, when I look at a piece of artwork… I think about ‘What was the artist trying to convey?’ and ‘Why did they do this?’” Herman said. “It can be just physical intentionality—it could be conceptual intentionality. Now [these] new technologies [are] shifting the visibility of intentionality and shifting how that intentionality is brought to bear on the ultimate artwork—there’s so much to think about in that space.”
In a world oversaturated with generative content, it’s possible that people will be more drawn to art that’s being created by humans. On paper, machines can play jazz, but does anyone want to watch one take an extended solo over Coltrane’s “Giant Steps”? What is the quality of the intentionality in that kind of content? Like most of the things we call art, the very heart of jazz is so tied to human experience and discipline that a machine’s best attempt would be eternally relegated to the status of imitation.
Maybe the question isn’t whether the tool will usurp the artist but whether we’ll let it.