A.I.-generated content is increasingly making its way online. A study earlier this year found that more than 40 percent of posts on the publishing platform Medium were likely created with A.I., while an analysis of newly created Wikipedia pages in August flagged 5 percent as containing A.I.-generated content. The new technology is also widely used by some of the most prominent authors on Substack, the popular newsletter platform with more than 35 million subscribers.
Of the 100 top Substack authors, 10 percent use some form of detectable A.I., according to a recent report from the A.I. detection software GPTZero, as first reported by Wired. GPTZero emerged after the release of OpenAI’s ChatGPT as a tool to identify text created by large language models (LLMs). It claims to be 99 percent accurate when differentiating between A.I.-generated and human-written text, and 96 percent accurate at detecting when writing contains text produced by both A.I. and humans.
Founded in 2017, Substack has over the years attracted a bevy of high-profile writers like the journalist Glenn Greenwald and the historian Heather Cox Richardson. GPTZero took a look at the recent content of the platform’s 100 most popular newsletters by pulling the last 25 to 30 posts from each feed and running them through its detection model. In the case of paywalled newsletters, GPTZero paid for subscriptions when possible.
To fall into the 90 percent of Substack newsletters identified as human-written, authors didn’t even have to be completely devoid of A.I.—GPTZero still doles out its “Certified Human” badge to writers with one to two A.I.-generated posts. Meanwhile, 10 percent of writers were flagged as using some form of detectable A.I., while 7 percent of writers were found to have used A.I. significantly in more than 1 in every 10 posts, according to GPTZero. Nearly all the Substacks in this latter group boast six-figure subscriber numbers and focus on topics like sports, financial advice and business.
How are Substack authors using A.I.?
One of the seven A.I.-heavy Substacks is a soccer-focused newsletter written by David Skilling, who also serves as the CEO of the sports agency Freedom Sport. “I see A.I. as a support tool rather than a creator,” Skilling told Observer, adding that A.I. tools have taken on the role of an assistant. Skilling, whose newsletter has 623,000 subscribers, leans on A.I. to help with gathering research for stories and editing copy. He likens his use of the technology to the transition photographers made from developing film in darkrooms to now using “digital tools to streamline editing.”
Josh Belanger, who chronicles the stock market for 352,000 subscribers through his Belanger Trading newsletter, draws from LLMs like ChatGPT, Claude and even Elon Musk’s Grok to speed up research and inject color and personality into his writing. “It helps with just getting a lot more stuff done faster,” he told Observer, adding that he began using A.I. significantly in the past six to eight months.
Other Substack authors identified in GPTZero’s report as using A.I. claim that the technology assists with their content rather than creating it. Subham Panda, one of the writers behind the Spotlight by Xartup Substack, told Wired he uses A.I. to create images and aggregate information, while Max Avery, a writer for the newsletter Strategic Wealth Briefing With Jake Claver, said the technology comes in handy for editing rough drafts.
Substack doesn’t prohibit A.I.-generated content, although the platform has mechanisms in place to detect spam behaviors, such as duplicated content and bot activity, that often involve A.I. “We don’t proactively monitor or remove content solely based on its A.I. origins, as there are numerous valid, constructive applications for assisted content creation,” the company said in a statement to Observer.
GPTZero, which doesn’t currently measure factors like the accuracy or quality of writing, maintains that its A.I. detection capabilities enhance transparency around A.I.-generated text. The purpose of GPTZero’s report “isn’t to pass moral condemnation on writers who use A.I.,” said the company in its report, but instead to “raise awareness about the prevalence of A.I.-generated content, especially as the amount of A.I. content grows unchecked.”