As the A.I. models developed by tech companies become larger, faster and more ambitious in their capabilities, they require more and more high-quality data to be trained on. Simultaneously, however, websites are beginning to crack down on the use of their text, images and videos in training A.I.—a move that has restricted large swathes of content from datasets in what constitutes an “emerging crisis in data consent,” according to a recent study published by the Data Provenance Initiative, a group led by researchers at the Massachusetts Institute of Technology (MIT).
The study found that in the past year alone, a “rapid crescendo of data restrictions from web sources,” set off by concerns over the ethical and legal challenges of A.I.’s use of public data, has walled off much of the web from both commercial and academic A.I. institutions. Between April 2023 and April 2024, 5 percent of all data and 25 percent of data from the highest-quality sources were restricted, the researchers found by examining some 14,000 web domains used to assemble three major datasets known as C4, RefinedWeb and Dolma.
Major A.I. companies typically collect data through automated bots known as web crawlers, which explore the internet and record content. In the case of the C4 dataset, 45 percent of the data has become restricted through website protocols that bar web crawlers from accessing content. These restrictions affect different companies’ crawlers unevenly and typically advantage “less widely known A.I. developers,” according to the study.
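For context, the standard mechanism behind such restrictions is the Robots Exclusion Protocol: a plain-text robots.txt file at a site’s root that tells specific crawlers which paths they may fetch. Below is a minimal sketch in Python, using the standard library’s urllib.robotparser, of how a crawler might check whether it is allowed to fetch a page. “GPTBot” is OpenAI’s documented crawler user-agent; the site and page URLs are hypothetical placeholders.

```python
from urllib import robotparser

# Hypothetical site; real sites serve robots.txt at this root path.
ROBOTS_URL = "https://example.com/robots.txt"
PAGE_URL = "https://example.com/articles/some-page"

parser = robotparser.RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetch and parse the site's robots.txt

# "GPTBot" is OpenAI's published crawler name; "*" matches any crawler.
for agent in ("GPTBot", "*"):
    allowed = parser.can_fetch(agent, PAGE_URL)
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```

A robots.txt entry reading `User-agent: GPTBot` followed by `Disallow: /` is precisely the kind of blanket restriction the study tallied across web domains.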
OpenAI’s crawlers, for example, were blocked from nearly 26 percent of high-quality data sources, while Google’s (GOOGL) crawler was disallowed from around 10 percent and Meta’s (META) from 4 percent.
If such constraints weren’t enough, the supply of public data for training A.I. models is expected to be exhausted soon. At the current pace of development, A.I. companies could run out of data between 2026 and 2032, according to a study released in June by the research group Epoch A.I.
A.I. companies are paying millions to acquire training data
As Big Tech scrambles to find enough data to support its aggressive A.I. goals, some companies are striking deals with content-rich publications to gain access to their archives. OpenAI, for example, has reportedly offered publishers between $1 million and $5 million for such partnerships. The A.I. giant has already entered into deals with publications like The Atlantic, Vox Media, The Associated Press, the Financial Times, Time and News Corp to use their archives for A.I. model training, often offering access to products like ChatGPT in return.
To unlock new data, OpenAI has even considered using Whisper, its speech-recognition tool, to transcribe video and audio from websites like YouTube, a method Google has also reportedly discussed. Other A.I. developers have looked elsewhere: Meta, for instance, has reportedly explored acquiring the publishing house Simon & Schuster to obtain its large catalog of books.
Another possible solution to the A.I. data crisis is synthetic data, a term for data generated by A.I. models rather than by humans. OpenAI’s Sam Altman raised the approach in an interview earlier this year, noting that data from the internet “will run out” eventually. “As long as you can get over the synthetic data event horizon, where the model is smart enough to make good synthetic data, I think it should be all right,” he said.
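As an illustrative sketch only, and not a description of OpenAI’s actual pipeline, synthetic data generation in practice often means prompting a capable model to produce labeled examples that can later train another model. The snippet below uses OpenAI’s Python SDK; the model name and prompt are assumptions chosen for illustration.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical prompt: ask the model to invent labeled training examples.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice, for illustration only
    messages=[{
        "role": "user",
        "content": (
            "Generate five short customer-support questions, each paired "
            "with a one-sentence answer, formatted as JSON lines."
        ),
    }],
)

# The output is machine-generated text, not human-written data.
print(response.choices[0].message.content)
```

Collected at scale, such model-generated examples stand in for the human-written web text that is becoming harder to obtain.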
Some prominent A.I. researchers, however, believe fears of an emerging data crisis are overblown. Fei-Fei Li, a Stanford computer scientist often dubbed the “Godmother of A.I.,” argued that concerns about data limits reflect a “very narrow view” while speaking at the Bloomberg Technology Summit in May.
While constraints may be tightening around internet content, Li noted that a variety of alternative and pertinent data sources have yet to be tapped by A.I. For example, “the health care industry is not running out of data, nor are industries like education, so no, I don’t think we are running out of data,” she said.