OpenAI, the artificial intelligence research lab founded by Elon Musk and former Y Combinator president Sam Altman with the stated aim of developing “friendly AI” for humanity, has released a more capable version of the controversial language software it called “too dangerous to make public” only months ago.
In February, OpenAI announced that it had built an AI text generator, named GPT-2, so fluent at producing English text that it could write a passable news article from a simple prompt. But worried that the tool could be used for malicious purposes, such as mass-producing fake news and propaganda, OpenAI published a research paper describing the work but withheld the full model and the dataset used to train it.
Instead, the research lab released the code for a smaller version of the language model. In May, OpenAI introduced a new version about three times more powerful than the first release. And last week, it unveiled an even more advanced version, about six times more powerful than the February one.
OpenAI said it is partnering with a group of other institutions, including Cornell University and the Middlebury Institute of International Studies at Monterey, to ensure GPT-2 is used ethically and without bias.
But it may not ultimately be up to OpenAI. This week, Wired magazine reported that two young computer scientists from Brown University—Aaron Gokaslan, 23, and Vanya Cohen, 24—had published what they called a recreation of OpenAI’s (shelved) original GPT-2 software on the internet for anyone to download.
The pair said their goal was to prove that creating this kind of software doesn’t require an expensive lab like OpenAI (backed by $2 billion in endowment and corporate funding). They also don’t believe such software poses an imminent danger to society.
“This allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses,” Cohen told Wired. “I’ve gotten scores of messages, and most of them have been like, ‘Way to go.'”
Similar to OpenAI’s process, Gokaslan and Cohen trained their language software using millions of webpages of text written by humans (by harvesting links shared on Reddit) and an estimated $50,000 worth of free cloud computing from Google.
In its announcement last week, OpenAI said that it was aware of at least five GPT-2 replications. It also said the smaller version it has released is roughly as good at creating fake news articles as the withheld full model.
I experimented with the tool by typing the prompt, “The U.S.-China trade war will help Trump win the 2020 re-election.” Here’s what I got from the robot:
The U.S.-China trade war will help Trump win the 2020 re-election. If Clinton prevails in the primary and the GOP holds the White House, it will be the end of the American dream for middle class Americans. And it will help Trump win re-election if he is the nominee. For now, he is the best bet to win.
OpenAI was originally founded by Musk, Altman and other investors in 2015 as a nonprofit institution to share the latest developments in AI with the world.
Musk left OpenAI’s board in February 2018, citing potential future conflicts of interest as Tesla (TSLA) pursues its own AI research for the company’s self-driving car effort.
In March of this year, OpenAI restructured itself from a nonprofit into a for-profit company called OpenAI LP in a bid to attract more investment. Soon after, Microsoft invested $1 billion.