‘Godmother of A.I.’ Fei-Fei Li On Why You Shouldn’t Trust Any A.I. Company

“It’s hard to rank the A.I. players, who you trust the most, or least.”

Fei-Fei Li speaks onstage during The AI Optimist Club at WIRED Celebrates 30th Anniversary With LiveWIRED at The Midway SF on Dec. 5, 2023 in San Francisco. Kimberly White/Getty Images for WIRED

Fei-Fei Li, the Stanford computer scientist regarded as the “godmother of A.I.” in Silicon Valley, doesn’t believe one should trust any single A.I. company as the technology advances rapidly and raises complicated questions around its ethical and fair use. “It’s hard to rank the A.I. players, who you trust the most, or least. My trust is in the collective solutions we create together. The founding fathers [of the U.S.] did not put trust in a single person, and so my hope is not in a single A.I., it is in people,” Li said during an interview at the Bloomberg Technology Summit in San Francisco last week.


Li currently co-directs Stanford University’s Human-Centered AI Institute and advises the Biden Administration on tech policies. When asked about her thoughts on rising concerns about the potential harm of A.I., she said much of the fear around A.I. “belongs to the world of sci-fi.”

“There’s nothing wrong with pondering all this,” she said. “But compared to the other, actual social risks—whether it’s the disruption of disinformation and misinformation to our democratic process, or, you know, the kind of labor market shift or privacy issues—these are true social risks that we have to face because they impact real people’s real lives.”

“Our collective will, our responsibility, is to create trustworthy A.I., and there are many people in the industry working on that,” she added. “I worry about real, catastrophic social risks; that is more important. We need to be cognizant of labor market shifts and social risks.”

Li received the “godmother of A.I.” moniker for her early work at Princeton University pioneering a massive database called ImageNet that laid the foundation for modern A.I. systems. Li remembered 2007 as “an inflection point in the business intelligence industry,” when the role of data would change dramatically. “We couldn’t have dreamed at that point that the big new world of new networked GPTs would develop, or that we would be talking to President Biden and Congress about how they should use that power,” she said.

“Now A.I. is doing really good work, making scientific discoveries, finding new materials, and medical breakthroughs,” Li said. “The real issue now, is how to develop and deploy the technology thoughtfully, whether it is in the classroom or industry.”

Asked whether she thought A.I. models are running out of data after more than a decade of meticulous ingestion of libraries’ worth of information, Li called that notion a very narrow viewpoint. “Even large language models, customized models, are gathering huge amounts of data from sources that are really good. The health care industry is not running out of data, nor are industries like education, so no, I don’t think we are running out of data.”

Asked whether she thought some of the A.I. training data could be untrustworthy, Li noted that the problem is not in the data itself and that “even with human-generated data, that can take us down a dangerous path.”

She noted that Meta (META)’s Open Source AI campaign is a promising model of where the industry could go. Headed by Mark Zuckerberg, Meta’s approach focuses on creating open-source A.I. via a massive, ever-growing well of A.I. training data from public posts and comments on Facebook and Instagram, with a strong emphasis on Reels.

“We need what I think is a more entrepreneurial exchange of information, that is very important,” Li said. “We should talk about imagining how to communicate to and with tech, biotech, teachers, doctors, farmers. We don’t talk enough, and really, there’s just a few people out there talking gloom and doom. But the reality is there are people out there thinking in the most imaginative ways, trying to create new use cases for A.I.”
