A.I. Safety Institute Defends the Future of U.S. Tech Policy as Trump’s New Era Looms

As Trump vows to dismantle key regulations, the U.S. A.I. Safety Institute argues that true innovation needs oversight as A.I. risks grow.

Elizabeth Kelly claims regulatory safeguards are needed to fuel advances in A.I. Photograph by Michael O’Shea for Fortune

A.I. regulators are bracing for a new chapter under President-elect Donald Trump, who’s already eyeing the shredder for policies he deems obstacles to tech progress. One target on the chopping block: the U.S. A.I. Safety Institute, founded last year under the Biden-Harris Administration to assess and curb risks in advanced A.I. systems. Despite the institute’s focus on safeguards, its director Elizabeth Kelly insists that regulation fuels, rather than stymies, A.I. advancements. “We see it as part and parcel of enabling innovation,” she asserted at the Fortune Global Forum yesterday (Nov. 11), doubling down on her view that safety and progress can coexist—if regulators know what they’re doing.


Housed within the Department of Commerce, the safety institute convenes an unlikely band of computer scientists, academics and civil society advocates to craft fresh guidelines aimed at managing A.I.’s slippery slope. Established as a cornerstone of Biden’s 2023 A.I. executive order, the institute’s mandate includes enforcing transparency standards and crafting testing protocols for A.I. models—a setup that, according to the GOP’s 2024 platform, “hinders A.I. innovation” and is, accordingly, marked for repeal.

Sensing the shift, major A.I. developers aren’t waiting for the axe to fall. Last month, industry giants like OpenAI, Google (GOOGL), Microsoft (MSFT) and Meta (META) signed a letter urging Congress to authorize the institute permanently, touting its work as “essential to advancing U.S. A.I. innovation, leadership and national security.” Translation: the tech titans want the safety guardrails in place before innovation speeds past them.

Kelly remains adamant that the institute doesn’t just keep A.I. development in check—it accelerates it. Beyond enhanced R&D, talent acquisition and boosted computing resources, the institute’s mission is, as Kelly put it, about building “trust—which enables adoption, which also enables innovation.”

Earlier this year, the institute inked a deal with OpenAI and Anthropic for early access to their newest models, putting Anthropic’s Claude 3.5 Sonnet through a pre-deployment trial run. Kelly hailed this as a “classic example” of leveraging expertise from places like Berkeley and MIT to assess risks and national security implications—a not-so-subtle reminder that this team isn’t exactly the D-List.

The institute’s ambition is straightforward: enable A.I.’s vast potential responsibly. From drug discovery to carbon capture, A.I.’s promise looms large but, as Kelly cautions, so do the risks. “As we see these models become more powerful and more capable, we get that much closer to that future,” she explained. In her eyes, it’s a matter of ensuring the brakes are as sharp as the engine before A.I. hits the fast lane.
