
US President Joe Biden signs into law H.R. 5376, the Inflation Reduction Act of 2022 (climate change and health care bill), in the State Dining Room of the White House on Tuesday, August 16, 2022. (Photo by Demetrius Freeman/The Washington Post via Getty Images)
This article is adapted from The Middle Out: The Rise of Progressive Economics, published this month by Doubleday.
When the Inflation Reduction Act was signed into law last month, even many longtime observers were caught by surprise. Since the beginning of the Biden Administration, it had appeared that no meaningful legislation could pass the Senate without the support of West Virginia Senator Joe Manchin, who has historically shown little appetite for the kind of climate-change mitigation that makes up a large portion of the bill. But because Larry Summers quieted his inflation jitters, and because the bill also contained some West Virginia goodies, he came around.
But maybe just as surprising was how ambitious the bill’s economic provisions are, mostly aimed at making life easier for the middle class and below. It promises to create union jobs building electric vehicles and other clean-energy technology; it promises to enforce prevailing wages; it promises a fairer tax code and a crackdown on wealthy tax dodgers; and it promises to lower health care costs for millions of Americans.
You’d have to go back at least to the 1960s to find a single piece of economic legislation this sweeping and explicitly pro-worker. But while that may seem like a new development politically, Biden’s economic policies reflect a sea change in economic thinking that has been decades in the making. To put it bluntly, the often arcane world of economics has at last caught up with reality, and is now focused on inequality in ways that were inconceivable a few years ago. And those changes in the economics profession are laying the groundwork for economic policy-making in Washington.
Consider that in 1993, the economists David Card and Alan B. Krueger published a pioneering study of the minimum wage. Two years after that, they published a book expanding on the paper called Myth and Measurement: The New Economics of the Minimum Wage. They were doing an event at the Brookings Institution, the famous Washington think tank, and were laying out their basic argument that they found no evidence to support the idea that a higher minimum wage led to reductions in employment. A hand shot up from an economist in the audience who objected to all this talk about evidence, saying, “Theory is evidence, too.”
Card and Krueger’s work presented a challenge to their field precisely because it was based on evidence and data. They looked at what actually happened in low-wage workplaces around the New Jersey–Pennsylvania border when New Jersey raised its minimum wage, and they found that there was in fact no resulting reduction in low-wage employment in New Jersey relative to Pennsylvania. In the argot of economics, what Card and Krueger conducted was called a “natural experiment.” Their method was a challenge to the way most economics was done at the time, which was that most research was based on theoretical modeling rather than real-world evidence. And their conclusion flew in the face of the prevailing theory, which had held for a century or more that when a price (here, a wage) is raised, demand (for workers) falls.
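The logic of that natural experiment can be sketched as a simple difference-in-differences calculation. The employment figures below are illustrative stand-ins, not Card and Krueger's actual data:

```python
# A minimal sketch of the difference-in-differences logic behind a
# "natural experiment" like Card and Krueger's. The employment numbers
# here are invented for illustration only.

# Average employment per fast-food restaurant, before and after
# New Jersey raised its minimum wage (hypothetical figures).
nj_before, nj_after = 20.4, 21.0   # "treatment" state (wage raised)
pa_before, pa_after = 23.3, 21.2   # "comparison" state (no change)

# Each state's change over time. Using Pennsylvania as the baseline
# nets out whatever was happening to the regional economy anyway.
nj_change = nj_after - nj_before
pa_change = pa_after - pa_before

# The difference-in-differences estimate: how much more (or less)
# employment changed in New Jersey than in Pennsylvania. A value at
# or above zero cuts against the prediction that a higher minimum
# wage must reduce employment.
did_estimate = nj_change - pa_change
print(round(did_estimate, 1))  # → 2.7
```

The key design choice is the comparison group: the Pennsylvania side of the border absorbs the common shocks, so whatever difference remains can plausibly be attributed to the wage increase itself.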
So it was highly controversial, and a lot of people, particularly Republican politicians, still don’t accept it. If you’re a regular human being, you may think it kind of obvious that real-world evidence and data should be at the heart of any kind of analysis in either the hard or the social sciences. But economics, as Paul Krugman wrote in a 2009 essay, came in the latter part of the twentieth century to rest more on theories and models than on evidence. The models were alluring to many people, Krugman writes, because they were elegant and based on increasingly complex mathematical calculations. They won their creators Nobels. And they were reassuring because they tended to proceed from the long-held neoclassical assumption that actors behave rationally—that is, that people never make poor decisions or act against their own interests—and that the system was insulated against undue risk.
At the time, there was some justification for this faith in theoretical models, explained Jesse Rothstein of UC Berkeley, home to one of the leading economics departments in the United States challenging traditional thinking. “Prior to the ’90s, in general, if the theory conflicted with the empirics, you would ignore the empirics and focus on what the theory said,” Rothstein told me. “And that was probably the right thing to do because the empirics weren’t very good. We didn’t have much data. We didn’t have very good methods of disentangling all the different causal factors. And so you were probably more right to do that than not.”
But in the 1990s, something changed, Rothstein said, something that has only grown more pronounced since then: “We have better data. We have better computers. We have better empirical methods. Economics kind of became known as a field that really took causal inference very seriously.”

This idea of causality is key, as Heather Boushey wrote in an essay in the journal Democracy in 2019 in which she explained for lay readers the sea changes that had taken place in economics. The techniques pioneered by Card and Krueger and quickly adopted by others “allowed economists to estimate causality—that is, to show that one thing caused another, rather than simply being able to say that two things seem to go together or move in tandem.” Causality meant that, based on all this newly available data, economists could look for explanations for problems like inequality or wage stagnation or global poverty in a way that wasn’t possible in previous eras. It was mostly driven by access to new data, and it was a profound change.
As Boushey wrote, “Whereas in the 1960s about half of the papers in the top three economic journals were theoretical, about four in five now rely on empirical analysis—and of those, over a third use the researcher’s own data set, while nearly one-in-ten are based on an experiment.”
The most prominent example of work in this new empirical realm that has had a huge real-world impact is that of Thomas Piketty, and of his sometime collaborators Emmanuel Saez and Gabriel Zucman, as well as the inequality research pioneer Tony Atkinson. Piketty’s most famous work, the book Capital in the Twenty-First Century, sold millions of copies worldwide and was made into a movie. He argued that the return on capital (profits, dividends, interest, rents, and so on) is greater than the growth rate of national income (total economic output), which has the effect over time of wildly concentrating wealth in the top 1 percent, the top 0.1 percent, and even the top 0.01 percent. His conclusions were driven by mounds of income-tax data from the United States covering decades, data that showed how the rich were running away from the rest, and how the superrich were running away from even the rich.
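The arithmetic behind that argument is just compounding. A toy model makes the point; the rates below are illustrative assumptions, not Piketty's estimates:

```python
# A toy compounding model of the r > g argument: if wealth earns a
# return r that exceeds the growth rate g of national income, the
# wealth-to-income ratio rises without bound, and whoever holds the
# wealth pulls away from everyone else. Rates are illustrative only.

r = 0.05   # assumed annual return on capital
g = 0.02   # assumed annual growth of national income

wealth, income = 100.0, 100.0
ratios = []
for year in range(50):
    wealth *= 1 + r
    income *= 1 + g
    ratios.append(wealth / income)

# The ratio starts just above 1 and climbs steadily over 50 years:
# capital grows faster than the economy as a whole.
print(ratios[0], ratios[-1])
```

Nothing here depends on the particular numbers; any persistent gap between r and g produces the same runaway dynamic, only faster or slower.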
The book’s impact is difficult to overstate. It moved economic inequality to the white-hot center of economic debates and the political conversation. Writing together, Piketty, Saez, and Zucman have looked at tax and other data all the way back to 1913 to compare pretax and post-tax growth rates for different segments of the U.S. population (the pretax/post-tax distinction is important because it tells us whether the government’s policies on income tax and other matters help to shift wealth in one direction or the other). They found that since 1980, eight percentage points of national income shifted from the bottom 50 percent of the population to the top 1 percent. They also found that “government redistribution has offset only a small fraction of the increase in pretax inequality.” In other words, tax rates are not keeping up with the shift in wealth toward the rich.
Another high-profile example of the impact of data analysis comes from the realm of what’s called development economics—the study of global poverty. Here, Esther Duflo, Abhijit Banerjee, and Michael Kremer are among the best-known practitioners. They introduced the idea of randomized controlled trials (RCTs) into the study of various aspects of global poverty—essentially, a way to assign people or whole villages to either a “treatment group” or a “comparison group” at random to try to determine the impacts of particular interventions. This practice earned development economists the moniker the Randomistas. The trio won the Nobel Prize in 2019 for their work. Another development economist wrote upon that announcement, “Over the last fifteen years, Abhijit, Esther, and Michael’s work has truly revolutionized the field of development economics by changing our view of what we know—and what we can know—about when and why some policy interventions work and others do not.” RCTs have come under some strong criticisms: that their results can’t be generalized beyond the setting studied, and that by focusing so intently on small questions, their adherents ignore important big ones. But RCTs have helped development efforts figure out how best to improve education or health-care outcomes, for example, in many poor parts of the world.
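The core of the RCT idea fits in a few lines: assign units at random, then compare average outcomes. The villages and test scores below are simulated for illustration only:

```python
# A minimal sketch of the randomized-controlled-trial logic the
# "Randomistas" brought to development economics. All villages and
# outcomes here are simulated; the "+2 point" effect is an assumption
# baked into the simulation, not a real result.
import random

random.seed(0)
villages = list(range(200))
random.shuffle(villages)
treatment = set(villages[:100])   # half the villages get the intervention

def outcome(village):
    # Simulated test score: a common baseline plus noise, plus a
    # small boost if the village was treated.
    base = 50 + random.gauss(0, 5)
    return base + (2 if village in treatment else 0)

scores = {v: outcome(v) for v in range(200)}
treated_mean = sum(scores[v] for v in treatment) / 100
control_mean = sum(scores[v] for v in range(200) if v not in treatment) / 100

# Because assignment was random, the two groups are alike on average
# in every other respect, so the difference in means is an unbiased
# estimate of the intervention's effect (it should land near +2).
print(round(treated_mean - control_mean, 1))
```

Randomization is what licenses the causal claim: without it, villages that received textbooks might differ systematically from those that didn't, and the comparison would be confounded.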
The IRA will not, of course, end inequality on its own or any time soon. But the fact that it has passed makes it very likely that there will be more bills like it that reflect the new economic thinking.