
OpenAI CEO Sam Altman believes we are already past “the A.I. event horizon,” he said in a blog post published yesterday (June 11), arguing that A.I. development is quietly reshaping civilization even if the shift still feels subtle. “The takeoff has started. Humanity is close to building digital superintelligence, and at least so far, it’s much less strange than it seems it should be,” he wrote.
According to Altman, 2025 marks a pivotal shift in A.I. capabilities, particularly in coding and complex reasoning. By next year, he expects A.I. systems to begin generating original scientific insights, with autonomous robots functioning effectively in the physical world by 2027.
“In the 2030s, intelligence and energy are going to become wildly abundant. These two have long been the fundamental limiters on human progress,” he wrote. “With abundant intelligence and energy (and good governance), we can theoretically have anything else.”
One key driver of this shift is A.I. infrastructure, such as computing power, servers and data center storage. As that infrastructure becomes more automated and easier to deploy, Altman argues, the cost of intelligence could eventually converge toward the cost of electricity. Cheap, abundant intelligence would in turn supercharge scientific discovery, enable infrastructure to build itself and unlock new frontiers in health care, materials science and space exploration. “If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different,” Altman wrote.
Altman also addressed a common question: how much energy does a ChatGPT query use? He said a typical query consumes just 0.34 watt-hours of energy and 0.000085 gallons of water, roughly the energy an oven draws in a little over a second and about one-fifteenth of a teaspoon of water.
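Those equivalences are easy to sanity-check with a quick back-of-the-envelope conversion. The sketch below assumes a roughly 1,200-watt oven element and the standard 4.93 mL U.S. teaspoon; those two reference values are illustrative assumptions, not figures from Altman's post.

```python
# Back-of-the-envelope check of Altman's per-query figures.
# Assumed reference values (not from the post): a ~1,200 W oven
# element and the ~4.93 mL U.S. teaspoon.

QUERY_ENERGY_WH = 0.34      # watt-hours per query (Altman's figure)
QUERY_WATER_GAL = 0.000085  # gallons of water per query (Altman's figure)

OVEN_WATTS = 1200           # assumed oven power draw
TEASPOON_ML = 4.93          # milliliters in one U.S. teaspoon
GALLON_ML = 3785.41         # milliliters in one U.S. gallon

# Energy: how long must the oven run to use 0.34 Wh?
query_joules = QUERY_ENERGY_WH * 3600     # 1 Wh = 3,600 J
oven_seconds = query_joules / OVEN_WATTS  # joules / watts = seconds
print(f"Oven time to match one query: {oven_seconds:.2f} s")  # ~1.02 s

# Water: what fraction of a teaspoon is 0.000085 gallons?
query_water_ml = QUERY_WATER_GAL * GALLON_ML
teaspoon_fraction = query_water_ml / TEASPOON_ML
print(f"Fraction of a teaspoon: 1/{1 / teaspoon_fraction:.0f}")  # ~1/15
```

Under those assumptions, the numbers hold up: 0.34 Wh works out to about one second of oven time and roughly a fifteenth of a teaspoon of water per query.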
While some fear that A.I. could render human labor obsolete, Altman believes that by 2030, A.I. will amplify human creativity and productivity rather than replace workers. “In some big sense, ChatGPT is already more powerful than any human who has ever lived. A small new capability can create a hugely positive impact,” he wrote.
However, Altman also acknowledged the dangers. He noted that alignment, the challenge of ensuring A.I. systems understand and follow long-term human values, remains unsolved, and he pointed to social media algorithms as an example of misaligned A.I.: tools optimized for engagement in ways that often produce harmful societal outcomes.
The real threat, Altman argued, is not that A.I. will erode human purpose, but that society might fail to evolve the systems and policies needed for people to thrive alongside increasingly intelligent machines. He urged global leaders to begin a serious conversation about the values and boundaries that should guide A.I. development before the technology becomes too deeply entrenched to redirect.
“The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better,” he wrote.