
As it turned out, today's snow storm in the Northeast arrived a few hours later than we had been warned.
Weather forecast technology has come a long way. A three-day forecast today is as accurate as a one-day forecast was 10 years ago, thanks to supercomputers with the massive computing power to consolidate trillions of data points on atmospheric conditions into simulations.
And yet, pinpointing where and when a snow storm will hit is still extremely challenging.
One straightforward reason is simply the number of factors at play.
“One of the great challenges to predict today’s storm in the northeast is the type of precipitation—will it be rain or snow or a little bit of both? These fine-scale details can be very difficult to track from one hour to the next because there are so many variables that can influence these,” said Greg Carbin, chief of forecast operations at the National Weather Service, the federal agency that provides weather forecasts to major TV networks and the other media outlets we get our weather information from.
The distance between a light drizzle and a blizzard can be as little as 30 miles, meaning there can be a shower on Staten Island and barely any rain in the Bronx at the same time.
A more technical reason, as a 2016 Economist article pointed out, is that conflicting prediction models can produce vastly disparate results.
For example, before Hurricane Sandy hit the East Coast in 2012, most American weather models predicted the storm would bypass the mainland and head out into the Atlantic Ocean, while European models correctly identified the storm's track.
Weather forecasting starts with raw data describing atmospheric conditions collected by a number of sources, ranging from satellites to on-the-ground weather stations. This information, in the form of trillions of data points, is then processed through models that generate the most probable simulations of the weather at a future time.
As a general rule, the more data computers can crunch (and the faster they can do so), the more accurate the forecast results will be.
“A good weather forecast requires two parts: an accurate initial state of the atmosphere and a good model with sufficient resolution. But, in reality, an accurate three-dimensional initial state of the atmosphere is exceptionally challenging. That creates uncertainties that get amplified as the atmospheric simulation evolves in time,” Xi Chen, a researcher in atmospheric and oceanic sciences at Princeton University, told Observer.
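Chen is describing the hallmark of a chaotic system: tiny errors in the starting conditions compound until the simulation and the real atmosphere part ways. A toy illustration of the effect in Python, using the textbook Lorenz-63 convection equations rather than anything the National Weather Service actually runs, tracks two simulations whose starting points differ by just one part in a million:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system by one Euler step."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

# Two "atmospheres" whose initial states differ by one part in a million,
# standing in for an imperfect measurement of the real initial state.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])

for step in range(3001):
    if step % 600 == 0:
        print(f"t = {step * 0.01:4.0f}   separation = {np.linalg.norm(a - b):.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

The separation grows roughly exponentially until the two trajectories are no more alike than two random states of the system, which is why even a perfect model cannot outrun errors in its initial conditions.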
Chen’s lab produced a model called FV3, which can put tens of thousands of processors to work simultaneously on atmospheric simulations. The National Weather Service adopted the model in 2016 as part of an upgrade prompted by the flawed Hurricane Sandy forecast, and it is currently being implemented.
To observe conditions and make predictions, the National Weather Service's existing model divides the Earth into a grid of blocks 13 kilometers on a side.
“However, many crucial weather phenomena, such as precipitation, are largely determined by cloud processes, which could be of much smaller scales,” Chen said. “Therefore, scientists rely on a technique called ‘physical parameterization’ to estimate such processes, which inevitably introduces uncertainties. Our job is to minimize the uncertainties by both improved theories and hopefully more available computing resources.”
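To make parameterization concrete, here is a deliberately crude sketch; the numbers and the assumed humidity distribution are invented for illustration and bear no relation to FV3's actual cloud schemes. A nonlinear sub-grid process, in this case condensation that only occurs above saturation, vanishes entirely if the physics is applied to the 13 km grid-cell average, so a parameterization instead integrates the physics over an assumed distribution of sub-grid states:

```python
import numpy as np

rng = np.random.default_rng(0)

def condense(rh):
    """Toy cloud microphysics: condensation occurs only where relative
    humidity exceeds saturation (a nonlinear, sub-grid-scale process)."""
    return np.maximum(rh - 1.0, 0.0)

# "Truth": a 13 km grid cell resolved into 1 km sub-columns, with
# humidity varying around the grid-cell mean.
mean_rh = 0.95
subgrid_rh = mean_rh + rng.normal(0.0, 0.08, size=13)
truth = condense(subgrid_rh).mean()

# What a 13 km model sees: only the grid-cell mean. Applying the
# physics to the mean misses all sub-grid condensation entirely.
naive = condense(np.array([mean_rh])).mean()

# A parameterization: assume sub-grid humidity follows some
# distribution and integrate the condensation over it.
assumed_sigma = 0.08
samples = mean_rh + rng.normal(0.0, assumed_sigma, size=100_000)
parameterized = condense(samples).mean()

print(f"sub-grid 'truth':       {truth:.4f}")
print(f"mean-only estimate:     {naive:.4f}")
print(f"parameterized estimate: {parameterized:.4f}")
```

The parameterized estimate is far better than using the grid mean alone, but it is only as good as the assumed sub-grid distribution, which is precisely the kind of uncertainty Chen describes working to minimize.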
“Improvements in forecast accuracy have been pretty dramatic in past decades. The global models have gotten pretty good at indicating potential significant weather five to seven days out. For example, the snow storm we are dealing with today was predicted a week ago, even though details still need to be worked out,” Carbin told Observer.
“It’s telling the future, after all,” Chen added.