How Scientists Use Climate Models To Forecast The Future

Jeff Blaumberg, B.Sc. Economics

Global Supercomputers Crunch Mind-Boggling Numbers

Global Supercomputers Crunch Mind-Boggling Numbers (image credits: flickr)

Climate models simulate the physics, chemistry and biology of the atmosphere, land and oceans in great detail, and require some of the largest supercomputers in the world to generate their climate projections. Think about it this way: a single climate model uses roughly 10 megawatt-hours of energy to simulate a century of climate, which is about the amount of electricity used annually by a US household.

Climate models are complex configurations of computer code constructed by scientists who specialize in a wide array of disciplines, including mathematics, physics, chemistry, biochemistry, hydrology, and many more. These models are composed of component sets of code that represent different parts of the climate system (atmosphere, ocean, land surface, ice, ecosystems, etc.) and are coupled together to simulate Earth’s climate. Climate models have been worked on and improved for decades, and today’s models require some of the fastest supercomputers in the world to run.
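
To make the component-and-coupler design concrete, here is a minimal Python sketch of the idea. The Atmosphere and Ocean classes and their constants are invented stand-ins, nothing like the detailed component codes in a real Earth system model, but the time loop shows how coupled components exchange state each step.

```python
# Minimal sketch of the coupled-component design described above.
# The classes and coefficients are hypothetical illustrations only.

class Atmosphere:
    def __init__(self, temp=288.0):
        self.temp = temp  # global-mean surface air temperature (K)

    def step(self, ocean_temp, forcing):
        # Relax toward the ocean temperature and respond to forcing.
        self.temp += 0.1 * (ocean_temp - self.temp) + forcing

class Ocean:
    def __init__(self, temp=288.0):
        self.temp = temp  # mixed-layer temperature (K)

    def step(self, air_temp):
        # The ocean's large heat capacity makes it respond slowly.
        self.temp += 0.01 * (air_temp - self.temp)

atmos, ocean = Atmosphere(), Ocean()
for year in range(100):                   # simulate a century
    atmos.step(ocean.temp, forcing=0.01)  # components exchange state
    ocean.step(atmos.temp)                # every step ("coupling")
print(f"Century-end air temperature: {atmos.temp:.2f} K")
```

Real models do this exchange across millions of grid cells and dozens of variables, which is where the supercomputing demand comes from.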

Ensemble Modeling Creates Multiple Future Scenarios

Ensemble Modeling Creates Multiple Future Scenarios (image credits: pixabay)

Given the range of natural climate variability and uncertainties regarding future greenhouse gas emission pathways and climate responses, changes projected by one climate model should not be used in isolation. Rather, it is good practice to consider a range of projections from multiple climate models (ensembles) and emission scenarios.

Ensembles of models represent a powerful resource for studying the range of plausible climate responses to a given forcing. Such ensembles can be generated either by collecting results from a range of models from different modelling centres (‘multi-model ensembles’), or by generating multiple model versions within a particular model structure, varying internal model parameters within plausible ranges (‘perturbed physics ensembles’). In a multi-model ensemble, each model has its own realization of the processes at work and its own internal variability around the baseline value; the multi-model mean is commonly taken as the ensemble average.
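
A toy version of this in code: the synthetic “models” below share a warming trend but differ in sensitivity and internal variability (all numbers are invented for illustration), and the multi-model mean and spread are computed exactly as described.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2025, 2101)

# Hypothetical ensemble: 12 models with different warming trends
# (sensitivity) and different internal variability (noise).
n_models = 12
trend = rng.normal(0.03, 0.008, n_models)            # degC/year per model
noise = rng.normal(0, 0.15, (n_models, years.size))  # internal variability
ensemble = trend[:, None] * (years - years[0]) + noise

ens_mean = ensemble.mean(axis=0)  # the multi-model mean
spread = ensemble.std(axis=0)     # inter-model spread (uncertainty range)
print(f"2100 projection: {ens_mean[-1]:.2f} +/- {spread[-1]:.2f} degC")
```

Reporting the mean together with the spread, rather than any single model’s trajectory, is the practice the paragraph above recommends.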

Real-World Data Validation Proves Models Work

Real-World Data Validation Proves Models Work (image credits: rawpixel)

A new evaluation of global climate models used to project Earth’s future global average surface temperature finds that most have been quite accurate. In a study accepted for publication in the journal Geophysical Research Letters, a research team led by Zeke Hausfather of the University of California, Berkeley, conducted a systematic evaluation of the performance of past climate models. The team compared 17 increasingly sophisticated model projections of global average temperature developed between 1970 and 2007, including some originally developed by NASA, with actual changes in global temperature observed through the end of 2017.

“The results of this study of past climate models bolster scientists’ confidence that both they as well as today’s more advanced climate models are skillfully projecting global warming,” said Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies. Hindcasts are a useful tool for modellers to assess the performance of climate models. When models do a good job of representing past changes, it can instill confidence that they will get future changes right as well.
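
The flavor of such a hindcast comparison can be sketched in a few lines. The observed and projected series below are synthetic placeholders, not the actual Hausfather et al. data; the point is the skill metrics, not the numbers.

```python
import numpy as np

# Illustrative skill check in the spirit of a hindcast evaluation:
# compare a model's projected temperature series with observations.
years = np.arange(1970, 2018)
observed = (0.018 * (years - 1970)
            + np.random.default_rng(1).normal(0, 0.1, years.size))
projected = 0.017 * (years - 1970)  # a hypothetical 1970-era projection

rmse = np.sqrt(np.mean((projected - observed) ** 2))  # typical error size
corr = np.corrcoef(projected, observed)[0, 1]         # trend agreement
print(f"RMSE: {rmse:.3f} degC, correlation: {corr:.2f}")
```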

Current Temperature Predictions Are Getting Hotter

Current Temperature Predictions Are Getting Hotter (image credits: pixabay)

The WMO report predicts that the annually averaged global mean near-surface temperature for each year between 2025 and 2029 will be between 1.2°C and 1.9°C higher than the average over the years 1850-1900. Even more concerning, there is an 80% chance that at least one year between 2025 and 2029 will be warmer than the warmest year on record (currently 2024). The report also gives a 70% chance that the five-year average warming for 2025-2029 will exceed 1.5°C. This is up from 47% in last year’s report (for the 2024-2028 period) and up from 32% in the 2023 report for the 2023-2027 period.
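
The 80% figure can be sanity-checked with simple probability arithmetic. Assuming, purely for illustration, that each of the five years independently had roughly a 27% chance of beating the record (that per-year number is our assumption, not the WMO’s), the combined probability comes out near 80%:

```python
# Back-of-the-envelope check on the "at least one record year" figure.
# If each of the five years 2025-2029 independently had a chance p of
# beating the 2024 record, the combined probability is 1 - (1 - p)^5.
p_single_year = 0.27  # illustrative assumption, not a WMO number
p_at_least_one = 1 - (1 - p_single_year) ** 5
print(f"P(at least one record year): {p_at_least_one:.0%}")  # ~79%
```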

There is a 50% chance, the authors reported, that global warming will breach 2 degrees Celsius even if humanity meets current goals of rapidly reducing greenhouse gas emissions to net-zero by the 2050s. A number of previous studies, including the authoritative assessments by the Intergovernmental Panel on Climate Change, have concluded that decarbonization at this pace would likely keep global warming below 2 degrees.

Weather Pattern Predictions Show Regional Changes

Weather Pattern Predictions Show Regional Changes (image credits: unsplash)

Predicted precipitation patterns for May-September 2025-2029, relative to the 1991-2020 baseline, suggest wetter than average conditions in the Sahel, northern Europe, Alaska and northern Siberia, and drier than average conditions for this season over the Amazon. Recent years, apart from 2023, in the South Asian region have been wetter than average and the forecast suggests this will continue for the 2025-2029 period.

The El Niño/Southern Oscillation (ENSO) Diagnostic Discussion released in September 2025 by the Climate Prediction Center (CPC)/NCEP/NWS maintained a “La Niña Watch,” forecasting that a transition from ENSO-neutral to La Niña conditions is likely in the next couple of months, with a 71% chance of La Niña during October–December 2025. Thereafter, La Niña remains favored, although the probability decreases to 54% for the December 2025–February 2026 period.

Cloud Simulation Remains A Major Challenge

Cloud Simulation Remains A Major Challenge (image credits: pixabay)

Warmer global temperatures produce faster overall evaporation rates, resulting in more water vapor in the atmosphere…and more clouds. Different types of clouds at different locations have different effects on climate. Some shade the Earth, cooling the climate. Others enhance the greenhouse effect with their heat-trapping water vapor and droplets. Scientists expect a warmer world to be a cloudier one, but are not yet certain how the increased cloudiness will feed back into the climate system.

Modeling the influence of clouds in the climate system is an area of active scientific research. Moreover, climate models struggle to simulate small-scale processes, such as how raindrops form, which often play an important role in large-scale weather and climate outcomes.
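
A simple energy-balance calculation shows why this uncertainty matters so much. In the sketch below, equilibrium warming is the forcing F divided by the net feedback parameter; the feedback values are illustrative assumptions, not assessed numbers, but they show how modest changes in the cloud feedback swing the projected warming.

```python
# Why cloud uncertainty matters: in a simple energy-balance view,
# equilibrium warming is dT = F / lambda, where lambda is the net
# feedback parameter (W m^-2 K^-1). The cloud-feedback values below
# span an illustrative range, not a published assessment.
F_2xCO2 = 3.7          # radiative forcing from doubled CO2 (W m^-2)
lambda_noncloud = 1.6  # assumed non-cloud feedback parameter

for cloud_feedback in (0.0, 0.3, 0.6):  # weaker to stronger amplification
    lam = lambda_noncloud - cloud_feedback
    print(f"cloud feedback {cloud_feedback:.1f} "
          f"-> equilibrium warming {F_2xCO2 / lam:.1f} degC")
```

In this toy setup, shifting the cloud feedback by just 0.6 W m⁻² K⁻¹ moves the warming from about 2.3°C to 3.7°C, which is why clouds dominate the spread between models.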

Machine Learning Enters Climate Science

Machine Learning Enters Climate Science (image credits: flickr)

Climate scientist Tapio Schneider is delighted that machine learning has taken the drudgery out of his day. Since 2017, machine learning and artificial intelligence (AI) have transformed the way he works. “Machine learning makes this science a lot more fun,” says Schneider, who works at the California Institute of Technology in Pasadena. “It’s vastly faster, more satisfying and you can get better solutions.”

Emulators are simplified imitations of physics-based models that do not directly solve the underlying mathematics, making them computationally cheaper alternatives. They are trained for specific tasks, often those requiring many simulations and/or finer resolutions, such as uncertainty quantification and downscaling. ML can potentially improve traditional statistical emulators by enabling them to better capture non-linear relationships. Google DeepMind’s hybrid model, integrating LSTM and CNN architectures, has shown improvements in wind energy forecasting, demonstrating the strength of combining temporal and spatial learning in data-rich settings. Similarly, the CorrDiff model utilizes deep learning for km-scale atmospheric downscaling, but its reliance on dense datasets limits implementation in regions like Southeast Asia.
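
A minimal emulator sketch follows, assuming a hypothetical expensive_model() as a stand-in for a costly physics-based code: train a cheap regressor on a few hundred simulator runs, then query it at negligible cost, for example across thousands of scenarios in an uncertainty analysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy emulator: learn a cheap statistical stand-in for an expensive
# simulator. expensive_model() is a hypothetical placeholder.
def expensive_model(co2, aerosol):
    return 3.0 * np.log2(co2 / 280.0) - 0.5 * aerosol  # warming (degC)

rng = np.random.default_rng(2)
X = np.column_stack([rng.uniform(280, 1120, 500),   # CO2 (ppm)
                     rng.uniform(0, 2, 500)])       # aerosol index
y = expensive_model(X[:, 0], X[:, 1])               # "simulator runs"

emulator = RandomForestRegressor(n_estimators=100).fit(X, y)
# The trained emulator answers new queries almost instantly.
print(emulator.predict([[560.0, 1.0]]))  # warming at doubled CO2
```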

AI Models Face Unique Climate Challenges

AI Models Face Unique Climate Challenges (image credits: unsplash)

The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models. Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case.

They found that the high amount of natural variability in climate model runs can cause the deep-learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of linear pattern scaling (LPS), a simple statistical technique that averages out those oscillations. This challenge arises because the climate system is non-stationary: current interdependencies between variables may change or cease to exist in future climates. Consequently, ML models trained on the present day risk relying on patterns that may disappear with climate change, making extrapolation into the future troublesome.
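
Linear pattern scaling itself is simple enough to show in a few lines. In this sketch (all data synthetic), an ENSO-like oscillation plays the role of the natural variability discussed above: the fitted linear relationship averages right over it, which is exactly the behavior that can flatter LPS in benchmarks.

```python
import numpy as np

# Linear pattern scaling (LPS) in miniature: regress a local variable
# on global-mean temperature, then use the fit as the prediction.
rng = np.random.default_rng(3)
global_T = np.linspace(0, 3, 150)                 # degC of global warming
enso = 0.4 * np.sin(np.arange(150) / 4.0)         # unpredictable oscillation
local_T = 1.5 * global_T + enso + rng.normal(0, 0.1, 150)

slope, intercept = np.polyfit(global_T, local_T, 1)  # fit the scaling
prediction = slope * global_T + intercept            # averages out ENSO
rmse = np.sqrt(np.mean((prediction - local_T) ** 2))
print(f"LPS scaling factor: {slope:.2f}, RMSE: {rmse:.2f} degC")
```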

Hot Model Problem Creates Forecast Uncertainty

Hot Model Problem Creates Forecast Uncertainty (image credits: unsplash)

Using climate model ensembles containing members that exhibit very high climate sensitivities to increasing CO2 concentrations can result in biased projections. Various methods have been proposed to ameliorate this ‘hot model’ problem, such as model emulators or model culling. These climate model outputs are used to inform national (e.g., the Fifth U.S. National Climate Assessment, NCA5) and international assessments (e.g., the Intergovernmental Panel on Climate Change Sixth Assessment Report, IPCC AR6) regarding climate change and its societal impact. While the information provided by CMIP6 models is critical for understanding the consequences of anthropogenic greenhouse gas emissions and projecting future climates, many of the models are considered ‘too hot’, meaning they simulate a warming response to the change in radiative forcing that is too strong given other lines of evidence and our physical understanding of the climate system.

A key question for the IPCC AR6 was how to reconcile the high equilibrium climate sensitivity (ECS) of some models with other sources of evidence, and whether to update the “likely” sensitivity range. Of the 40 CMIP6 models available at the time of that analysis, fourteen had an ECS above 4.5°C, and eleven had sensitivities above that of the highest-sensitivity model in CMIP5 (4.7°C).
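
Model culling is conceptually straightforward, as the sketch below shows. The ECS values and projections are invented for illustration, and the 4.5°C threshold mentioned above is used as the cut-off: dropping the “too hot” members pulls the ensemble-mean projection down.

```python
import numpy as np

# One simple response to the "hot model" problem: cull ensemble
# members whose ECS falls outside an assessed range. All values
# below are hypothetical.
ecs = np.array([2.8, 3.1, 3.4, 3.9, 4.2, 4.7, 5.1, 5.6])       # degC
projections_2100 = np.array([2.4, 2.6, 2.9, 3.3, 3.5, 4.0, 4.4, 4.9])

keep = ecs <= 4.5  # drop the "too hot" members
print(f"All models:    {projections_2100.mean():.2f} degC by 2100")
print(f"After culling: {projections_2100[keep].mean():.2f} degC by 2100")
```

Weighting members by plausibility, rather than dropping them outright, is a gentler variant of the same idea.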

Energy System Integration Needs Climate Data

Energy System Integration Needs Climate Data (image credits: pixabay)

AI is crucial to renewable energy optimization and improved efficiency. Machine learning models can analyze weather patterns, energy demand, and grid conditions to forecast renewable energy production and optimize its integration into the power grid. Such optimization and analysis improve the reliability and stability of renewable energy, helping to reduce reliance on fossil fuels.
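
A stripped-down sketch of such a forecasting model follows, with synthetic weather and power data and a deliberately simple regressor; a production system would use far richer inputs and more capable models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Minimal sketch of ML-based renewable forecasting: predict wind-farm
# output from weather variables. All data are synthetic placeholders.
rng = np.random.default_rng(4)
wind_speed = rng.uniform(3, 25, 1000)    # m/s
temperature = rng.uniform(-5, 35, 1000)  # degC
power = (np.clip(0.5 * wind_speed ** 2, 0, 200)  # capped turbine curve
         + rng.normal(0, 5, 1000))               # measurement noise (MW)

X = np.column_stack([wind_speed, temperature])
model = LinearRegression().fit(X, power)

# A grid operator could use such forecasts to schedule backup capacity.
forecast = model.predict([[12.0, 18.0]])
print(f"Forecast output at 12 m/s: {forecast[0]:.0f} MW")
```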

Other Google AI tools are focused on improving weather forecasting and increasing the value of wind energy by better predicting the output from a wind farm. AI is being used to help companies in the metal and mining, oil, and gas industries to decarbonize their operations. Eugenie.ai, based in California, United States, has developed an emissions-tracking platform that combines satellite imagery with data from machines and processes. AI then analyzes this data to help companies track, trace and reduce their emissions by 20-30%.

Data Quality Remains A Critical Bottleneck

Data Quality Remains A Critical Bottleneck (image credits: pixabay)

Despite the significant potential shown by AI applications, several challenges hinder the realization of their full benefits. One of the major barriers to adopting AI in policy-making and practical adaptation strategies is its reliance on vast, high-quality datasets. Large volumes of representative and accurate data are required to train models effectively, and the availability of such data can be a serious challenge.

Furthermore, careful data curation and preprocessing are essential, since data biases can lead to skewed predictions and reinforce preexisting inequities. Scaling AI from pilot projects to wider applications also remains difficult: standardizing AI techniques and tools across the different stages, while permitting customization to meet requirements at different levels, is essential to guaranteeing scalability.

Future Hybrid Models Show Promise

Future Hybrid Models Show Promise (image credits: unsplash)

Hybrid models use ML to represent processes that are too small-scale and/or complex for physics-based models to simulate explicitly, such as cloud microphysics and aerosol interactions, while retaining the descriptions of processes that are traditionally captured well, such as large-scale motion. By combining the strengths of the two fields, hybrid models aim to outperform physics-based models while being more trustworthy than models that are entirely ML-based. One of the first models to achieve this integration and use it in a real-world setting was NeuralGCM.
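
Schematically, a hybrid time step adds an ML-learned correction to the resolved physics tendency. In the sketch below, ml_subgrid_tendency() is a hypothetical stand-in for a trained network, not NeuralGCM’s actual component, and the state vector is reduced to three numbers for clarity.

```python
import numpy as np

# Schematic hybrid time step: resolved physics tendency plus an
# ML-learned correction for unresolved processes (e.g. cloud
# microphysics). Both functions are illustrative placeholders.
def physics_tendency(state):
    return -0.05 * state             # resolved large-scale dynamics

def ml_subgrid_tendency(state):
    return 0.02 * np.tanh(state)     # learned subgrid correction

state = np.array([1.0, 0.5, -0.3])   # drastically simplified model state
dt = 0.1
for _ in range(100):
    state = state + dt * (physics_tendency(state)
                          + ml_subgrid_tendency(state))
print("Final state:", state)
```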

ML could provide a step-change in climate modelling. For example, with ML, global models representing Earth’s processes at resolutions finer than 1 km with great accuracy and speed are a real possibility (Eyring, V. et al. AI-empowered next-generation multiscale climate modelling for mitigation and adaptation. Nat. Geosci. 17, 963–971 (2024)).

Conclusion

Conclusion (image credits: flickr)

Climate models are becoming more powerful and accurate, but they’re also revealing just how complex our planet’s systems really are. The marriage of traditional physics-based models with artificial intelligence is opening up new possibilities for understanding our future climate. However, challenges remain significant – from the computational demands requiring massive supercomputers to the delicate balance of incorporating machine learning without losing physical understanding.

What makes climate modeling especially fascinating is that it’s not just about predicting temperature changes anymore. These models are becoming integrated tools for energy planning, disaster preparation, and policy making. The stakes couldn’t be higher as we race against time to understand and adapt to our changing climate.

With all this technological advancement happening so rapidly, one has to wonder – are we finally getting close to truly reliable long-term climate predictions, or are we just discovering how much more we still don’t know?

About the author
Jeff Blaumberg, B.Sc. Economics
Jeff Blaumberg is an economics expert specializing in sustainable finance and climate policy. He focuses on developing economic strategies that drive environmental resilience and green innovation.
