
Out of Thin Air: The Art of Climate and Weather Modeling

Crunched out of data by computational methods – and soon to be rendered with visualization software on Flux – Derek Posselt’s climate and weather models reveal how changes in the earth’s global mean temperature can influence the weather where you live.

The challenge in this field, says Posselt (pah SELT), an assistant professor in Atmospheric, Oceanic and Space Sciences, is now regional. While scientists have developed a good knowledge base of global trends, Posselt is interested in fine-tuning that information down from scales that are thousands of kilometers wide to those as small as 10 meters. These “local scales” are where clouds and rain form, influenced by feedback mechanisms that scientists don’t yet fully understand. “That’s my entrée into high-performance computing (HPC),” says Posselt. “We can use HPC to represent a wide range of those scales – both the large scales that drive change and the local scales that respond.” It’s a fast-evolving process: because clouds change constantly, the models generate output at high resolution in both time and space. “This is where storage comes in. If you really want to look in detail, you want data on a 5- to 15-minute scale.”
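To get a feel for why storage becomes an issue at those resolutions, here is a rough back-of-envelope estimate in Python. The domain size, grid spacing, and number of vertical levels are purely illustrative assumptions, not Posselt’s actual model configuration.

```python
# Back-of-envelope storage estimate with illustrative (not actual) numbers.
domain_km      = 100          # hypothetical 100 km x 100 km domain
dx_m           = 10           # 10 m horizontal grid spacing
n_levels       = 100          # hypothetical number of vertical levels
n_variables    = 50           # ~50 model output variables
bytes_per_val  = 4            # single-precision float
output_dt_min  = 5            # write output every 5 minutes

points_per_level = (domain_km * 1000 // dx_m) ** 2
values_per_dump  = points_per_level * n_levels * n_variables
gb_per_dump      = values_per_dump * bytes_per_val / 1e9
gb_per_day       = gb_per_dump * (24 * 60 // output_dt_min)

print(f"{gb_per_dump:,.0f} GB per output time, {gb_per_day:,.0f} GB per simulated day")
```

Even with these toy numbers, a single output time runs to terabytes, which is why high-frequency output quickly becomes a storage problem.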

To recreate the astonishing complexity of clouds in the global climate system, Posselt keeps track of more than 50 different model output variables, using data sets that describe factors such as thermodynamics, fluid flow, radiation, vegetation, cloud droplets, and ice crystals. Models are initialized from observations of the real world, simulations are run, and the output is compared and combined with further observations to infer the state of the system. Every model carries some uncertainty from the assumptions made in representing clouds and rainfall. To fine-tune the model, Posselt chooses sets of cloud parameters that scientists know to be important but that are unconfirmed by empirical data. He checks their observed ranges in the literature and then solves a large inverse problem in which he estimates the sensitivity of all the parameters together.
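In outline, that parameter-estimation workflow can be sketched in a few lines of Python. This is a minimal illustration, not Posselt’s code: run_cloud_model, the parameter names, their ranges, and the “observations” are all hypothetical stand-ins for an expensive cloud-resolving model and real measurements.

```python
# Minimal sketch of parameter estimation against observations (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cloud parameters with ranges taken from the literature (made-up values).
param_ranges = {
    "ice_fall_speed": (0.5, 2.0),     # m/s
    "autoconversion": (1e-4, 1e-3),   # 1/s
    "droplet_number": (50.0, 500.0),  # 1/cm^3
}

def run_cloud_model(params):
    """Placeholder for one expensive model integration; returns simulated observables."""
    # In practice this would be a full cloud-resolving model run on the cluster.
    return np.array([params["ice_fall_speed"] * params["droplet_number"],
                     params["autoconversion"] * 1e4])

observations = np.array([300.0, 5.0])  # stand-in observed quantities
obs_error    = np.array([30.0, 1.0])   # assumed observation uncertainty

samples, misfits = [], []
for _ in range(1000):
    params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
    simulated = run_cloud_model(params)
    # Observation-space misfit weighted by uncertainty (a simple cost function).
    misfits.append(np.sum(((simulated - observations) / obs_error) ** 2))
    samples.append(params)

best = samples[int(np.argmin(misfits))]
print("Best-fitting parameter set:", best)
```

The expensive part is the model integration inside the loop, which is exactly where the dimensionality of the parameter space starts to bite.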

HPC is critical at this point. “Every sensitivity test involves a full integration of our model. With every successive dimension the computational problem gets exponentially larger,” Posselt says. “In the problems I am addressing, brute-force solution methods might require on the order of 10^N model integrations (with N the number of parameters of interest). The models I use are computationally expensive to run, and at N greater than about 2 it gets too big to do with exhaustive computation. So we simplify the model so it runs quicker, apply methods that reduce the computational burden to 3^N or 4^N, and then we use HPC, [which allows us to] run a tremendous number of realizations in an intelligent way.”
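A short sketch makes the scaling argument concrete and shows the kind of embarrassingly parallel sweep HPC enables. The numbers and the evaluate_member function are illustrative; on Flux the individual integrations would be full model runs submitted through the batch system rather than local Python processes.

```python
# Illustrative scaling comparison and a parallel parameter sweep (not Posselt's code).
from concurrent.futures import ProcessPoolExecutor
from itertools import product
import numpy as np

def evaluate_member(levels):
    """Stand-in for one model integration over a tuple of parameter values."""
    return float(np.sum(np.square(levels)))

if __name__ == "__main__":
    # Brute force (10 levels per parameter) versus a reduced design (4 levels).
    for n in (2, 3, 5, 8):
        print(f"N={n}: 10^N = {10**n:>9,d} runs vs 4^N = {4**n:>6,d} runs")

    # A 4-level design for N = 3 parameters: 4^3 = 64 integrations, run in parallel.
    levels = [np.linspace(0.0, 1.0, 4)] * 3
    design = list(product(*levels))
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(evaluate_member, design))
    print(f"Completed {len(results)} integrations")
```

Reducing 10 levels per parameter to 3 or 4 shrinks the sweep by orders of magnitude, and the remaining realizations are independent, so they parallelize cleanly across a cluster.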

Working with a model that is sensitive to so many different variables has its own challenges. “When you try and map a change in one parameter to a change in the output, it’s tremendously difficult to understand what that relationship looks like – there has to be an intermediate way. What I’m looking for now are ways to reduce the dimensionality of the system and the computational burden while preserving the realism of what we’re looking at.”
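One common way to attack that dimensionality problem, offered here only as a generic illustration rather than Posselt’s chosen method, is principal component (EOF) analysis: stack model output snapshots into a matrix and keep only the leading modes that explain most of the variability.

```python
# Generic dimensionality-reduction sketch (EOF/PCA via SVD) on fake model output.
import numpy as np

rng = np.random.default_rng(1)
n_snapshots, n_gridpoints = 200, 10_000
snapshots = rng.standard_normal((n_snapshots, n_gridpoints))  # stand-in output fields

anomalies = snapshots - snapshots.mean(axis=0)          # remove the time mean
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)

k = int(np.searchsorted(explained, 0.95)) + 1           # modes for 95% of the variance
reduced = anomalies @ Vt[:k].T                          # low-dimensional coordinates
print(f"Kept {k} of {n_snapshots} modes; reduced shape = {reduced.shape}")
```

The idea is that a handful of modes can stand in for millions of grid-point values, making the parameter-to-output relationship far easier to explore.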

Posselt hopes that he will soon have a new way of visualizing his models: he is experimenting with the visualization software VisIt on Flux. “People have been producing the same sorts of plots for decades – line, contour, etc. – on their desktop. If they have a gigantic dataset, they’ll buy a bigger desktop workstation with more RAM – even if it takes forever to generate one contour plot. People don’t have a good understanding of the power of distributed visualization tools…. But as a faculty member, you also have so little flex time [to learn new software] that you’ll keep using the same tools. This is a fairly big hurdle for people.” For assistance, Posselt has been working closely with Brock Palen at the CAC to verify that he can use VisIt on his data. Visualizing datasets this large requires splitting the rendering across 96 cores on Flux. His next step is to ensure that his data can be geo-referenced to a map of the earth’s surface. He is also exploring the possibility of viewing his results in the MIDEN (formerly known as the Cave) at the UM3D Lab in the Digital Media Commons. “We’re excited to see what we can do with distributed visualization that we couldn’t do with one processor.”
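For readers curious what scripted visualization looks like, here is a minimal VisIt CLI sketch. The file name and variable name are hypothetical, and the arguments for launching a 96-core parallel compute engine on Flux are site-specific, so they are only noted in the comments; this is not the actual workflow Posselt and Palen use.

```python
# Minimal VisIt scripting sketch (hypothetical file and variable names).
# Run inside VisIt's Python interpreter, e.g.:  visit -cli -nowin -s render_clouds.py
# On a cluster, a parallel compute engine (e.g., 96 cores) would be launched via the
# site's host profile / batch system before rendering.

OpenDatabase("cloud_output.nc")          # hypothetical model output file
AddPlot("Pseudocolor", "cloud_water")    # hypothetical output variable
DrawPlots()

# Save the rendered image to disk instead of plotting on a desktop.
s = SaveWindowAttributes()
s.format = s.PNG
s.fileName = "cloud_water_frame"
s.width, s.height = 1920, 1080
SetSaveWindowAttributes(s)
SaveWindow()
```

Because the rendering happens on the compute engine rather than a desktop, the same script scales from a laptop-sized test file to the full multi-terabyte output.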