Stop being such a melt: the effect of cloud microphysics on the surface energy balance of Larsen C

This is a short summary of my work on clouds that will eventually end up as the second chapter of my thesis. But because no-one wants to read thesis chapters unless they absolutely have to, I’ve written it in standard English. Voila. No excuses.

What’ve you been doing, Ella?

I’ve written a lot about how difficult it is to model clouds in Antarctica, mostly because they are so complicated, and we have few observations of clouds because it is so challenging to actually take measurements there. But – hopefully I’ve convinced you of the importance of doing so.

Modelling clouds right is crucial for future projections of climate change, and for the next stage of my work, which will be a long-term hindcast of conditions over Larsen. A hindcast is much like a forecast, except that rather than predicting the future, we’re trying to replicate what’s already happened. Clouds are the largest source of uncertainty in global projections of climate change, because they are so poorly simulated in most operational models. Over an Antarctic ice shelf, getting the microphysics and small-scale properties right is vital, because this affects how much energy reaches the surface, and can therefore have a strong influence on melt rates. I want to simulate melt as accurately as possible for the last decade or so, to establish baseline conditions. If we don’t have a good handle on what ‘present’ conditions are, then we will have no hope of understanding future change because we won’t have a benchmark to compare against.

So, to understand the connection between cloud microphysics and melt rates, I’ve been exploring the impact these properties have on the surface energy balance (the amount of energy received at the surface), in both the Met Office Unified Model (UM) and observations.

How have you done that?

Part of the reason I’ve been doing this work is to arrive at the best possible model configuration for modelling clouds over the Antarctic Peninsula, so that I can use this to run my hindcast, and be confident that it will come out with better results than the default set-up of the UM, which is tailored to work best over the UK (which has slightly different weather and climate to Antarctica!). The interesting bit is to understand why that configuration is best – what about the model physics, or parameterisations (bits of model code that simplify processes that cannot be explicitly modelled, e.g. ice nucleation, the formation of cloud ice) makes it represent the observations more closely?

I tested two configurations of the model physics. Now (allow me a diversion here), the Unified Model has a ‘core’ of equations that describe the laws of physics, e.g. motion, thermodynamics etc. That core is the same throughout the whole model, and never changes, much like the laws of physics. But, because there are still lots of unknowns in our understanding of specific processes in the atmosphere, and because we have to simplify things to get them into a sensible form, there are different configurations that are tailored to different parts of the world, with slightly different settings and representations of different processes. For example, the version of the model physics tailored to the tropics, RA1T, produces cloud liquid in a different way to the mid-latitude version (RA1M) because convection is much more important in the tropics (convection is what makes huge thunderstorms, and is driven by heat – where there is lots of moisture, this creates vast ‘supercell’ clouds). The two configurations are both based on the same core physical principles, but they differ in how they produce clouds (amongst other things). Cloud processes are necessarily simplified, but *how* we simplify them can affect the end result – so these two configurations are based on two different approximations of cloud processes, called cloud schemes.

So, we’ve got two configurations based on two cloud schemes, called Smith and PC2. Confusingly, there are two types of cloud scheme – the large scale cloud scheme (like the Smith and PC2 schemes) and the microphysics scheme. You can think of it like this: the large scale cloud scheme decides whether there’s cloud, and then the microphysics figures out the detail of what those clouds are like, and then works out whether it’s raining/snowing/foggy etc. The microphysics scheme is also often called the large-scale precipitation scheme for that reason. Both configurations use the same microphysics, based on a paper by Wilson & Ballard (1999).
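To make that division of labour a bit more concrete, here’s a deliberately toy sketch in Python – emphatically not UM code, and all the thresholds and numbers are invented – showing a ‘large-scale cloud’ step that decides whether there’s cloud from relative humidity, followed by a ‘microphysics’ step that splits the condensate between ice and liquid and decides what falls out as precipitation:

```python
# Toy illustration of the two-step division of labour between a
# "large-scale cloud" scheme and a "microphysics" scheme.
# All thresholds and numbers are invented for illustration only --
# this is NOT the UM's Smith/PC2 or Wilson & Ballard code.

def large_scale_cloud(rel_humidity, rh_crit=0.8):
    """Decide whether there's cloud: diagnose a cloud fraction from
    grid-box relative humidity (none below a critical RH, full cover
    at saturation, a linear ramp in between)."""
    if rel_humidity <= rh_crit:
        return 0.0
    return min((rel_humidity - rh_crit) / (1.0 - rh_crit), 1.0)

def microphysics(cloud_fraction, condensate_kg_kg, temperature_c):
    """Decide what the cloud is like: split the condensate between ice
    and liquid (colder = icier, a made-up rule) and let anything above
    an invented holding capacity fall out as precipitation."""
    if cloud_fraction == 0.0:
        return {"liquid": 0.0, "ice": 0.0, "precip": 0.0}
    ice_fraction = min(max(-temperature_c / 40.0, 0.0), 1.0)
    ice = condensate_kg_kg * ice_fraction
    liquid = condensate_kg_kg - ice
    holding_capacity = 5e-4  # kg/kg, invented
    precip = max(condensate_kg_kg - holding_capacity, 0.0)
    return {"liquid": liquid, "ice": ice, "precip": precip}

# A cold, nearly saturated grid box
cf = large_scale_cloud(rel_humidity=0.95)
print(cf, microphysics(cf, condensate_kg_kg=6e-4, temperature_c=-15.0))
```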

Now, I’d also been doing lots of reading, and there are some interesting papers that have shown the importance of certain specific processes in clouds, which are possible to tweak in the UM. The first tweak helps the microphysics scheme produce more liquid water below 0°C, and comes from a paper by Furtado & Field in 2016. The second adjusts the amount of liquid water that gets depleted by processes that convert liquid into ice, which often happens far too much in models (they get over-excited and convert loads of liquid into ice because they assume all the ice and liquid are happily mixed together, whereas in real life the two hang out in discrete, segregated pockets and don’t interact so much – think cliques on the playground). That one’s based on a paper by Abel et al. (2017). So, I’ve got four experiments – each of the physics configurations, plus the modified versions of each. Now to test them.
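Just to give a flavour of the idea behind that second tweak, here’s a hypothetical Python snippet (again, not the real UM code, and not how Abel et al. actually implemented it): the rate at which ice is allowed to eat into the liquid is scaled down by how much of the grid box the ice and liquid actually share.

```python
# Hypothetical sketch of the idea (not the actual Abel et al. 2017 change,
# which lives inside the UM's microphysics). If a model assumes ice and
# liquid are perfectly mixed, ice can scavenge all of the liquid; if they
# only overlap in part of the grid box, the conversion should be slower.

def liquid_to_ice_rate(base_rate, overlap_fraction):
    """Scale the liquid->ice conversion rate by the (0-1) fraction of the
    grid box where ice and liquid actually coexist."""
    return base_rate * max(0.0, min(overlap_fraction, 1.0))

print(liquid_to_ice_rate(base_rate=1e-4, overlap_fraction=1.0))  # perfectly mixed: fast depletion
print(liquid_to_ice_rate(base_rate=1e-4, overlap_fraction=0.2))  # segregated pockets: much slower
```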

So… what have you found?

Phew! We’ve finally got to some results. The first thing I did was to compare the two configurations to see which one was better, and somewhat unsurprisingly, the mid-latitude configuration did better than the tropical configuration over Antarctica. There’s even less convection over ice shelves than there is at mid-latitudes, like over the UK. I then compared the two modified versions as well, to see if these produced as much of an improvement as reported in the literature for other parts of the world. You can see the results for yourself in this figure:

 

Vertical profiles of mean mass mixing ratios of cloud ice (panel a) and liquid (b) during a case study flight conducted over the Larsen C ice shelf on 18 January 2011. Observations and model profiles are averaged over the same area and times for comparison. The black solid line shows observations, while the dashed lines show profiles calculated by the different model configurations used in each experiment.

What we’ve got here looks complicated, but let me walk you through it. This plot shows vertical profiles of the concentration of ice (left) and liquid (right) in clouds through the atmosphere, from the ground to about 5.5 km. It’s worth noting that this is just one particular case I’ve drawn out. I’ve got some observations taken from an aircraft that was flying up and down through some clouds, and I’ve done some processing to turn these data into an average vertical profile that I can compare with an average vertical profile from the model to see how well it’s doing. Models can’t be expected to get individual clouds perfectly in the right place or at the right time (that’s just beyond their power), but they can get quite close when you consider averages. So, the solid black line shows the observations, and the dashed lines show the model average profiles. Each colour is a different configuration, or experiment.
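If you’re wondering what ‘some processing’ means in practice, it boils down to binning the measurements by altitude and averaging – something like the Python sketch below, where the variable names and bin sizes are illustrative rather than my actual analysis code. The same function can be applied to the model output over the flight area and times, so the two profiles are directly comparable.

```python
import numpy as np

def mean_profile(altitude_m, mixing_ratio_kg_kg, bin_width_m=250.0, top_m=5500.0):
    """Average point measurements (or model values) into altitude bins.
    Returns the bin mid-points and the mean mass mixing ratio in each
    bin (NaN where a bin contains no data)."""
    edges = np.arange(0.0, top_m + bin_width_m, bin_width_m)
    mids = 0.5 * (edges[:-1] + edges[1:])
    means = np.full(mids.shape, np.nan)
    which_bin = np.digitize(altitude_m, edges) - 1
    for i in range(len(mids)):
        in_bin = mixing_ratio_kg_kg[which_bin == i]
        if in_bin.size > 0:
            means[i] = in_bin.mean()
    return mids, means

# e.g. obs_z, obs_qliq from the aircraft; model_z, model_qliq from the UM,
# both restricted to the same area and times before averaging:
# z, obs_profile = mean_profile(obs_z, obs_qliq)
# _, model_profile = mean_profile(model_z, model_qliq)
```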

What it shows is that there’s a lot of complexity in the observations that the model just misses – but as I said above, it is unfair to expect a model to reproduce the multiple layers of ice and liquid that we see in the observations. The main message from this plot is that the model (in all flavours) over-predicts the amount of ice, and under-predicts the amount of liquid observed in this particular case. This is pretty consistent with what the UM and other numerical weather prediction models like it are known to do. The reason for this is that, as I explained above, in the real world, ice and liquid are found in separate patches, whereas the model can’t account for this, and assumes everything is uniformly mixed. That means that ice can scavenge as much liquid as is available, and this turns too much liquid into ice. The Abel et al. modification aims to improve this situation – and you can see that it does, mostly. However, the modelled profiles are still quite far from the observed ones.

What does this mean for melt?

When you have too much ice and not enough liquid, you get too much solar (shortwave) radiation reaching the surface of the ice shelf in the model. That’s because ice crystals are big, but not very numerous, so they allow more solar radiation through them. In other words, they’re more ‘transparent’ to solar radiation. Liquid clouds emit lots of infrared (longwave) radiation because they are warmer (and the hotter something is, the more strongly it radiates – take the example of the sun vs. the Earth), and because the many little liquid droplets radiate more strongly than ice. That means that you also get too little longwave radiation emitted back down to the surface in the model, because it doesn’t produce enough liquid. That’s basically what the schematic below shows:

Schematic demonstrating the complex effects of cloud microphysics on the balance of energy received at the surface. Credit: Ella Gilbert

This overestimated shortwave and underestimated longwave radiation doesn’t quite cancel out because the amount of shortwave you get in the 24-hour sunlight in Antarctica is much larger than the amount of longwave you’d typically see. In the cases I’ve looked at, this leads to an over-estimation of the total amount of energy at the surface of the ice shelf. Melt occurs when there is a surplus of energy at the ice surface and when temperatures are above freezing, and because temperatures in summer can quite often exceed zero, this over-estimation of the amount of energy can lead to an over-estimation in the amount of melting that happens.
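If you like to see the sums written down, the surface energy balance is really just bookkeeping, as in this toy Python example. Every flux value is an invented round number, chosen only so that the ‘model-like’ case has a shortwave excess that is bigger than its longwave deficit, which is the situation described above.

```python
# Toy surface energy balance: every flux (W m-2) is an invented round
# number, purely to illustrate the bookkeeping described above.
LATENT_HEAT_OF_FUSION = 3.34e5  # J per kg of ice melted

def melt_energy(net_shortwave, net_longwave, sensible, latent, surface_temp_c):
    """Energy available for melt (W m-2): the positive residual of the
    surface energy balance, but only when the surface is at melting point."""
    residual = net_shortwave + net_longwave + sensible + latent
    if surface_temp_c >= 0.0 and residual > 0.0:
        return residual
    return 0.0

# "Observation-like" vs "model-like": the model's shortwave excess (+25)
# is bigger than its longwave deficit (-15), so it ends up with more energy.
obs = melt_energy(net_shortwave=120, net_longwave=-25, sensible=10, latent=-5, surface_temp_c=0.0)
mod = melt_energy(net_shortwave=145, net_longwave=-40, sensible=10, latent=-5, surface_temp_c=0.0)
print(obs, mod)  # 100 vs 110 W m-2
print(mod * 3600 / LATENT_HEAT_OF_FUSION)  # roughly how many kg of ice melt per m2 per hour
```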

Why does it matter if melt is over-estimated?

If we’re going to be confident in the model’s ability to simulate future change, we have to be sure it does a reasonable job at reproducing present conditions. If estimates of present melt are too high, then any future changes might be masked by that initial error, and therefore seem smaller. If we’re underestimating the amount of future change predicted, we could end up being surprised by events, just like we were when Larsen C’s neighbouring ice shelves, Larsen A and B, abruptly collapsed in 1995 and 2002, respectively. It’s important that we understand things now, so we can understand observed changes better as they occur.

How do you make it better?

I mentioned previously that all the experiments were run using the same microphysics scheme. This scheme, Wilson & Ballard, is what we call a ‘single moment’ scheme, which means it only calculates the amounts of ice and liquid in the cloud, and not the number of crystals or droplets. That might sound trivial, but this is very important, because the number of particles in a cloud can affect how it interacts with radiation entering or leaving the atmosphere, as you can see in the schematic above. For instance, clouds with many small particles are brighter, and so reflect more radiation back to space, preventing it from reaching the surface. More solar radiation reaches the surface if you have few, very large ice crystals, but they are also more likely to get so heavy they simply fall out of the cloud as precipitation, which can make the cloud disappear quite rapidly. So you see, the number of particles can have varying (and often competing) effects on cloud radiative properties (how they influence the amount of radiation at the surface) and cloud lifetime.
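Here’s a hypothetical back-of-the-envelope calculation (nothing to do with the UM’s actual radiation code) that shows why the number matters: for a fixed mass of liquid water, the typical droplet size depends entirely on how many droplets that mass is shared between, and it’s the droplet size that sets how bright the cloud is and how readily particles fall out. A single moment scheme has to assume the number; a double moment scheme predicts it.

```python
import math

RHO_WATER = 1000.0  # kg m-3

def mean_droplet_radius(lwc_kg_m3, droplets_per_m3):
    """Mean radius (m) if a given liquid water content is shared equally
    between `droplets_per_m3` spherical droplets."""
    volume_each = lwc_kg_m3 / (RHO_WATER * droplets_per_m3)
    return (3.0 * volume_each / (4.0 * math.pi)) ** (1.0 / 3.0)

lwc = 2e-4  # kg of liquid per m3 of air -- an invented, plausible value

# A single moment scheme effectively fixes the droplet number by assumption;
# a double moment scheme predicts it, so the same water can make very
# different clouds:
for n_per_cm3 in (20, 100, 500):
    r = mean_droplet_radius(lwc, n_per_cm3 * 1e6)
    print(f"{n_per_cm3:>4} droplets per cm3 -> mean radius {r * 1e6:.1f} microns")
```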

Single moment microphysics schemes are therefore missing a trick by not simulating this crucial property of clouds. Historically this has been because modelling the number of particles as well adds a lot of complexity, and so requires much more computing power. However, the move to double moment schemes that do simulate particle numbers is the next step for cloud modelling, and over the Antarctic Peninsula and Larsen C this has been shown to give a considerable advantage over single moment schemes, at least in other models.

So! That’s what I’m trying to do next. Unfortunately the results have not been so promising yet because the UM’s double moment scheme is still under development, but I’m confident that I’ll get to the bottom of why that is. Stay tuned to see what happens once the teething problems have been ironed out.
