Structural uncertainty through the lens of model building

Abstract

An important epistemic issue in climate modelling concerns structural uncertainty: uncertainty about whether the mathematical structure of a model accurately represents its target. How does structural uncertainty affect our knowledge and predictions about the climate? How can we identify sources of structural uncertainty? Can we manage the effect of structural uncertainty on our knowledge claims? These are some of the questions that an epistemology of structural uncertainty faces, and these questions are also important for climate scientists and policymakers. I develop three desiderata for an epistemological account of structural uncertainty. In my view, an account of structural uncertainty should (1) identify sources of structural uncertainty, (2) explain how these sources limit the applicability of a model, and (3) show how the severity of structural uncertainty depends on the questions that can be asked of a model. I argue that analyzing structural uncertainty by paying attention to the details of model building can satisfy these desiderata. I focus on parametrizations, which are representations of important processes occurring at scales that are not resolved by climate models. Parametrizations are often thought to be ad-hoc, but I show that some important parametrizations are theoretically justified by explicit or implicit scale separation assumptions. These assumptions can also be supported empirically. Analyzing these theoretical and empirical justificatory roles of the scale separation assumptions can provide insights into how parametrizations contribute to structural uncertainty. I conclude by sketching how my approach can satisfy the desiderata I set out at the beginning, highlighting its importance for policy-relevant scientific statements about the climate.

Introduction

Uncertainty in climate science has drawn increasing attention in recent years (e.g., Parker 2006, 2010, 2011, 2013; Stainforth et al. 2007; Knutti 2008; Frigg et al. 2013, 2014; Parker and Risbey 2015). The topic is important epistemically and politically: epistemically, because scientists have only limited abilities to validate and confirm the output of climate modelsFootnote 1; and politically, because policymakers have to take into account the current knowledge concerning the climate and its uncertainty.Footnote 2

There are different types of uncertainty in climate science (Knutti 2008; Parker 2010). This paper focuses on structural uncertainty, that is, uncertainty about the mathematical structure of a climate model.Footnote 3 My goal is to help develop an epistemological account of structural uncertainty that pays close attention to the details of climate modeling. As I show below, a detailed look at model building is needed if an epistemology of structural uncertainty is to be useful to scientists and policymakers.

The paper has two parts. The first part (Sects. 2, 3) motivates the need for an epistemological account of structural uncertainty that is informed by the details of model building. I begin by reviewing some of the existing accounts of structural uncertainty to illustrate how they have identified the sources of structural uncertainty. But a useful account of structural uncertainty should do more than identify the sources of this uncertainty. To clarify what a useful account must do, I present three desiderata and argue that in order for an account of structural uncertainty to meet the desiderata, it needs to be informed by the details of model building. Accordingly, the second part of the paper (Sect. 4) develops one important detail for an epistemological account of structural uncertainty: the assumptions behind the choices of parameterization schemes, their conceptual justificatory role and their empirical adequacy.Footnote 4

Climate modelers introduce parameterizations in their models to represent the effect that physical processes not explicitly resolved in the models have on those processes that are explicitly represented. Although parameterizations may seem like something to be removed whenever possible, I will argue that parameterizations can have epistemic merits: they can improve our understanding of the interaction of coherent structures observed in the climate system across spatiotemporal scales (e.g. hurricanes, clouds, and squall lines).Footnote 5 Parameterizations can do this because they eliminate some of the details about the behavior of smaller scale physics that may be irrelevant for the modeling of these coherent structures at a particular scale. Moreover, I argue that the epistemic utility of parameterizations depends on whether they are properly justified theoretically and empirically. If the scale separation assumption can be empirically verified, structural uncertainty about a given parameterization scheme will decrease. At the end of the second part, I briefly show how this detailed look at model building helps an epistemological account of structural uncertainty to meet the desiderata presented in the first part.

Current accounts of structural uncertainty

The leading accounts of structural uncertainty are given by Frigg et al. (2014) and Parker (2006, 2010, 2011). Let us look at them briefly.

Frigg et al.

Frigg et al. (2014) present an account of structural uncertainty in the context of nonlinear dynamical models.Footnote 6 In their view:

A model has [structural uncertainty] if the model dynamics differ from the dynamics in the target system…. [I]f a nonlinear model has only the slightest [structural uncertainty], then its ability to generate decision-relevant probability predictions is compromised (Frigg et al. 2014, p. 32).Footnote 7

This account implies that for non-linear systems, small differences in choice of model structure can compromise the ability to generate decision-relevant predictions. Frigg et al. call this effect of structural uncertainty the “hawkmoth effect” (Frigg et al. 2014, p. 39, after Thompson 2013), which is similar to sensitive dependence on initial conditions for chaotic systems. As Frigg et al. say, for both cases, it does not matter how close the model structure or the initial data are to the structure or data that will allow the model to make accurate predictions, because any small deviation in model structure or data can lead to misleading predictions.

The hawkmoth effect suggests that climate models are not reliable for policymaking. But can we manage the hawkmoth effect or determine its severity? Frigg et al. are pessimistic about being able to obtain a non-arbitrary measure on a class of models that would allow scientists to determine which models are or are not reliable. Thus, they argue that assigning probabilities to model-based predictions is a misleading way to represent predictions (Frigg et al. 2014, p. 57).

Parker

Parker’s account of structural uncertainty identifies sources of this uncertainty in more detail. Paraphrasing her account, structural uncertainty arises when:

  • Physical processes of interest are not described by well-established theories.

  • The representation of physical processes involves simplifications and idealizations, because of theoretical or pragmatic constraints.

  • Processes need to be parametrized, and there is no best way to parametrize a process. (Parker 2010, pp. 264–265)

Here, as in Frigg et al.’s account, structural uncertainty is an unavoidable aspect of climate modeling, because the use of simplifications, idealizations and parameterizations is unavoidable in climate modeling.Footnote 8

Toward an epistemology of structural uncertainty

The accounts of structural uncertainty given above identify its sources. In particular, Parker identifies sources within the practice of climate modeling, and as Frigg et al. argue, structural uncertainty poses serious challenges to the way climate models are currently used for policymaking. But an epistemology of structural uncertainty must do more than explain why this type of uncertainty is hard to assess (e.g., Stainforth et al. 2007; Knutti 2008; Parker 2010; Frigg et al. 2014). Both scientists and policymakers need an epistemological account of structural uncertainty that facilitates communication about the uncertainty of model-based statements within and across the different disciplines. By model-based statements, I mean predictive or conditional statements about the future of the earth's climate as well as statements about the uncertainty associated with these predictions or conditional statements.Footnote 9 As Stainforth et al. say:

A two-way communication between climate scientists and users of climate science is… of fundamental importance. Only by understanding the needs of different [policy] sectors can the science be usefully directed and communicated. Only by understanding the conditions, assumptions and uncertainties of model-based statements about future climate can decision makers evaluate the relevance of the information and make informed, if subjective, assessments of risk. (Stainforth et al. 2007, p. 2165; my emphasis).Footnote 10

In other words, effective communication between scientists and policymakers requires an understanding of policymakers' needs and an understanding of the uncertainties that affect model-based statements and the relevance of the uncertainties to policy. An epistemology of structural uncertainty, in my view, should aim to contribute to the latter understanding.Footnote 11

To achieve this aim, what exactly is needed for an epistemological account of structural uncertainty? In the next section, I present specific desiderata for this account.

Desiderata

As we saw, Frigg et al.'s and Parker's accounts of structural uncertainty identify sources of this uncertainty. This is the first thing an epistemological account of structural uncertainty should do. Beyond that, in my view, the account should explain the epistemic reliability of climate models and show how structural uncertainty can be relevant to specific policy questions. Let me go over these desiderata in turn.

Desideratum 1: Identifying sources of uncertainty

An adequate epistemological account of structural uncertainty should identify sources of uncertainty in the model and the model building practice. That is, it should say what makes climate models structurally uncertain. Philosophers have so far focused on this desideratum: they have identified aspects of the model construction and justification process that are likely to introduce uncertainty. To do so, their strategy has been to show that certain model assumptions—especially those that introduce simplifications and idealizations—generate structural uncertainty.

Identifying the sources of uncertainty is something an account of structural uncertainty should do because doing so helps explain why a model is not likely to give accurate predictions, that is, why a model is not epistemically reliable.

Desideratum 2: Explaining epistemic reliability

But an adequate account of structural uncertainty should also explain why a model or its component is epistemically reliable.Footnote 12 By epistemic reliability, I have in mind how likely a model is to give, or how likely its component is to help give, accurate model-based statements, such as predictions about the future climate.Footnote 13 From experience we can learn that a model is epistemically reliable, just as I can learn that my calculator is epistemically reliable. The second desideratum here asks us to explain why a model or its component is reliable.

This desideratum becomes particularly important in the context of climate change, where scientists are asked to predict states of the climate that have never been observed before (Stainforth et al. 2007). So, this desideratum is important for an epistemological account of structural uncertainty because understanding why a model or its component is reliable helps us to determine the reliability of the model or its component in domains other than those we have explored.Footnote 14

Consider, for example, the damped harmonic oscillator model. It is epistemically reliable for predicting the movement of a weight attached to a spring in a viscous medium when the motion is sufficiently slow.Footnote 15 And we know why: the model is constructed by using Newton’s second law of motion and Hooke’s law—which apply to the system of interest—and a damping coefficient (an empirical parameter) that describes the viscous drag (or friction) exerted on the weight. This understanding helps us to determine whether the model will be reliable in a new domain.
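The point can be made concrete with a minimal numerical sketch (all parameter values here are illustrative, not taken from any real system): because each term in the oscillator's equation of motion, m x'' + c x' + k x = 0, comes from an established law (Newton's second law, Hooke's law, linear viscous drag), we know what the model represents and can check that its behavior matches the expected damped oscillation.

```python
import numpy as np

def damped_oscillator(m=1.0, k=4.0, c=0.5, x0=1.0, v0=0.0, dt=1e-3, t_max=20.0):
    """Integrate m*x'' + c*x' + k*x = 0 with semi-implicit Euler.
    Each term has a physical reading: spring force (Hooke's law),
    viscous drag (empirical damping coefficient), and inertia."""
    steps = int(t_max / dt)
    x, v = x0, v0
    xs = np.empty(steps)
    for i in range(steps):
        a = -(k * x + c * v) / m   # acceleration from spring force and drag
        v += a * dt
        x += v * dt                # semi-implicit: use the updated velocity
        xs[i] = x
    return xs

xs = damped_oscillator()
# the oscillation's envelope decays: late peaks are smaller than early ones
print(abs(xs[:1000]).max(), abs(xs[-1000:]).max())
```

Because the construction is physically principled, a failure of the predicted decay in a new domain (e.g. fast motion, where linear drag breaks down) points directly at the assumption responsible.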

By contrast, our understanding of the reliability of weather prediction models relies largely on their track record. These models are used to forecast local weather and are considered reliable mostly for predictions of 1–7 days.Footnote 16 While these models are based on physical theory, they are heavily calibrated to work in the regions in which they are applied. Moreover, they are analyzed in ensembles, which are then empirically corrected for the kind of atmospheric conditions that arise locally.Footnote 17 For example, we cannot reliably take a weather forecasting model calibrated to work for a coastal region in India and use it for a coastal region in the Caribbean without some substantial modification, tuning and validation for the conditions of the Caribbean. It is therefore not always clear why any one of these models is reliable.

The epistemic situation is worse for the case of climate models, since in that case we cannot evaluate a model based on its predictive success.Footnote 18 In these cases, explanations of reliability must rely on the details of model building: why are the model components an adequate representation of the target? How do modeling choices affect the applicability of a model? Explaining epistemic reliability aims at clarifying these questions.

Desideratum 3: Showing policy relevance

An adequate account of structural uncertainty should show how the severity of structural uncertainty can vary depending on questions asked by the policymakers. As Stainforth et al. (2007) suggest, in analyzing climate models, climate scientists should take into account the questions of the policymakers and the information they require. By doing so, scientists can hope to effectively communicate the limits of their models and clarify the extent to which certain analyses of structural uncertainty are tied to the applicability of models. Now, policymakers might expect climate models to be epistemically reliable with respect to particular phenomena at certain spatial and/or temporal scales. And scientists might understand why models are not likely to be reliable in the way expected by policymakers because they identify many sources of structural uncertainty. Scientists might also be able to point out the range of predictions that can be made reliably with the models because they understand why the models are reliable. The third desideratum therefore asks an adequate account of structural uncertainty to show how the policy-relevant importance of structural uncertainty varies with the questions and expectations of policymakers.

One might wonder why an epistemological account of structural uncertainty should show the relevance of structural uncertainty to policymakers. Why should an epistemology of structural uncertainty address policy? My answer is that if scientists are to produce information that can be acted upon by society, policymakers need to know the reliability of climate models with respect to their own questions. Suppose we have an account that meets the first and second desiderata, and we say why a model is unreliable and why it is, or should be, reliable in this or that domain. But we may still fail to answer the policymaker's epistemological problem. Thus, I suggest we include a separate desideratum—the third one here—specifically asking an epistemological account of structural uncertainty to explicitly take into consideration how structural uncertainty changes according to the questions and expectations of policymakers.

Structural uncertainty through the lens of model building

Thus far I have argued for the need to develop an epistemological account of structural uncertainty that meets the three desiderata given above. In this section, I develop a part of the account that meets these desiderata.Footnote 19 To meet all these desiderata, we have to look at the details of model building: identifying sources of structural uncertainty, explaining why a given climate model is reliable in a given domain, and showing how the structural uncertainty of a model is or is not relevant to policy all require an understanding of how climate model building works.

The account of structural uncertainty I develop below makes a number of claims:

  1. Parameterizations can help us understand the across-scale interactions of different components of the climate (Sect. 4.1).

  2. This epistemic merit of parameterizations depends on the availability of theoretical and empirical justification for them (Sect. 4.1).

  3. Scale separation assumptions play an important role in the development of certain parameterization schemes (Sect. 4.2).

  4. In modeling, the scale separation assumptions provide theoretical justification for parameterizations in the form of implicit or explicit equilibrium arguments (Sect. 4.2).

  5. In modeling, parameterizations are partially justified empirically by evidence for the scale separation assumptions (Sect. 4.3).

  6. If parameterizations are not justified theoretically or empirically, they have more structural uncertainty (Sect. 4.4).

  7. But if parameterizations are justified theoretically and empirically, they provide a partial explanation of why a climate model is epistemically reliable with respect to phenomena and their interactions at certain scales (Sect. 4.4).

  8. Analyzing justifications for parameterizations helps address the policymaker's epistemological question (Sect. 4.5).

As can be seen, my account of structural uncertainty centers on parameterizations and scale separation assumptions, and there are two reasons for this focus. First, as noted above in relation to Parker's account of structural uncertainty, simplifications are often a source of structural uncertainty. Parameterizations are a kind of simplification in climate models and are a major source of structural uncertainty. My account aims to analyze the extent to which parameterizations introduce structural uncertainty. Second, as I show below, climate modelers sometimes use equilibrium arguments, which imply the scale separation assumption, to provide theoretical justification for parameterization schemes, and they also try to provide empirical support for the scale separation assumptions. This justificatory practice is different from introducing parameterization schemes ad hoc. Ad hoc parameterizations are a severe source of structural uncertainty, but theoretically and/or empirically justified parameterizations can reduce structural uncertainty at the scales at which they are justified. Thus, my account aims to articulate the kinds of justification that are available for parameterizations.

Before going into the details of how scale separation is used in modeling, it is worth clarifying the relation between spatiotemporal scales and phenomena. Bogen and Woodward (1988) defined phenomena as coherent, repeatable patterns in nature. I interpret phenomena in this sense as target systems in climate modeling, and they occur at many different spatiotemporal scales (Emanuel 1986). That is, phenomena depend for their existence on spatiotemporal scales.Footnote 20 Since the climate is complex and climatic phenomena occur across many scales, it is difficult to identify phenomena at different scales and model their interaction across scales. This is a main goal of climate science in so far as it aims to produce policy-relevant knowledge. For example, scientists are interested in knowing how the increase in global average temperature (a large-scale phenomenon) will affect smaller scale phenomena like hurricane frequency and intensity in the Caribbean (Emanuel 1999, 2005).

The need for parameterizations

Modelers introduce parameterizations because of pragmatic and epistemic constraints (McFarlane 2011). When understood merely as a response to these constraints, a parameterization is an element of the model that should be removed as the constraints are overcome. However, increasing the resolution of a model, i.e. reducing the number of parameterized processes, does not always reduce structural uncertainty (Van der Sluijs et al. 1998; McWilliams 2007; McFarlane 2011; Knutti and Sedláček 2013). So, are there epistemic merits to parameterizations that are relevant to an account of structural uncertainty? Akio Arakawa, who developed one of the most widely used cloud parameterization schemes, suggests that parameterizations do have epistemic merits:

Even under a hypothetical situation in which we have a model that resolves all scales, it alone does not automatically give us an understanding of scale interactions. Understanding inevitably requires simplifications, including various levels of “parameterizations”…. Parameterizations thus have their own scientific merits (Arakawa 2004, p. 2496).

According to Arakawa, parameterizations can be simplifications that aid the understanding of interactions across scales. Understanding how phenomena interact across scales can reduce structural uncertainty as it contributes to the theoretical understanding that increases the epistemic reliability of a model. And across-scale interactions are a recognized source of structural uncertainty (Slingo et al. 2003). In what follows, I will provide an argument for Arakawa’s claim.Footnote 21

Parameterizations are ways to account for the effect of unresolved processes on the resolved ones. So, in general, to have any epistemic merit they need to account for this effect for the right reasons. One such reason would be a physically principled argument for the use of a given parameterization scheme. Another reason would be the empirical evidence for the assumptions used in the physical argument. The next subsections provide an example of how climate scientists have constructed such arguments and how they have appealed to empirical evidence.

Scale separation and parameterizations

One of the main strategies used to develop parameterization schemes is to implicitly or explicitly assume that there is some kind of physical or statistical equilibrium between the resolved scale and the unresolved processes (see, e.g., Williams 2005, p. 2935; McFarlane 2011, p. 491). This equilibrium assumption is a scale separation assumption: at equilibrium, the smaller scale processes occur on time scales so short (or spatial scales so small) that they do not matter for the larger scale description. In fact, parameterization schemes often appeal to scale separation in order to distinguish between large scale and small scale phenomena (Plant and Craig 2008, p. 89; Arakawa 2004). For example, the scale separation assumption is used to justify the parameters appearing in equations that describe Newtonian fluids, such as viscosity.Footnote 22 The assumption that there is a separation of scales is introduced independently of whether any such separation can be detected in observations (Plant and Craig 2008, p. 90).Footnote 23

An illustrative example of a parameterization from climate science is the Arakawa-Schubert scheme for the parameterization of cumulus convection (Arakawa and Schubert 1974; Arakawa 2004). Cumulus convection is a physical process that is responsible for the persistence of cumulus clouds.Footnote 24 The Arakawa-Schubert parameterization attempts to code for the collective effect of smaller scale phenomena on the persistence of a group of clouds in a slowly varying environment, and the effect of these clouds on the environment. In particular, this parameterization scheme does not explicitly represent the processes that occur internally to each cloud, such as the conversion of humidity into rain droplets, but it explicitly represents the (vertical) mass flux of moist air into and out of the collective group of clouds (Arakawa 2004, p. 2498).

Now, this parameterization is justified in terms of a quasi-equilibrium of the mass flux with its environment: the environment creates an instability that is favorable for clouds to entrain (influx) moisture from the environment, and the clouds respond via a stabilization during which the clouds detrain (outflow) moisture to the environment. This quasi-equilibrium condition is warranted if the microphysical processes internal to the clouds occur at time scales short enough that the clouds can be considered to adjust instantaneously to their large scale environment (Arakawa 2004, p. 2505). However, it is unclear whether any such scale separation can be observed (Arakawa 2004, p. 2496).

Independently of whether there is any empirical verification of the use of this equilibrium assumption, we can ask what the physical insight of using this assumption might be. For sufficiently large scales, the effect of a group of clouds can be represented in terms of their average contribution to the mass flux of humidity. At these particular scales, the clouds can be considered to be in quasi-equilibrium with their environment. Thus, for the purpose of modeling the large scale environment, the Arakawa-Schubert parameterization explains the coherence and stability of a group of cumulus clouds in terms of the quasi-equilibrium of mass-flux.Footnote 25
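The logic of such a quasi-equilibrium closure can be sketched with a toy two-time-scale system (purely illustrative; the equations and parameter values are invented for this sketch and are not the Arakawa-Schubert scheme): when a fast variable relaxes quickly toward equilibrium with a slow one, the slow dynamics can be closed by replacing the fast variable with its equilibrium value.

```python
import numpy as np

def full_model(eps=1e-3, x0=1.0, dt=1e-4, t_max=2.0):
    """Slow variable x driven by a fast variable y that relaxes toward x
    on the short time scale eps (the 'unresolved' process)."""
    x, y = x0, 0.0
    for _ in range(int(t_max / dt)):
        dx = -y                    # slow dynamics forced by the fast variable
        dy = (x - y) / eps         # fast relaxation toward equilibrium y* = x
        x += dx * dt
        y += dy * dt
    return x

def parameterized_model(x0=1.0, dt=1e-4, t_max=2.0):
    """Quasi-equilibrium closure: replace y by its equilibrium value y* = x,
    so the slow equation becomes dx/dt = -x."""
    x = x0
    for _ in range(int(t_max / dt)):
        x += -x * dt
    return x

# with strong scale separation (small eps) the closure tracks the full model;
# with weak separation (large eps) it drifts away
print(full_model(eps=1e-3), full_model(eps=0.5), parameterized_model())
```

The sketch also shows where structural uncertainty enters: as eps grows, the equilibrium assumption degrades and the closed model's predictions diverge from the full dynamics.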

It is important to note that this or any parameterization scheme will only work under specific atmospheric conditions. As atmospheric conditions change, due to anthropogenic greenhouse gas emissions or otherwise, moisture availability, chemical composition of the atmosphere and temperature conditions change too. These changes will affect the applicability of this parameterization scheme and the validity of its justification. So, the applicability of parameterizations changes as the climate changes, and this can be a source of structural uncertainty.Footnote 26 Nevertheless, highlighting the conditions under which modeling assumptions are valid is a starting point for an account that addresses the applicability of models.

Empirical justification of scale separation assumptions

We have seen so far that the scale separation assumption is one of the key ingredients in justifying parameterization schemes. Thus, whether a parameterization scheme introduces structural uncertainty can depend on whether the scale separation assumption used to justify the parameterization is itself warranted.

A scale separation assumption is clearly warranted when a relevant scale separation is observed in the time series data for the parameters of interest. In such cases, the parameterization justified by the scale separation assumption is not likely to introduce substantial structural uncertainty.Footnote 27 However, as noted above, such a scale separation is rarely observable in climate data, and not all parameterization schemes are formulated with parameters that are easily observable.
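When suitable time series are available, one way to look for such a separation is a gap in the power spectrum between slow and fast variability. The following sketch uses synthetic data (none of the numbers come from any real climate record) to illustrate the idea:

```python
import numpy as np

# synthetic time series with two well-separated time scales:
# a slow oscillation (period 100 time units) and a fast one (period 2)
n, dt = 4096, 0.1
t = np.arange(n) * dt
signal = np.sin(2 * np.pi * t / 100.0) + 0.5 * np.sin(2 * np.pi * t / 2.0)

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(n, d=dt)

# a 'spectral gap': the band between the two peaks carries negligible power
slow_peak = power[(freqs > 0.005) & (freqs < 0.02)].max()
fast_peak = power[(freqs > 0.4) & (freqs < 0.6)].max()
gap = power[(freqs > 0.05) & (freqs < 0.3)].max()
print(gap < 0.01 * min(slow_peak, fast_peak))  # True: the scales are separable
```

In real climate data the picture is rarely this clean: variability tends to fill the intermediate band, which is precisely why the observational warrant for scale separation is often contested.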

Another empirical interpretation of scale separation is in statistical terms. The Arakawa-Schubert parameterization described above, for example, requires that the scheme describe a cloud ensemble large enough that individual fluctuations in the parameters describing the microphysics of clouds are negligible, so that the quasi-equilibrium assumption holds (Arakawa and Chen 1987, p. 117). So, this scheme will be applicable only if there is a sufficiently large number of cumulus clouds in the target system.Footnote 28
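The statistical reading of scale separation rests on a familiar averaging effect, which can be sketched as follows (the numbers are hypothetical, chosen only for illustration): the ensemble-mean contribution of many independently fluctuating clouds becomes steady as the ensemble grows, with fluctuations shrinking roughly like 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(42)

def ensemble_flux_std(n_clouds, n_samples=2000):
    """Standard deviation of the ensemble-mean 'mass flux' when each cloud
    contributes an independent fluctuating amount (mean 1, sd 0.5).
    Units and magnitudes are arbitrary; only the scaling matters."""
    fluxes = rng.normal(1.0, 0.5, size=(n_samples, n_clouds))
    return fluxes.mean(axis=1).std()

small, large = ensemble_flux_std(10), ensemble_flux_std(1000)
print(small, large)  # fluctuations of the mean shrink roughly like 1/sqrt(N)
```

This is why the scheme's applicability depends on the number of clouds in the target: with too few clouds, individual fluctuations remain comparable to the mean and the quasi-equilibrium assumption fails.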

Both the theoretical and the empirical aspects of parameterization schemes can help explore structural uncertainty. For example, when the scale separation cannot be clearly detected in the data, but the theoretical justification is well received, various strategies can be taken to explain the lack of scale separation.

One such strategy is to add a stochastic component to the parameterization scheme (see Bony et al. 2015). This is the main approach that has been taken for the case of the Arakawa-Schubert parameterization, where scale separation is contested.Footnote 29 The idea behind this strategy is that while the physical explanation of the stability of the system (e.g. quasi-equilibrium) is appropriate, the scale separation is only loosely observable (Plant and Craig 2008, pp. 87–88). For these cases, then, a stochastic approach is suggested (Plant and Craig 2008; Berner et al. 2017), and the uncertainty related to this kind of parameterization can be explored by having a large enough ensemble of models that captures the variability of the system with respect to this parameterization (Plant and Craig 2008).
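The ensemble strategy can be sketched in miniature (a toy model, not any operational stochastic scheme; the dynamics and noise amplitude are invented for illustration): each ensemble member perturbs the parameterized tendency with noise, and the spread across members gives a handle on the uncertainty the deterministic closure leaves out.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_member(noise_amp, x0=1.0, dt=0.01, steps=500):
    """One ensemble member: a deterministic relaxation dx/dt = -x, with a
    multiplicative stochastic perturbation standing in for unresolved
    variability in the parameterized tendency."""
    x = x0
    for _ in range(steps):
        tendency = -x * (1.0 + noise_amp * rng.standard_normal())
        x += tendency * dt
    return x

# a 200-member ensemble; its spread reflects the parameterization uncertainty
ensemble = np.array([run_member(noise_amp=0.5) for _ in range(200)])
print(ensemble.mean(), ensemble.std())
```

The ensemble mean stays close to the deterministic solution, while the standard deviation quantifies how sensitive the outcome is to the unresolved variability — the quantity one would report alongside the prediction.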

Another strategy is to develop different parameterizations altogether, ones that do not rest on assumptions of equilibrium. Giving up equilibrium assumptions may mean that different kinds of mechanisms may be responsible for the observed stability (e.g. Yano et al. 2012), or that no principled parameterization can be obtained. In the former case, the effectiveness of different parameterization schemes in predicting the target of interest—how an ensemble of clouds behaves in a large scale environment—at the appropriate scale can be explored to account for structural uncertainty. In the latter case, the problem collapses to the case of ad hoc parameterizations.

While not explicitly stated in the context of parameterizations, this kind of theoretical understanding has been recognized as being important for systematically exploring representational issues in climate models: Bony et al. (2006, p. 3447), for example, suggest that understanding the physical mechanisms behind empirical estimates of parameters in the climate (in their case, climate feedbacks) can help us understand why different models differ in their estimates and how reliable different models can be for these estimates, and can guide the development of strategies to compare model output with observational data. In other words, this kind of reasoning can guide a systematic exploration of one important aspect of structural uncertainty.

To summarize, the epistemic merits of parameterizations are that in some cases, if well justified, they can help develop an understanding of the interactions of different components of the climate across scales. And, at a minimum, systematically exploring the justificatory strategies of parameterizations can aid the analysis of the structural uncertainty introduced by parameterizations.

Sources of structural uncertainty and partial explanation of reliability

Studying structural uncertainty from the perspective of model building allows the philosopher and the scientist to shed light on the assumptions that generate structural uncertainty. My analysis shows that both theoretical justifications and empirical considerations are used for the development of certain parameterization schemes. Structural uncertainty stemming from parameterizations, then, is a function of how well justified and how empirically accurate these assumptions are. We can now ask, does this approach meet the desiderata described above?

The first desideratum is that an account of structural uncertainty should reveal sources of uncertainty in the model. Frigg et al. (2014) and Parker (2006, 2010, 2011) already do so: nonlinearity puts limits on probabilistic model predictions, and various aspects of modeling strategies, theoretical or pragmatic, introduce structural uncertainty. Focusing on the details of modeling strategies such as the use of parameterizations can give a more detailed account of the severity of structural uncertainty. And while structural uncertainty may indeed be an irreducible aspect of models (McWilliams 2007), it can be explored systematically.Footnote 30

The second desideratum is that an account of structural uncertainty should indicate when and why a model is epistemically reliable, i.e. an account should articulate the conditions under which a model is likely to predict the behavior of its target accurately. Focusing on how parameterizations are introduced into models and how they are justified can help meet this desideratum: when parameterizations are justified theoretically and verified empirically, there is good reason to believe that, for those scales at which the parameterization is relevant, the model is likely to predict the behavior of its target reliably.

The content of a scale separation assumption also indicates the domains of inapplicability of a particular model: the parameters introduced into a model constrain the kinds of processes the model can represent and, thereby, the predictions it can and cannot make. For example, the Arakawa-Schubert parameterization can only be applied when the target contains a sufficiently large number of cumulus clouds to warrant the use of the statistical equilibrium assumption to separate scales of motion.
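
The statistical intuition behind this applicability condition can be illustrated with a toy numerical sketch. This is not the Arakawa-Schubert scheme itself; it is a hypothetical model in which each cloud in a grid box contributes a random "mass flux" (names such as `grid_box_flux` and the exponential flux distribution are illustrative assumptions, not part of the scheme). The point is only the general statistical fact that a grid-box average over many independent elements is nearly deterministic, while an average over few elements fluctuates strongly:

```python
import random

def grid_box_flux(n_clouds, seed):
    # Toy model (not Arakawa-Schubert): each cumulus cloud contributes
    # a random mass flux; the "parameterized" quantity is the grid-box mean.
    rng = random.Random(seed)
    return sum(rng.expovariate(1.0) for _ in range(n_clouds)) / n_clouds

def relative_spread(n_clouds, trials=300):
    # Spread of the grid-box mean across many hypothetical grid boxes,
    # measured as standard deviation divided by the mean.
    means = [grid_box_flux(n_clouds, seed) for seed in range(trials)]
    mu = sum(means) / trials
    sd = (sum((m - mu) ** 2 for m in means) / trials) ** 0.5
    return sd / mu

# A grid box with few clouds has a strongly fluctuating mean flux; one
# with many clouds has a nearly deterministic mean -- the regime in which
# a statistical (ensemble-average) treatment of the sub-grid process is
# warranted.
print(relative_spread(10))    # large relative spread (~0.3)
print(relative_spread(4000))  # small relative spread (~0.02)
```

The relative spread shrinks roughly as one over the square root of the number of clouds, which is why a deterministic parameterization of the ensemble-mean effect is plausible only when the grid box is populated by many clouds.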

Another implication of this analysis is that, since a scale separation assumption effectively ignores details at smaller scales, obtaining reliable downscaled models requires scientists to identify which phenomena are relevant at various smaller scales and how they interact with phenomena at other scales. A downscaling process that does not take the theoretical constraints of the scale separation assumption into consideration will be unreliable.Footnote 31

Policy relevance

The third desideratum concerns the way structural uncertainty depends on the question asked of models: for which parameters and spatiotemporal scales is it possible to explore structural uncertainty systematically? This question is important because policy-relevant knowledge must also accurately represent the conditions under which knowledge claims are made, and the uncertainties tied to them.

While parameterizations are only one component of models that contributes to model uncertainty, my analysis does suggest that structural uncertainty may vary with the scale and parameter of interest, and that scientists should make this variation explicit when giving estimates of uncertainty. More interesting policy-relevant questions arise, however, when structural uncertainty cannot be thoroughly assessed for the predictions policymakers require: for example, when a particular parameterization scheme is not well justified and alternative schemes have not been investigated.Footnote 32 In such cases, is the scientific information decision relevant? There are, in fact, serious issues with climate predictions for which the assessment and representation of uncertainty have not been thoroughly considered (Frigg et al. 2013, 2014, 2015).

Conclusion

In this paper, I have argued that philosophers have identified an important problem in climate science; namely, how best to assess and communicate the nature of structural uncertainty in climate modeling. However, their characterization of structural uncertainty is not sufficiently well-explicated. I have argued that a useful account of structural uncertainty should meet desiderata that are driven by scientific, philosophical, and policy motivations. By focusing on the context of model building and highlighting the centrality of theoretical and empirical considerations in parameterization schemes, and in particular assumptions about scale separation, I have provided an account of structural uncertainty that starts to address epistemic and representational problems facing scientists and philosophers as well as the socio-political needs of policymakers. This is not a complete account, but it shows how a different strategy for answering philosophical questions can yield insights into scientific practice that have so far received insufficient attention.

Notes

  1.

    See the debate between Parker (2009) and Lloyd (2009), and the work of Steele and Werndl (2013, 2018) and references therein.

  2.

    Uncertainty about climate knowledge is usually represented in terms of likelihoods of future events and qualitative levels of confidence. For example, the Intergovernmental Panel on Climate Change (2013, p. 4) uses levels of likelihood ranging from exceptionally unlikely to virtually certain and levels of confidence ranging from very low to very high.

  3.

    A model structure refers to the number and kind of mathematical objects used in the model, such as variables, parameterization schemes, and the relations between them.

  4.

    Following Stensrud et al. (2015), I use the terms “parameterization” and “parameterization schemes” interchangeably. A parameterization may contain one or more empirical parameters.

  5.

    In climate science, the term “coherent structures” refers to those elements of the climate that can be clearly identified as phenomena (see, e.g., Yano 2016).

  6.

    To focus on structural uncertainty, Frigg et al. (2014) assume that we can ignore uncertainty stemming from parameter values as well as from the collection, analysis and input of data into computer models. I will make the same assumption in this paper.

  7.

    This is a natural extension of the classic results about chaos obtained, for example, by May and Oster (1976).

  8.

    In addition, the chaotic and multiscale nature of many elements of the climate system (such as turbulence) implies that there might be a limit to the precision with which models can represent and predict natural processes (McWilliams 2007). This is another reason structural uncertainty is an irreducible part of modeling.

  9.

    Predictive statements about the future of the climate can be understood as statements about the future state of the climate. Conditional statements take the form of projections that represent possible future climate under different forcing scenarios. See Werndl (2019) for a discussion on the difference between predictions and projections.

  10.

    See also Smith and Stern (2011).

  11.

    Since structural uncertainty is only one type of uncertainty that affects model-based statements, an epistemology of structural uncertainty contributes to, rather than provides by itself, an adequate understanding of the uncertainties that affect such statements.

  12.

    Assuming that other types of uncertainty can be ignored (see footnote 6).

  13.

    My formulation of reliability is similar to Winsberg’s (2006). For other formulations, however, see Katzav (2014), Frigg et al. (2015), and Baumberger et al. (2017).

  14.

    For simplicity, below I refer to the reliability of a model, dropping reference to a component of a model. I will refer to the latter when the context demands.

  15.

    Of course, other factors contribute to the reliability of the damped harmonic oscillator. For example, the damping coefficient needs to have the right value.

  16.

    There are other forecast models that are used to predict weather patterns at larger spatial scales over longer temporal scales. These are not the focus of this example.

  17.

    For a discussion of ensembles in climate prediction, see Allen et al. (2006, Sect. 6) and Parker (2013).

  18.

    Retrodictions (i.e., predictions of past climate) have their limitations. For an overview of the debate on confirmation of climate models, see Oreskes (2018).

  19.

    As I noted above, Frigg et al.’s and Parker’s accounts contribute a part that meets the first desideratum. With respect to this desideratum, my discussion is meant to complement their accounts.

  20.

    There are good reasons to believe that the scale-dependency of phenomena is not unique to climate science. The ecologist Levin (1992), for example, has emphasized the importance of the concept of scale for identifying ecological phenomena. Loeb and Imara (2017) have recently made a similar case for astrophysics. Furthermore, philosophers have long recognized the importance of thinking about spatiotemporal scales for scientific understanding of the world. For an early example, see William James’s essay “Great men and their environment” (1896 [1979], see especially p. 166); for more recent examples, see Wimsatt (2007) and Baldissera Pacchetti (2018). I thank Yoichi Ishida for pointing out James’s essay to me.

  21.

    I will also refer only to parameterizations in deterministic models. The differing epistemic implications of deterministic and stochastic parameterizations are interesting but only tangentially relevant to the present discussion, and will be set aside here, since deterministic parameterizations remain the dominant form of parameterizations in climate models. For an argument for introducing stochastic parameterizations in climate models, see Berner et al. (2017).

  22.

    In this context, the scale separation assumption is also known as the “continuum assumption”, which is used to justify the irrelevance of individual molecular motions for the description of properties of fluids (Emanuel 1986).

  23.

    The extent to which scales can be effectively separated is related to the question of model error and predictability across various scales and is a subject of ongoing atmospheric research: small scale turbulence may be important for large scale motion through upscale energy cascade, but it is unclear how turbulence affects larger scale, slowly varying coherent structures observed in the atmosphere—such as the Madden–Julian Oscillation, which is a 30 to 60 day cycle of rainfall over the western Indian and tropical eastern Pacific oceans (Slingo et al. 2003; Tribbia and Baumhefner 2004; Slingo and Palmer 2011; Hoskins 2013; Krishnamurthy 2019). Here I am focusing on the atmospheric components of the climate, and the problem of predictability of the climate is much more severe when more components (cryosphere, biosphere, lithosphere, etc.) are added as target components of climate models. Thus, the ongoing debate about the predictability of the climate is fundamental in atmospheric physics and geophysics. In addition to the references just given, see Lovejoy et al. (2001) for an empirical argument for the multifractal nature of the energy spectrum of the atmosphere. This interpretation of the atmospheric energy spectrum, however, is rather controversial. See Lovejoy et al. (2009) and, for a critique of their approach, Lindborg et al. (2010). In any case, even if the energy spectrum cannot be separated across spatial scales, there is some coherence for temporal scales (Dijkstra 2013). This coherence suggests that there might be a way to understand and predict the phenomena at and across scales (Krishnamurthy 2019). I thank an anonymous reviewer for helpful comments on this issue.

  24.

    Cumulus clouds are fluffy clouds usually seen when the sky is otherwise clear.

  25.

    Like many developing physical theories, the quasi-equilibrium justification of this parameterization scheme is not uncontested. See McFarlane (2011, pp. 490–491) for alternative interpretations. The justification I have presented above, however, is currently the most influential one.

  26.

    I thank an anonymous reviewer for pointing out this difficulty and David Stainforth for a helpful conversation on the consequences of this difficulty for my account.

  27.

    Whether a parameterization scheme works depends not only on its theoretical and empirical justification. The scheme will work only for a limited range of temporal scales, and will also depend on the resolution of the discretization grid of the model (McFarlane 2011, p. 491).

  28.

    The difference between physical and statistical equilibrium is interesting to explore, especially in the context of this parameterization scheme. However, an analysis of this difference cannot be pursued in this paper due to space limitations.

  29.

    For arguments against the applicability of scale separation, see Yano (1999), Plant and Craig (2008), Yano and Plant (2012, 2020) and Yano et al. (2012). See Adams and Rennó (2003) for a response.

  30.

    I agree with Parker (2011) that multi-model ensembles do not yield robust results because they explore uncertainty insufficiently. My account provides a complementary way of exploring structural uncertainty alongside multi-model ensembles.

  31.

    For the limitations of downscaling, see Frigg et al. (2013, 2015).

  32.

    This may happen when a parameterization is ad hoc, whether because of a lack of theoretical development or because of computational constraints.

References

  1. Adams, D. K., & Rennó, N. O. (2003). Remarks on quasi-equilibrium theory. Journal of the Atmospheric Sciences,60, 178–181.

  2. Allen, M. R., Kettleborough, J. A., & Stainforth, D. A. (2006). Model error in weather and climate forecasting. In T. Palmer & R. Hagedorn (Eds.), Predictability of weather and climate (pp. 391–427). Cambridge: Cambridge University Press.

  3. Arakawa, A. (2004). The cumulus parameterization problem: Past, present, and future. Journal of Climate,17, 2493–2525.

  4. Arakawa, A., & Chen, J. (1987). Cumulus assumptions in the cloud parameterization problem. In T. Matsuno (Ed.), Short- and medium-range numerical weather prediction: Collection of papers presented at the WMO/IUGG NWP symposium, Tokyo, 4–8 August 1986 (pp. 107–131). Tokyo: Meteorological Society of Japan.

  5. Arakawa, A., & Schubert, W. H. (1974). Interaction of a cumulus cloud ensemble with the large-scale environment, Part I. Journal of the Atmospheric Sciences,31, 674–701.

  6. Baldissera Pacchetti, M. (2018). A role for spatiotemporal scales in modeling. Studies in History and Philosophy of Science Part A,67, 14–21.

  7. Baumberger, C., Knutti, R., & Hirsch Hadorn, G. (2017). Building confidence in climate model projections: An analysis of inferences from fit. Wiley Interdisciplinary Reviews: Climate Change,8, 1–20.

  8. Berner, J., Achatz, U., Batte, L., Bengtsson, L., Cámara, A. D. L., Christensen, H. M., et al. (2017). Stochastic parameterization: Toward a new view of weather and climate models. Bulletin of the American Meteorological Society,98, 565–588.

  9. Bogen, J., & Woodward, J. (1988). Saving the phenomena. Philosophical Review,97, 303–352.

  10. Bony, S., Colman, R., Kattsov, V. M., Allan, R. P., Bretherton, C. S., Dufresne, J. L., et al. (2006). How well do we understand and evaluate climate change feedback processes? Journal of Climate,19, 3445–3482.

  11. Bony, S., Stevens, B., Frierson, D. M., Jakob, C., Kageyama, M., Pincus, R., et al. (2015). Clouds, circulation and climate sensitivity. Nature Geoscience,8, 261–268.

  12. Dijkstra, H. A. (2013). Nonlinear climate dynamics. Cambridge: Cambridge University Press.

  13. Emanuel, K. A. (1986). Overview and definition of mesoscale meteorology. In P. S. Ray (Ed.), Mesoscale meteorology and forecasting (pp. 1–17). Boston: American Meteorological Society.

  14. Emanuel, K. A. (1999). Thermodynamic control of hurricane intensity. Nature,401, 665–669.

  15. Emanuel, K. (2005). Increasing destructiveness of tropical cyclones over the past 30 years. Nature,436, 686–688.

  16. Frigg, R., Bradley, S., Du, H., & Smith, L. A. (2014). Laplace’s demon and the adventures of his apprentices. Philosophy of Science,81, 31–59.

  17. Frigg, R., Smith, L. A., & Stainforth, D. A. (2013). The myopia of imperfect climate models: The case of UKCP09. Philosophy of Science,80, 886–897.

  18. Frigg, R., Smith, L. A., & Stainforth, D. A. (2015). An assessment of the foundational assumptions in high-resolution climate projections: The case of UKCP09. Synthese,192, 3979–4008.

  19. Hoskins, B. (2013). The potential for skill across the range of the seamless weather-climate prediction problem: A stimulus for our science. Quarterly Journal of the Royal Meteorological Society,139, 573–584.

  20. Intergovernmental Panel on Climate Change (IPCC). (2013). Climate change 2013: The physical science basis. Contribution of Working Group I to the fifth assessment report of the Intergovernmental Panel on Climate Change. Retrieved from https://www.ipcc.ch/report/ar5/wg1/.

  21. James, W. (1896 [1979]). Great men and their environment. In F. H. Burkhardt, F. T. Bowers, & I. K. Skrupskelis (Eds.), The will to believe and other essays in popular philosophy (pp. 163–189). Cambridge, MA: Harvard University Press.

  22. Katzav, J. (2014). The epistemology of climate models and some of its implications for climate science and the philosophy of science. Studies in History and Philosophy of Modern Physics,46, 228–238.

  23. Knutti, R. (2008). Should we believe model predictions of future climate change? Philosophical Transactions of the Royal Society A,366, 4647–4664.

  24. Knutti, R., & Sedláček, J. (2013). Robustness and uncertainties in the new CMIP5 climate model projections. Nature Climate Change,3, 369–373.

  25. Krishnamurthy, V. (2019). Predictability of weather and climate. Earth and Space Science,6, 1043–1056.

  26. Levin, S. A. (1992). The problem of pattern and scale in ecology: The Robert H. MacArthur award lecture. Ecology,73, 1943–1967.

  27. Lindborg, E., Tung, K. K., Nastrom, G. D., Cho, J. Y. N., & Gage, K. S. (2010). Comment on “Reinterpreting aircraft measurements in anisotropic scaling turbulence” by Lovejoy et al. (2009). Atmospheric Chemistry and Physics Discussions,10, 1401–1402.

  28. Lloyd, E. A. (2009). I—Varieties of support and confirmation of climate models. Proceedings of the Aristotelian Society, Supplementary,83, 213–232.

  29. Loeb, A., & Imara, N. (2017). Astrophysical Russian dolls. Nature Astronomy,1, 0006.

  30. Lovejoy, S., Schertzer, D., & Stanway, J. D. (2001). Direct evidence of multifractal atmospheric cascades from planetary scales down to 1 km. Physical Review Letters,86, 5200–5203.

  31. Lovejoy, S., Tuck, A. F., Schertzer, D., & Hovde, S. J. (2009). Reinterpreting aircraft measurements in anisotropic scaling turbulence. Atmospheric Chemistry and Physics,9, 5007–5025.

  32. May, R. M., & Oster, G. F. (1976). Bifurcations and dynamic complexity in simple ecological models. American Naturalist,110, 573–599.

  33. McFarlane, N. (2011). Parameterizations: Representing key processes in climate models without resolving them. Wiley Interdisciplinary Reviews: Climate Change,2, 482–497.

  34. McWilliams, J. C. (2007). Irreducible imprecision in atmospheric and oceanic simulations. Proceedings of the National Academy of Sciences,104, 8709–8713.

  35. Oreskes, N. (2018). The scientific consensus on climate change: How do we know we’re not wrong? In E. A. Lloyd & E. Winsberg (Eds.), Climate modelling: Philosophical and conceptual issues (pp. 31–64). Cham: Palgrave Macmillan.

  36. Parker, W. S. (2006). Understanding pluralism in climate modeling. Foundations of Science,11, 349–368.

  37. Parker, W. S. (2009). II—Confirmation and adequacy-for-purpose in climate modelling. Proceedings of the Aristotelian Society, Supplementary,83, 233–249.

  38. Parker, W. S. (2010). Predicting weather and climate: Uncertainty, ensembles and probability. Studies in History and Philosophy of Modern Physics,41, 263–272.

  39. Parker, W. S. (2011). When climate models agree: The significance of robust model predictions. Philosophy of Science,78, 579–600.

  40. Parker, W. S. (2013). Ensemble modeling, uncertainty and robust predictions. Wiley Interdisciplinary Reviews: Climate Change,4, 213–223.

  41. Parker, W. S., & Risbey, J. S. (2015). False precision, surprise and improved uncertainty assessment. Philosophical Transactions of the Royal Society A,373, 20140453.

  42. Plant, R. S., & Craig, G. C. (2008). A stochastic parameterization for deep convection based on equilibrium statistics. Journal of the Atmospheric Sciences,65(1), 87–105.

  43. Slingo, J., Inness, P., Neale, R., Woolnough, S., & Yang, G. (2003). Scale interactions on diurnal to seasonal timescales and their relevance to model systematic errors. Annals of Geophysics,46(1), 139–155.

  44. Slingo, J., & Palmer, T. (2011). Uncertainty in weather and climate prediction. Philosophical Transactions of the Royal Society A,369, 4751–4767.

  45. Smith, L. A., & Stern, N. (2011). Uncertainty in science and its role in climate policy. Philosophical Transactions of the Royal Society A,369, 4818–4841.

  46. Stainforth, D. A., Downing, T. E., Washington, R., Lopez, A., & New, M. (2007). Issues in the interpretation of climate model ensembles to inform decisions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences,365, 2163–2177.

  47. Steele, K., & Werndl, C. (2013). Climate models, calibration, and confirmation. British Journal for the Philosophy of Science,64, 609–635.

  48. Steele, K., & Werndl, C. (2018). Model-selection theory: The need for a more nuanced picture of use-novelty and double-counting. British Journal for the Philosophy of Science,69(2), 351–375.

  49. Stensrud, D. J., Coniglio, M. C., Knopfmeier, K. H., & Clark, A. J. (2015). Model physics Parameterization. In G. R. North, J. Pyle, & F. Zhang (Eds.), Encyclopedia of atmospheric science (2nd ed., Vol. 4, pp. 167–180). Amsterdam: Academic Press.

  50. Thompson, E. L. (2013). Modelling North Atlantic storms in a changing climate. PhD thesis, Imperial College London.

  51. Tribbia, J. J., & Baumhefner, D. P. (2004). Scale interactions and atmospheric predictability: An updated perspective. Monthly Weather Review,132, 703–713.

  52. Van der Sluijs, J., Van Eijndhoven, J., Shackley, S., & Wynne, B. (1998). Anchoring devices in science for policy: The case of consensus around climate sensitivity. Social Studies of Science,28, 291–323.

  53. Werndl, C. (2019). Initial-condition dependence and initial-condition uncertainty in climate science. British Journal for the Philosophy of Science,70, 953–976.

  54. Williams, P. D. (2005). Modelling climate change: The role of unresolved processes. Philosophical Transactions of the Royal Society,363, 2931–2946.

  55. Wimsatt, W. C. (2007). Re-engineering philosophy for limited beings: Piecewise approximations to reality. Cambridge, MA: Harvard University Press.

  56. Winsberg, E. (2006). Models of success versus the success of models: Reliability without truth. Synthese,152, 1–19.

  57. Yano, J. I. (1999). Scale-separation and quasi-equilibrium principles in Arakawa and Schubert’s cumulus parameterization. Journal of the Atmospheric Sciences,56, 3821–3825.

  58. Yano, J. I. (2016). Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent? Journal of Physics A: Mathematical and Theoretical,49, 284001.

  59. Yano, J. I., Liu, C., & Moncrieff, M. W. (2012). Self-organized criticality and homeostasis in atmospheric convective organization. Journal of the Atmospheric Sciences,69, 3449–3462.

  60. Yano, J. I., & Plant, R. S. (2012). Convective quasi-equilibrium. Reviews of Geophysics,50, RG4004. https://doi.org/10.1029/2011rg000378.

  61. Yano, J. I., & Plant, R. S. (2020). Why does Arakawa and Schubert’s convective quasi-equilibrium closure not work? Mathematical analysis and implications. Journal of the Atmospheric Sciences,77, 1371–1385.

Acknowledgements

I would like to thank Robert Batterman, Erik Curiel, Roman Frigg, Yoichi Ishida, David Stainforth, Porter Williams, and three anonymous reviewers for their helpful comments on previous versions of this paper. I would also like to thank the scientists at the Priestley International Center for Climate, and in particular Lauren Gregoire and Chetan Deva, for stimulating discussions on parameterizations and uncertainty in climate modeling.

Author information

Corresponding author

Correspondence to Marina Baldissera Pacchetti.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Baldissera Pacchetti, M. Structural uncertainty through the lens of model building. Synthese (2020). https://doi.org/10.1007/s11229-020-02727-8

Keywords

  • Uncertainty
  • Climate science
  • Climate model
  • Parameterization
  • Scale separation
  • Spatiotemporal scales