Volume 20, Issue 2, pp 215–221

Modeling for Understanding v. Modeling for Numbers

20th Anniversary Paper


I draw a distinction between Modeling for Numbers, which aims to address how much, when, and where questions, and Modeling for Understanding, which aims to address how and why questions. For-numbers models are often empirical; such models can be more accurate than their mechanistic analogues as long as they are well calibrated and predictions are made within the domain of the calibration data. To extrapolate beyond the domain of available system-level data, for-numbers models should be mechanistic, relying on the ability to calibrate to the system components even if it is not possible to calibrate to the system itself. However, the development of a reliable mechanistic model depends on an adequate understanding of the system. This understanding is best advanced using a for-understanding modeling approach. To address how and why questions, for-understanding models have to be mechanistic. The best of these for-understanding models are focused on specific questions, stripped of extraneous detail, and elegantly simple. Once the mechanisms are well understood, one can then decide whether the benefits of incorporating a mechanism in a for-numbers model are worth the added complexity and the uncertainty associated with estimating the additional model parameters.

Key words

modeling · prediction · theory · mechanistic · empirical · extrapolation · interpolation


I draw a distinction between two types of modeling that actually represent extremes on a continuum. The first I call Modeling for Numbers. The questions addressed using these models can be summarized as: How much, where, and when? For example, how much carbon will be sequestered or released, by which parts of the biosphere, on what time course over the next 100 years (for example, Cramer and others 2001)? The use of these models is clearly important; they address pressing environmental issues and attract a large amount of research money and effort. The second type of modeling I call Modeling for Understanding. The questions addressed with these models can be summarized as: How and why? For example, why can there be only one species per limiting factor (Levin 1970)? These for-understanding questions are more qualitative than the for-numbers questions. The emphasis of modeling for understanding is to understand underlying mechanisms, often by stripping away extraneous detail and thereby sacrificing quantitative accuracy. Modeling for understanding is at least as important as modeling for numbers (Ågren and Bosatta 1990), although its application to pressing ecological issues might be less direct.

Modeling for Numbers

There is no inherent reason why a for-numbers model has to be mechanistic. Answers to how much, where, and when can frequently be found based on past experience using purely empirical or statistical models. Such models have been used for thousands of years, for example, to decide when to sow crops (after the Nile flood; Janick 2002). Modern science relies on non-mechanistic models in many ways. For example, to assess the medical risk of smoking, LaCroix and others (1991) followed 11,000 individuals, 65 years of age or older, for 5 years to quantify the relationship between mortality rates and smoking (Table 1). The resulting tabular model is purely correlative and therefore cannot address the how and why connecting smoking to mortality, but it has diagnostic and predictive value. The push for “big data” approaches in Ecology hopes to capitalize on analogous analyses of large ecological databases (for example, Hampton and others 2013).
Table 1

Relative Mortality Rates in Relation to Smoking for Men and Women Over 65. Source: LaCroix and others (1991)

                   Men              Women
Current smoker     2.1 (1.7–2.7)    1.8 (1.4–2.4)
Former smoker      1.5 (1.2–1.9)    1.1 (0.8–1.5)

Numbers indicate the factor (and 95% confidence limit) by which mortality rates increase relative to individuals of the same sex who have never smoked.

Empirical models are common in ecology. For example, biomass allometric equations (Yanai and others 2010), stand self-thinning relationships (Vanclay and Sands 2009), and degree-day sum phenology models (Richardson and others 2006) are all empirical models. Although various mechanisms might be hypothesized based on an examination of these models (for example, West and Brown 2005), the models themselves have no underlying mechanism and therefore describe, rather than explain, the relationship.
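A degree-day phenology model of the kind cited above is a pure empirical summation and can be sketched in a few lines. The base temperature and degree-day threshold here are hypothetical placeholders, not values fitted by Richardson and others (2006):

```python
# A purely empirical degree-day phenology model: predict the day of
# budburst as the day on which the accumulated degree-day sum crosses a
# calibrated threshold. Base temperature and threshold are hypothetical.

def day_of_budburst(daily_mean_temps, base_temp=4.0, threshold=150.0):
    """Day-of-year on which accumulated degree-days first reach the threshold."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(0.0, temp - base_temp)
        if accumulated >= threshold:
            return day
    return None                        # threshold never reached

spring = [2.0 + 0.15 * d for d in range(200)]      # idealized spring warming trend
print(day_of_budburst(spring))                     # baseline climate
print(day_of_budburst([t + 2.0 for t in spring]))  # a 2°C warmer spring: earlier date
```

The model summarizes the correlation between accumulated warmth and budburst without invoking any physiological mechanism, which is precisely why extrapolating it to a novel climate is risky.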

Empirical models like the ones listed above have obvious value. I would further argue that in terms of producing quantitative predictions, empirical models in Biology are often, perhaps usually, more accurate than mechanistic models. For example, I cannot conceive of a mechanistic model doing as well predicting increased mortality rates with smoking as the LaCroix and others (1991) tabular model (Table 1); there is simply too much uncertainty associated with any hypothesized causal mechanism. Even if the underlying mechanism is well understood, error in estimating the parameters needed to implement a mechanistic model adds uncertainty that might overwhelm any benefit of a mechanistic approach (O’Neill 1973). I think that most empirical models are more accurate than their mechanistic analogues, with two caveats: (1) that there are enough data available to adequately calibrate the empirical model and (2) that the predictions are interpolated within the domain of the data used to calibrate the empirical model.

The weakness of empirical models is in extrapolation. Outside the domain of the calibration data, there is just no way to know how well the empirical model will work, and in many cases, the extrapolation is known to be poor (for example, Richardson and others 2006). I suspect that most empirically based ecological models will not do well if extrapolated to the warmer, high-CO2 conditions of the future; the new conditions will change productivity, allometry, competition, phenology, and many other ecosystem characteristics and thereby alter the relationships underlying these models.

Of course, many of the most pressing environmental issues involve extrapolation into conditions for which there is little or no data (for example, under future CO2 concentrations). Because of the long response times of most ecosystems, experimental approaches cannot generate the data needed to develop empirical models quickly enough to be of practical use. The only alternative is to use mechanistic models.

But why should mechanistic models be better for extrapolation than empirical models? The main reason is that it is often possible to empirically constrain mathematical representations of the components of a system even when it is not possible to similarly constrain an empirical representation of the whole system. Mechanistic models take advantage of the hierarchical structure of ecosystems (O’Neill and others 1986) and tie system-level behaviors to the characteristics of and interactions among the components of that system (Rastetter and Vallino 2015). Because of this hierarchical structure, the system components have to be smaller and respond more quickly than the system itself (O’Neill and others 1986), which makes them more tractable for experimental and observational studies than the whole system. For example, the long-term, whole-system question to be addressed might be: Will forests sequester carbon over the next 200 years of elevated CO2 and warming? Addressing this question empirically at the ecosystem scale would require replicated experiments on whole forests that lasted 200 years. With that data, one might then derive empirical relationships between initial stand biomass and soil properties and the magnitude of carbon sequestration or loss. However, such an approach is not of much use for predicting those responses for the next 200 years because the experiment takes too long. The mechanistic alternative is to instead conduct short-term experiments (<10 years) on individual trees of different ages and different species, and on soils with different characteristics and then piece that information and any other available information together in a mechanistic model of the ecosystem to try to predict the long-term, whole-system rates of carbon sequestration.

This mechanistic approach also has its caveats (Rastetter 1996). Although short-term experiments can constrain representations of system components, they do not yield information about feedbacks acting at a system level when those components are linked together. Thus, there is no way to know if the slow-responding system feedbacks that might dominate long-term responses are adequately represented in the model. The model might therefore be corroborated with existing short-term data even though it is inadequate for making long-term projections. Conversely, the system-level predictions of the model might be falsified with short-term, high-frequency data even though the long-term, slow-responding feedbacks that dominate long-term responses, and will eventually override the short-term, high-frequency responses, are in fact adequately represented in the model. Predictions from such models should therefore be treated with caution (Stroeve and others 2007); confidence should instead build slowly over time through the iterative process of model development and testing and the accumulation of many independent sources of corroborating evidence.

A key objective of modeling for numbers is often prediction accuracy. O’Neill (1973) postulated a tradeoff between model complexity and errors associated with estimating the parameters that are needed to represent that complexity in the model. As more process detail is incorporated into the model, prediction of system-level dynamics should improve because more of the processes determining those dynamics are included in the model. However, the added model complexity comes at the price of having to estimate more parameters. Error in estimating those parameters will propagate through the model and, as more parameters are added, prediction accuracy will deteriorate. Thus, as the model becomes more complex, there should be a tradeoff between errors associated with lack of mechanistic detail in a too simple representation of the system (systematic error) versus cumulative errors associated with estimation of more and more parameters to account for those mechanistic details (estimation error). This tradeoff results in an optimum model complexity where overall prediction error is minimized.
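A toy calculation can make this tradeoff concrete. The two error curves below are hypothetical stand-ins (not O’Neill’s original formulation), chosen only so that their sum has an interior minimum:

```python
# Hypothetical illustration of the O'Neill (1973) tradeoff: systematic
# error falls as mechanistic detail (parameters) is added, while
# parameter-estimation error accumulates; their sum has an interior minimum.

def systematic_error(n_params):
    return 10.0 / (1 + n_params)      # declines with added mechanistic detail

def estimation_error(n_params):
    return 0.6 * n_params             # grows with each parameter to estimate

total = {n: systematic_error(n) + estimation_error(n) for n in range(1, 21)}
optimum = min(total, key=total.get)   # complexity with the lowest total error
print(optimum, round(total[optimum], 2))
```

With these illustrative curves the minimum falls at an intermediate complexity: neither the simplest nor the most detailed model predicts best.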

The O’Neill (1973) analysis, however, presupposes that the underlying structure of the system being modeled is actually understood. Only if that structure is understood will added model complexity be guaranteed to reduce systematic error. If it is not understood, then the added complexity might have no relation to the real mechanism, and systematic error could actually increase. Thus, there is at least one more axis to be considered in the O’Neill (1973) analysis, an axis reflecting how well the system is understood.

Modeling for Understanding

My ontological perspective is strictly reductionist; in principle, all properties of a system can be explained, and therefore understood, based on the properties of its component parts and their interactions. However, what seems straightforward in principle is often intractable in practice. The problem is the daunting complexity of biological systems. Bedau (2013) argues that some systems have interactions that are “too complex to predict exactly in practice, except by crawling the causal web.” In this view, emergent system properties can be fully explained in terms of the properties of its component parts and their interactions, but that explanation might be “incompressible” in the sense that the system properties can only be replicated by simulation of the full complexity of the system (Bedau 2013).

The issue of incompressibility is hugely problematic. Taken to its extreme, it implies that a system can only be understood from the perspective of a model that is at least as complex as the system itself. What possible use could such a model be, other than to demonstrate that you have “crawled the causal web” correctly? Certainly, the heuristic value of such a model would be very limited. Indeed, the formulations of many for-understanding models in ecology are selected explicitly for ease of analytical or graphical analysis, that is, for compressibility (for example, Lotka 1925; Volterra 1926; MacArthur and Levins 1964; Tilman 1980). Achieving this compressibility requires a high degree of abstraction, a focus on a specific subset of system properties, and the sacrifice of quantitative accuracy. In exchange, there can be substantial heuristic return.

Unlike for-numbers models, which at least have accurate quantitative prediction as a common goal, it is difficult to generalize about for-understanding models except to say that they have to be mechanistic. Otherwise, how could they address how and why questions? However, a mechanistic model does not require inclusion of every process or mechanism ever described for the system. As I imply above, such an approach is counterproductive; it degrades the heuristic value of the model and therefore impedes understanding rather than enhances it. The key modeling step, and often the most difficult aspect of modeling for understanding, is identifying only those components and processes absolutely needed to address the question being asked.

The typical for-understanding model has three elements: (1) a characterization of the potential behaviors of each of the relevant system components, (2) a characterization of the interactions among these system components, and (3) a set of boundary conditions that specify, for example, the initial properties of the system components and the influence of any factors outside the system on the components of the system. The very best for-understanding models are elegantly simple.

A classic example of an elegant for-understanding model is the Lotka–Volterra model of the interactions among competing species (Lotka 1925; Volterra 1926):
$$ \frac{{{\text{d}}N_{i} }}{{{\text{d}}t}} = r_{i} \,N_{i} \,\frac{{K_{i} - \sum_{j = 1}^{n} {\alpha_{ji} \,N_{j} } }}{{K_{i} }} $$
where Ni is the number of individuals of species i, ri is the intrinsic growth parameter for species i, Ki is the number of individuals of species i that the environment is able to support in the absence of competition (carrying capacity), αji is the number of individuals of species i that are displaced from the carrying capacity by one individual of species j, n is the number of competing species, and t is time. The components of the system are the environment, characterized by the carrying capacities for each of the n species (Ki), and the n species of competing populations, characterized by their intrinsic rates of growth (riNi). The environment interacts with each of the species through a density-dependent feedback that slows the rate of population growth as the population size approaches the environment’s carrying capacity for that species ([Ki − Ni]/Ki; here I have assumed αii = 1; Figure 1). Each species j interacts with the other species i by reducing the carrying capacity of the environment for species i in proportion to the abundance of species j (αjiNj). The only boundary conditions needed are the initial sizes of each of the n populations.
Figure 1

Comparison of net growth versus species abundance for equations 1, 2, and 5. Upper panel: Lotka (1925) and Volterra (1926) model (equation 1) with ri = 0.01, αii = 1, and αji = 0 for j ≠ i. Middle panel: MacArthur and Levins (1964) model (equation 2) with gi = 0.1, kij = 10, mi = 0.05, and the most limiting resource R held constant at the specified value. Lower panel: Rastetter and Ågren (2002) model (equation 5) with gi = 0.125, kij = 10, mi = 0.05, αi = 0.001, γi = 0.00431, and the most limiting resource R held constant at the specified value.
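The dynamics of equation 1 are easy to explore numerically. The sketch below uses a simple Euler scheme with illustrative parameter values (they are not taken from the text or Figure 1); with interspecific competition stronger than intraspecific competition (αji > 1 for j ≠ i), the initially larger population excludes the other:

```python
# Euler integration of equation 1 for two competing species.
# All parameter values are illustrative, not taken from the text.

def lotka_volterra_step(N, r, K, alpha, dt):
    """One Euler step of dNi/dt = ri*Ni*(Ki - sum_j alpha[j][i]*Nj)/Ki."""
    return [
        N[i] + dt * r[i] * N[i]
        * (K[i] - sum(alpha[j][i] * N[j] for j in range(len(N)))) / K[i]
        for i in range(len(N))
    ]

N = [10.0, 9.0]                   # initial population sizes (boundary conditions)
r = [0.5, 0.5]                    # intrinsic growth parameters
K = [100.0, 100.0]                # carrying capacities
alpha = [[1.0, 1.5],              # alpha[j][i]: individuals of i displaced by one of j
         [1.5, 1.0]]              # interspecific competition stronger than intraspecific

for _ in range(2000):             # integrate to t = 200
    N = lotka_volterra_step(N, r, K, alpha, dt=0.1)
print([round(x, 2) for x in N])   # the initially larger species approaches K;
                                  # the other is competitively excluded
```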

The Lotka–Volterra model spawned a large body of research up through the early 1980s (for example, Gause 1934; MacArthur and Wilson 1967; Parry 1981). This research sought to examine the nature of competition, the structuring of communities, and the struggle for existence. The model is still used today, but mostly as a component within larger models (for example, Pao 2015). However, perhaps the most important legacy of this or any for-understanding model is that through its limitations, it inspires a new generation of models.

The most influential of these next-generation models is one developed by MacArthur and Levins (1964) and further developed and applied especially by Tilman (1977, 1980):
$$ \frac{{{\text{d}}B_{i} }}{{{\text{d}}t}} = g_{i} \,B_{i} \mathop {\hbox{min} }\limits_{j = 1}^{p} \left[ {\frac{{\,R_{j} }}{{k_{ij} + R_{j} }}} \right] - m_{i} \,B_{i} $$
$$ \frac{{{\text{d}}R_{j} }}{{{\text{d}}t}} = S_{j} - \sum\limits_{i = 1}^{n} {\left( {q_{ij} \,g_{i} \,B_{i} \mathop {\hbox{min} }\limits_{k = 1}^{p} \left[ {\frac{{R_{k} }}{{k_{ik} + R_{k} }}} \right]} \right)} $$
where Bi is the biomass of species i, gi and mi are the growth and turnover parameters for species i, Rj is the abundance of resource j, kij is the half-saturation constant for uptake of resource j by species i, Sj is the net supply rate of resource j to the environment, qij is the amount of resource j needed to produce one unit of biomass for species i, n is the number of species, p is the number of resources, and t is time. This model was developed to provide a more explicit representation of the mechanisms controlling the dynamics of the environmental resources (equation 3) than the earlier Lotka–Volterra model (Tilman 1987). It provided a much deeper understanding of the nature of resource limitation and competition than did a general decrease in the carrying capacity of the environment for species i by species j. In addition, it provided a quantitative interpretation for Gause’s (1934) competitive exclusion principle (n ≤ p). From equation 2, at steady state in the absence of competition, species i will draw down the abundance of its most-limiting resource (j) to a value \( \left( {R_{ij}^{*} } \right) \) just able to sustain the population:
$$ R_{ij}^{*} = \frac{{k_{ij} \,m_{i} }}{{g_{i} - m_{i} }} $$

If any other species k is introduced into the environment and is able to draw down resource j below this level \( \left( {R_{kj}^{*} < R_{ij}^{*} } \right) \), then species i will go extinct (Figure 1). Thus, the environment can support at most p species, each with a different most-limiting resource and none able to draw down other resources below the \( R_{ij}^{*} \) of the other species.
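The R* rule can be made concrete with a small calculation. The parameter values below echo those in the Figure 1 caption and are illustrative only:

```python
# R* from the steady-state expression above: the resource level just able
# to sustain a population. Parameter values echo the Figure 1 caption and
# are illustrative only.

def r_star(k, m, g):
    """R* = k*m/(g - m) for half-saturation k, turnover m, and growth g."""
    return k * m / (g - m)

r1 = r_star(k=10.0, m=0.05, g=0.125)   # faster-growing species
r2 = r_star(k=10.0, m=0.05, g=0.100)   # slower-growing species
print(round(r1, 2), round(r2, 2))      # → 6.67 10.0
```

The species with the lower R* (here the faster grower) draws the shared resource below the level its competitor needs and wins competition for that resource.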

A limitation of the MacArthur and Levins (1964) model is that it provides no resolution to Hutchinson’s (1961) paradox of the plankton; in the real world, the number of coexisting species in a plankton community (and many other communities) appears to exceed the number of potentially limiting resources. Rastetter and Ågren (2002), in response to this limitation, showed that any number of species can coexist even on only one limiting resource by replacing the growth term in equation 2 with one that is concave downward with respect to biomass rather than proportional to biomass (Figure 1; making turnover concave upward works equally well):
$$ \frac{{{\text{d}}B_{i} }}{{{\text{d}}t}} = g_{i} \,B_{i} \frac{{\alpha_{i} B_{i} + 1}}{{\gamma_{i} B_{i} + 1}}\mathop {\hbox{min} }\limits_{j = 1}^{p} \left[ {\frac{{\,R_{j} }}{{k_{ij} + R_{j} }}} \right] - m_{i} \,B_{i} $$
$$ \frac{{{\text{d}}R_{j} }}{{{\text{d}}t}} = S_{j} - \sum\limits_{i = 1}^{n} {\left( {q_{ij} \,g_{i} \,B_{i} \frac{{\alpha_{i} B_{i} + 1}}{{\gamma_{i} B_{i} + 1}}\mathop {\hbox{min} }\limits_{k = 1}^{p} \left[ {\frac{{R_{k} }}{{k_{ik} + R_{k} }}} \right]} \right)} , $$
where αi < γi are parameters that make the growth term in equation 5 monotonically increasing and concave downward as biomass increases. Rastetter and Ågren (2002) argue that this formulation is more realistic than equation 2 because the surface area through which uptake occurs increases proportionally more slowly than total biomass. For example, once a vegetation canopy closes, there is little further increase in leaf area even though the total biomass continues to increase.
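A direct simulation of equations 5 and 6 illustrates this coexistence. The growth-curve parameters below follow the Figure 1 caption; the supply rate S, quota q, and initial conditions are hypothetical choices for the sketch:

```python
# Euler integration of equations 5 and 6: two species sharing one resource.
# g, k, m, alpha, and gamma follow the Figure 1 caption; S, q, and the
# initial state are hypothetical. With the concave-downward growth term,
# both species persist on a single limiting resource.

def uptake(B, R, g, k=10.0, alpha=0.001, gamma=0.00431):
    """Concave-downward growth term of equation 5."""
    return g * B * (alpha * B + 1) / (gamma * B + 1) * R / (k + R)

def step(B, R, g, S, q, m=0.05, dt=0.1):
    u = [uptake(b, R, gi) for b, gi in zip(B, g)]
    B = [b + dt * (ui - m * b) for b, ui in zip(B, u)]
    R = max(R + dt * (S - q * sum(u)), 0.0)   # resource balance, equation 6
    return B, R

g = [0.125, 0.100]               # the two species differ only in growth parameter
B, R = [50.0, 50.0], 20.0        # initial biomasses and resource level
for _ in range(100000):          # integrate to t = 10000
    B, R = step(B, R, g, S=0.5, q=0.01)
print([round(b) for b in B], round(R, 1))  # both biomasses remain positive
```

Under equation 2, the slower-growing species would be excluded; here each species instead settles at the biomass where its own declining per-capita uptake just balances turnover.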

In addition to allowing the coexistence of any number of species regardless of the number of potentially limiting resources, this new formulation also made several other predictions: (1) the number of coexisting species increases with the net supply rate of the resource (Sj), (2) to displace an existing species, a new species must decrease the concentration of the limiting resource by a finite, rather than infinitesimal amount, (3) the magnitude of the decrease in the limiting resource required to displace a resident species increases as the net resource supply rate (Sj) increases, and (4) as the net resource supply rate (and hence community productivity) increases, the Shannon-Wiener diversity first increases as the number of coexisting species increases and then decreases as the evenness in relative abundance of those species decreases.

Each step in this sequence of models, from Lotka (1925) and Volterra (1926) to MacArthur and Levins (1964) and to Rastetter and Ågren (2002), was made because of a perceived limitation in the previous model. However, the way that the limitations of for-understanding models are assessed seems to me to be qualitatively different from the way that limitations of for-numbers models are assessed. For-numbers models are assessed based on goodness of fit, precision, and accuracy of prediction. Because for-understanding models are elegantly simple and stripped of extraneous detail, testing them is more nuanced. How does one test an abstraction? Generally, these models are tested in very controlled environments where extraneous factors can be minimized (for example, chemostats, pot studies; Gause 1934; Ayala and others 1973; Tilman 1977). However, “in the field … additional complexities are likely to occur [that] do not exist in the laboratory. Experimentally tested models, however, [might] help in the understanding of natural processes” (Ayala and others 1973). Thus, “the more general models of theoretical biology are used to deduce the form of possible solutions, rather than to predict future states of the system” (Wangersky 1978).

The importance of for-understanding models is much more qualitative and much deeper than simply predicting how much, when, and where. Because the extraneous detail has been stripped from these models, it is easy to impose conditions under which these models fail. However, is such a test relevant? At some level, “all models are wrong but some are [nevertheless] useful” (Box 1979). In my mind, the real value of these models is heuristic (Oreskes and others 1994). Each step in the progression along the series of models presented above represents a whole new way of thinking about the problem. Thus, the progression seems to me to be more analogous to Kuhn’s (1996) paradigm shifts than to the incremental corrective steps, driven by Popper’s (1968) falsification or Platt’s (1964) strong inference, expected in the development of a for-numbers model.


For-understanding models are well developed in community and evolutionary ecology and provide a strong theoretical foundation for the science (for example, Moore and de Ruiter 2012, or scan the titles of any volume of Theoretical Ecology). For-understanding models are less well developed in ecosystem biogeochemistry (but see Ågren and Bosatta 1996), arguably resulting in a less well established theoretical foundation (see Menge and others 2008 and Pastor 2016 for approaches that merge evolutionary ecology and biogeochemistry). Conversely, for-numbers models have made enormous strides in biogeochemistry, especially in relation to global carbon budgets (for example, Thornton and others 2009). The interplay between for-understanding and for-numbers models is vital. Global carbon models are beginning to incorporate feedbacks associated with nutrient limitation and cycling; for-understanding modeling at the ecosystem scale to set a firm scientific foundation for carbon–nutrient interactions should help in that development. The interactions between community and biogeochemical processes are not well understood or incorporated in for-numbers models; again, a scientific foundation based on for-understanding models is needed to resolve these issues and to identify how best to incorporate them in larger for-numbers models.

“Observation and theory get on best when they are mixed together, both helping one another in the pursuit of truth” (Eddington 1935). Students should therefore develop a broad enough understanding of theoretical modeling to be able to communicate effectively with modelers even if they do not model themselves. Modeling is a vital part of any science. In Ecology, it provides a way to synthesize knowledge across a diverse array of sub-disciplines and to extrapolate that knowledge to make predictions. However, the ability to predict is not the same as understanding. Understanding requires the theoretical foundation provided by for-understanding models. In general, more theoretical work characterized by modeling for understanding is needed in ecology, especially in biogeochemistry. First we need to understand our planet; then we might be able to predict the consequences of what we are doing to it and devise ways of avoiding the worst of those consequences.



Acknowledgements

This work has been supported in part by NSF grants 0949420, 1026843, 1065587, 1107707, and 1503781. I also thank Gus Shaver, Göran Ågren, Bonnie Kwiatkowski, and Joe Vallino for many years of batting around these ideas.


References

  1. Ågren G, Bosatta E. 1990. Theory and model or art and technology in ecology. Ecol Model 50:213–20.
  2. Ågren G, Bosatta E. 1996. Theoretical ecosystem ecology: understanding element cycles. Cambridge: Cambridge University Press.
  3. Ayala FJ, Gilpin ME, Ehrenfeld JG. 1973. Competition between species: theoretical models and experimental tests. Theor Popul Biol 4:331–56.
  4. Bedau MA. 2013. Weak emergence drives the science, epistemology, and metaphysics of synthetic biology. Biol Theory 8(4):334–45.
  5. Box GEP. 1979. Robustness in the strategy of scientific model building. In: Launer RL, Wilkinson GN, Eds. Robustness in statistics. New York: Academic Press. p 201–36.
  6. Cramer W, Bondeau A, Woodward FI, Prentice IC, Betts RA, Brovkin V, Cox PM, Fisher V, Foley JA, Friend AD, Kucharik C, Lomas MR, Ramankutty N, Sitch S, Smith B, White A, Young-Molling C. 2001. Global response of terrestrial ecosystem structure and function to CO2 and climate change: results from six dynamic global vegetation models. Glob Change Biol 7:357–73.
  7. Eddington A. 1935. New pathways in science. New York: MacMillan Co. p 211.
  8. Gause GF. 1934. The struggle for existence. Baltimore: Williams and Wilkins.
  9. Hampton SE, Strasser CA, Tewksbury JJ, Gram WK, Budden AE, Batcheller AL, Duke CS, Porter JH. 2013. Big data and the future of ecology. Front Ecol Environ 11:156–62.
  10. Hutchinson GE. 1961. The paradox of the plankton. Am Nat 95:137–45.
  11. Janick J. 2002. Ancient Egyptian agriculture and the origins of horticulture. Acta Hortic 582:23–39.
  12. Kuhn TS. 1996. The structure of scientific revolutions. 3rd edn. Chicago: University of Chicago Press.
  13. LaCroix AZ, Lang J, Scherr P, Wallace RB, Cornoni-Huntley J, Berkman L, Curb JD, Evans D, Hennekens CH. 1991. Smoking and mortality among older men and women in three communities. N Engl J Med 324:1619–25.
  14. Levin S. 1970. Community equilibria and stability, and an extension of the competitive exclusion principle. Am Nat 104:413–23.
  15. Lotka AJ. 1925. Elements of physical biology. Baltimore: Williams and Wilkins.
  16. MacArthur R, Levins R. 1964. Competition, habitat selection, and character displacement in a patchy environment. Proc Natl Acad Sci USA 51:1207–10.
  17. MacArthur RH, Wilson EO. 1967. The theory of island biogeography. Princeton: Princeton University Press.
  18. Menge DNL, Levin SA, Hedin LO. 2008. Evolutionary tradeoffs can select against nitrogen fixation and thereby maintain nitrogen limitation. PNAS 105:1573–8.
  19. Moore JC, de Ruiter PC. 2012. Energetic food webs: an analysis of real and model ecosystems. Oxford: Oxford University Press.
  20. O’Neill RV. 1973. Error analysis of ecological models. In: Nelson DJ, Ed. Radionuclides in ecosystems. CONF-710501. Springfield: National Technical Information Service. p 898–908.
  21. O’Neill RV, DeAngelis DL, Waide JB, Allen TFH. 1986. A hierarchical concept of ecosystems. Princeton: Princeton University Press.
  22. Oreskes N, Shrader-Frechette K, Belitz K. 1994. Verification, validation, and confirmation of numerical models in the earth sciences. Science 263:641–6.
  23. Pao CV. 2015. Dynamics of Lotka–Volterra cooperation systems governed by degenerate quasilinear reaction-diffusion equations. Nonlinear Anal 23:47–60.
  24. Parry GD. 1981. The meanings of r- and K-selection. Oecologia 48:260–4.
  25. Pastor J. 2016. Ecosystems ecology and evolutionary biology, a new frontier for experiments and models. Ecosystems 20(2). doi:10.1007/s10021-016-0069-9.
  26. Platt JR. 1964. Strong inference. Science 146:347–53.
  27. Popper KR. 1968. The logic of scientific discovery. New York: Harper and Row.
  28. Rastetter EB. 1996. Validating models of ecosystem response to global change. BioScience 46(3):190–8.
  29. Rastetter EB, Ågren GI. 2002. Changes in individual allometry can lead to species coexistence without niche separation. Ecosystems 5:789–801.
  30. Rastetter EB, Vallino JJ. 2015. Ecosystem’s 80th and the reemergence of emergence. Ecosystems 18:735–9.
  31. Richardson AD, Bailey AS, Denny EG, Martin CW, O’Keefe J. 2006. Phenology of a northern hardwood forest canopy. Glob Change Biol 12:1174–88.
  32. Stroeve J, Holland MM, Meier W, Scambos T, Serreze M. 2007. Arctic sea ice decline: faster than forecast. Geophys Res Lett 34:L09501. doi:10.1029/2007GL029703.
  33. Thornton PE, Doney SC, Lindsay K, Moore JK, Mahowald N, Randerson JT, Fung I, Lamarque J-F, Feddema JJ, Lee Y-H. 2009. Carbon-nitrogen interactions regulate climate-carbon cycle feedbacks: results from an atmosphere-ocean general circulation model. Biogeosciences 6:2099–120.
  34. Tilman D. 1977. Resource competition between plankton algae: an experimental and theoretical approach. Ecology 58:338–48.
  35. Tilman D. 1980. Resources: a graphical-mechanistic approach to competition and predation. Am Nat 116:362–93.
  36. Tilman D. 1987. The importance of the mechanisms of interspecific competition. Am Nat 129:769–74.
  37. Vanclay JK, Sands PJ. 2009. Calibrating the self-thinning frontier. For Ecol Manag 259:81–5.
  38. Volterra V. 1926. Fluctuations in the abundance of a species considered mathematically. Nature 118:558–60.
  39. Wangersky PJ. 1978. Lotka–Volterra population models. Annu Rev Ecol Syst 9:189–218.
  40. West GB, Brown JH. 2005. The origin of allometric scaling laws in biology from genomes to ecosystems: towards a quantitative unifying theory of biological structure and organization. J Exp Biol 208:1575–92.
  41. Yanai RD, Battles JJ, Richardson AD, Blodgett CA, Wood DM, Rastetter EB. 2010. Estimating uncertainty in ecosystem budget calculations. Ecosystems 13:239–48.

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

The Ecosystems Center, Marine Biological Laboratory, Woods Hole, USA
