Computer Simulation and Statistical Modeling: Rivals or Complements?

  • Thomas K. Burch
Open Access
Part of the Demographic Research Monographs book series (DEMOGRAPHIC)


The model-based view of science encourages a ‘toolbox’ approach to theory, models, methods, and techniques. Some tools are multipurpose. Some purposes can be served by more than one tool. Some standardization in the use of tools is inevitable, but it is important to avoid stylized analysis, or the rote use of a tool for a given purpose. These ideas help clarify the relationship of two kinds of quantitative analysis, simulation and statistical modeling (what Adrian Raftery, paraphrasing C. P. Snow, has termed ‘the two main cultures of quantitative research’), notably the lack of interaction between them in day-to-day research.

4.1 Introduction

Social science is rife with polarities, defined by the Oxford English Dictionary as ‘the possession or exhibition of two opposite or contrasted aspects, principles, or tendencies.’ There is the fundamental polarity inherent in all empirical science between data and theory—in John Locke’s phrase, ‘experience and reflection on experience.’ There is a polarity between the micro and macro levels of analysis (Courgeau (2004) speaks of ‘une opposition macro-micro,’ a macro-micro opposition), with at best partial synthesis of the two.

In some of the social sciences, notably sociology and political science, there are polarities between quantitative and qualitative research, between empirical research and critical analysis, and between value-free and explicitly ideological social science.

Less widely discussed are polarities involving different methodological traditions within quantitative social science. These affect many scientific disciplines, including demography. They have led to tension and at times hostility, thereby weakening empirical social science in its central tasks, and in its confrontations with post-modernist critics and old-fashioned radical positivists. Not least of these is the polarity between statistical modeling, viewed as the fitting of models to observational data, and computer modeling or simulation, viewed as an attempt to represent some portion of the real world, or some theory about the real world, in a way that goes beyond observational data. The statistician Adrian Raftery, paraphrasing C.P. Snow, refers to ‘two main cultures of quantitative research – statistical modeling and deterministic simulation models…’ noting that the proponents of the two approaches seldom interact (Raftery 2004).

In this chapter, I examine the last-mentioned polarity as it manifests itself in demography and related empirical social sciences. My central argument is that statistical modeling and computer simulation are best viewed as complementary, not competing, modes of analysis. I attribute much of the tension between the two approaches to a continuing misunderstanding of the interrelations among data, models, theory and reality, and to confusion about the epistemological character of different kinds of demographic analysis (Burch 2003c, and Chap. 2 above). The focus is on the failure to recognize that it is not so much the form of an analytic tool, but its application, the use to which it is put, that determines the epistemological character of an analysis.

Viewed in this light, much of the tension surrounding simulation and statistical modeling derives from the fact that the two approaches naturally tend toward different uses, simulation toward theoretical analysis, and statistical modeling toward empirical analysis. The polarity is at base the familiar polarity between theory and experiment – or, viewing science as a human institution, between theorists and empiricists – still very much unresolved in contemporary social science, but seen as a natural division of labor in more mature sciences.

To set the stage for the discussion, the next section presents a 2 × 2 table illustrating the crucial distinction between the form of an analytic tool and its uses or applications. I then consider the dichotomy between models of data and models of the real world (or theories about the real world), and suggest a softening of the earlier dichotomies, with examples of mixed forms of analysis. I conclude with a comment on the abstract, and therefore incomplete, character of all scientific knowledge, whether empirical or theoretical.

4.2 Analytic Tools and Their Disparate Uses

Much of the methodological confusion surrounding the epistemology of theory, models, and data can be clarified by distinguishing between an analytic tool and the purpose for which it is used. A simple two-by-two table can help. Figure 4.1 emphasizes the fact that the same mathematical or statistical tool can be used for different kinds of scientific analysis, that different tools can be used for the same purpose, and so on for the other two cells: same/same and different/different. The table can be used to classify demographic work, with examples readily available for each of the cells, although the distribution is far from even.
Fig. 4.1

Analytic tools and their uses

Cell a (same/same) might be thought of as containing stylized or stereotypical demographic analysis, in which the same analytic apparatus is used over and over for the same purpose. A classic example is the use of the cohort-component projection algorithm for population forecasting. This use has been canonized by governments and other official organizations, such as the United Nations Population Division and The World Bank. In demographic texts, the cohort-component technique is routinely presented as the standard forecasting method, sometimes the only method.

This has resulted in comparability among population forecasts by governments and other agencies, and ensured a kind of correctness in procedure. But it also has discouraged exploration of other possible approaches to population forecasting, the use of different tools for the same purpose. The identification of population forecasting with cohort-component forecasts is a form of what Oeppen and Wilson (2003) have termed reification in demography: the confusion of an abstract measure or model in demography with the underlying real-world process to which it pertains.1 The continued use of the same technique, and virtually only that technique, for the same purpose is not inherently fallacious, but it tends to lead to reification by sheer repetition and habit. We tend to develop tunnel vision.

For population forecasting, cell b (same purpose/different tools) would include, as well as cohort-component projection: the Leslie matrix; the exponential and logistic functions, and their extensions (common in population biology and ecology); systems models for projection (e.g., with feedback from outcomes to inputs). In the beginnings of modern population forecasting in the early decades of the twentieth century, a variety of methods was explored, before the cohort-component approach became dominant almost to the point of monopoly (de Gans 1999; Burch 2003a and Chap. 10 below). Only recently has the field begun once again to explore alternatives to the standard model, such as the addition of feedbacks, socio-economic and environmental variables, and stochastic inputs (with confidence intervals on outputs).
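The contrast among tools in cell b can be made concrete. The sketch below, with made-up parameter values (not estimates from any real population), projects the same hypothetical population with two different tools: the unconstrained exponential and the saturating logistic.

```python
import math

P0 = 10.0   # initial population (millions) -- illustrative assumption
r = 0.02    # intrinsic growth rate -- illustrative assumption
K = 40.0    # logistic carrying capacity (millions) -- illustrative assumption

def exponential(t):
    """Unconstrained exponential growth: P(t) = P0 * exp(r*t)."""
    return P0 * math.exp(r * t)

def logistic(t):
    """Logistic growth: behaves like the exponential near t = 0,
    but saturates at the carrying capacity K."""
    return K / (1 + (K / P0 - 1) * math.exp(-r * t))

for t in (0, 25, 50, 100):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Near the starting point the two tools agree closely; over longer horizons they diverge sharply, which is precisely why having more than one tool for the same purpose is informative.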

Another classic example of the use of different tools for the same purpose is the use of both the stable population model and cohort-component projection to clarify the effects of changing mortality, fertility and migration on population age structure. Each approach yielded the same general conclusions (e.g., the centrality of fertility decline in population aging). But each shed distinctive light on various facets of the problem. And some problems were more tractable using one approach rather than the other. Transient dynamics and the role of migration, for example, were easier to study using the projection model.2
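This complementarity can be illustrated numerically. In the sketch below the Leslie-matrix entries are invented for the example: repeated projection and the eigen-analysis of stable population theory answer the same question about long-run growth and age structure, and agree.

```python
import numpy as np

# Three age groups; first row = age-specific fertility rates,
# subdiagonal = survival proportions. All values are made up.
L = np.array([[0.0, 1.2, 0.3],
              [0.9, 0.0, 0.0],
              [0.0, 0.8, 0.0]])

# (a) The projection tool: iterate until the age distribution stabilizes.
n = np.array([100.0, 100.0, 100.0])
for _ in range(200):
    n = L @ n
projected_structure = n / n.sum()

# (b) The stable population tool: the dominant eigenvalue gives the
# long-run growth ratio, its eigenvector the stable age distribution.
eigvals, eigvecs = np.linalg.eig(L)
k = np.argmax(eigvals.real)
lam = eigvals[k].real
stable_structure = np.abs(eigvecs[:, k].real)
stable_structure /= stable_structure.sum()

print(round(lam, 4), projected_structure, stable_structure)
```

The two approaches converge on the same stable age structure, but each makes different questions easy: transient dynamics fall out of the projection, asymptotic growth out of the eigen-analysis.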

The stereotypical approach to population forecasting and other demographic analyses - the tendency to identify one tool with one specific use – has also had the result that cell c (same tool for different purposes) is not as full of examples as might be desirable. The regular presentation and use of the cohort-component projection algorithm for projection, for instance, helped obscure its value for other applications. Romaniuc (1990) has provided the most systematic exposition of this point in his discussion of the standard projection model as ‘prediction,’ ‘simulation,’ and ‘prospective analysis.’ In general, he views the uses of the algorithm as being ranged on a continuum, from the most realistic (prediction) to the least realistic (simulation). Burch (2003b and Chaps. 8 and 9 below) has developed the same line of thought with respect to the life table, arguing that at base it is a theoretical model of cohort survival, with the description of period mortality rates only one of its uses. But current demography texts tend to remain in cell a – the projection algorithm is for forecasting; the life table is for measuring current mortality.

There is nothing novel in the idea that a given analytic tool can be used for several different purposes. Coleman begins his classic work on mathematical sociology (1964) by outlining four major, and different, uses of mathematics in science. The idea is implicit in the adoption of mathematics by so many different disciplines and in the use of some mathematical concepts in so many different substantive contexts. Witness the exponential function, used to study radioactive decay, population growth, interest rates and discounting, and fixing the time of death by body temperature, to mention only a few. In demography, the exponential is usually presented as a ‘technique,’ even though many of its actual uses are speculative and theoretical in character—doubling times, reductio ad absurdum projections (e.g., human population for the next 2000 years at 1.5% per annum).
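Both of these ‘theoretical’ uses are one-line consequences of the exponential function. In the sketch below, the starting population is a round illustrative figure, not a precise estimate.

```python
import math

r = 0.015                          # 1.5% annual growth rate
doubling_time = math.log(2) / r    # rule: T = ln(2) / r, about 46 years

P0 = 6e9                           # rough world population, for illustration
P_2000 = P0 * math.exp(r * 2000)   # exponential projection 2000 years ahead

# The projection yields a population on the order of 10**22 -- the
# absurdity that makes the reductio argument.
print(round(doubling_time, 1), f"{P_2000:.2e}")
```

The point of the second calculation is not prediction but theory: no one believes the number, and that disbelief is itself the analytic result.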

But some tools are closely wedded to particular kinds of analysis, and this leads to cell d (different tools for different purposes). Statistical models, for example, are designed for analyzing data, for summarizing and describing data sets, including relationships among variables. But they are not so useful for the statement of theoretical ideas or the representation of real world systems. For this, one needs tools that can deal with unobserved entities or unmeasured variables in a way that goes beyond simply combining them in an error term.

4.3 Modeling Data and Modeling Ideas About the Real World

A key distinction is between models of a set of empirical observations (data) and models of a set of ideas about real-world processes. Wickens (1982) provides a nice statement of the point, noting that ‘statistical models describe what the data look like, but not how they came about,’ and proceeds to suggest ways for ‘interpreting the data through a description of the mechanisms that underlie them’ (p. 9; see also Hedström and Swedberg 1998). There is a sense in which data are outcomes of a partly ‘black box’ process whose inner workings must be modeled in some other way.

The most common multivariate statistical models such as linear regression and its refinements are often interpreted as though they represent underlying real-world processes. But, as Abbott (1988) has argued convincingly, such an interpretation typically involves the fallacy of reification, the invention of a ‘general linear reality,’ a social world that is linear and largely atemporal. Abbott distinguishes this ‘representational’ interpretation of linear models from their more appropriate ‘entailment’ use: if a theory or hypothesis is sound, then I should find a certain statistical structure in a relevant data set. But now statistical modeling is being used not to state some theoretical idea, but to test it. Traditional statistical analysis is pre-eminently a tool of empirical research.3

The differential equation, by contrast, is pre-eminently the tool of theory. Lotka puts it as follows:

In the language of the calculus, the differential equations display a certain simplicity of form, and are therefore, in the handling of the theory at least, taken as the starting point, from which the equations relating to the progressive states themselves, as functions of time, are then derived by integration (1956, p. 42).

In a footnote, he adds: ‘In experimental observation usually (though not always) the reverse attitude is adopted.’ The study of data on the progressive states of a system over time or at a point in time (as is often the case in contemporary social science) is a matter of statistical analysis.

As I have argued elsewhere, the infrequent use of differential equations in demographic research may be due in part to demography’s relative lack of interest in theory (Burch 2011, and Chap. 5 below).

Cell d (different tools, different purposes) has occasioned more than a little confusion regarding the uses of statistics and modeling, including differential/difference equation models. Much of the confusion stems from a mistaken notion that multivariate statistical analysis can yield essentially theoretical propositions as results. As noted earlier, Abbott (1988) has cogently argued against this idea, typified in his quote from Blalock (1960) that ‘These regression equations are the “laws” of science’ (p. 275), a statement firmly rooted in logical empiricism. Abbott’s review of the differing fortunes of Blalock’s Social Statistics (featuring multiple regression) and Coleman’s Introduction to Mathematical Sociology (featuring differential equations) suggests the popularity of Blalock’s view. Coleman’s work had gone out of print by the time Abbott wrote; Blalock’s text was regularly reprinted and published in new editions. The notion that theory essentially moves beyond the data has been resisted by many quantitative social scientists.

A similar confusion is found in some writings of the systems dynamics school of modeling, based on the pioneering work of Jay Forrester at MIT. In several of their works, representatives of this school extol the superiority of systems dynamics modeling over the use of statistical analysis, as if the two served the same scientific function. In one recent manual, multiple regression is held up to ridicule (for an elaboration, see Burch 2002, and Chap. 3 above). But there is little recognition that most of their systems dynamics models were in fact theoretical, in the sense of speculative and untested. Although they contain some data and often try to reproduce some data series as output, they are not fundamentally empirical. They were often criticized by empiricists, who found them fanciful, and by theorists, who found them lacking in firm theoretical grounding. As Hummon (1990) remarks of computer simulation, ‘Applications in the past have tended to focus on large, complex systems. Simulation models of complex organizations, cities, the world environmental system, were the standard fare. Of course, coherent theories for these phenomena did not exist’ (pp. 65–66). He goes on to note that more recent applications of computer modeling to theory construction are more focused, often tied to existing theoretical models.

4.4 Hybrids and Mixed Forms: Revisiting the Dichotomies

The distinction developed in the previous section is important but it is not absolute. One should think of it in terms of the emphasis or even spirit of a particular analysis. Is it mostly about the data, or is it mostly about the theoretical ideas? Some analyses lie at one or the other pole, for example, regression analysis of data with few if any guiding theoretical ideas, or at the opposite extreme, pure theoretical speculation oriented only casually toward empirical observation.

The advance of any science involves an ever closer intermingling of data and theory. Theory must ultimately be evaluated in terms of its ability to explain or predict empirical observations. And data, at least in a scientific context, are meaningful only if they are collected with an eye to theoretical development. Recent developments in statistics, theory, and computer modeling have tended to blur the distinction and promote a healthy intermingling.

Some statistical models, for example, have moved a little closer to theoretical models. Structural equations models such as path analysis are a case in point. When properly used, they are fit only after one has formulated at least a primitive model of the process at hand. Assumptions must be made about temporal ordering, causal linkage, and direct and indirect effects. A path model begins to unravel the complex process that links outputs to inputs; such models are rudimentary mechanistic models. They remain largely statistical and empirical, however, insofar as they include only variables that have been measured (at least indirectly, as in factor analysis, latent variables, and similar techniques).
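What such a rudimentary mechanistic model adds can be sketched with simulated data from a hypothetical causal chain x → m → y (all coefficients below are invented for the illustration): two regressions recover the direct path and the indirect path routed through the intervening variable, a decomposition that a single regression of y on x cannot provide.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed causal chain: x affects m (path a), m affects y (path b),
# and x also affects y directly (path c). Coefficients are made up.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # a = 0.5
y = 0.7 * m + 0.3 * x + rng.normal(size=n)  # b = 0.7, direct c = 0.3

# Fit the two structural equations by least squares.
a = np.linalg.lstsq(np.c_[x], m, rcond=None)[0][0]
b, c = np.linalg.lstsq(np.c_[m, x], y, rcond=None)[0]

indirect = a * b        # effect of x on y routed through m
total = c + indirect    # matches what a simple y-on-x regression sees
print(round(indirect, 2), round(total, 2))
```

The decomposition rests entirely on the assumed temporal and causal ordering; the data alone cannot supply it, which is the sense in which the model is mechanistic rather than purely descriptive.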

By the same token, computer modeling as theoretical elaboration has begun to incorporate elements of statistical analysis. The development of stochastic population projections is a case in point (see, for example, Raftery et al. 1995; Lee 1999; Keilman et al. 2000).

These blends can require a re-examination of categories like stochastic and deterministic. Consider a largely deterministic systems model that has been slightly modified by the addition of random terms to one of the key relationships (y = f(x) + a, where a is a random term), and by the inclusion of one or more conditional statements involving such a ‘randomized’ variable. This could represent a threshold, such that very different outcomes result from the variable being above or below the threshold. An example might be differential equations models of species extinction once the population declines below a certain size. Now it is a case of a random event triggering a qualitatively different response in an otherwise deterministic model. Do we call such a model deterministic or stochastic?
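A minimal sketch of such a hybrid, with invented parameter values: deterministic growth, a small random term, and an extinction threshold that turns a chance fluctuation into a qualitatively different outcome.

```python
import random

def run(seed, n0=50.0, r=0.01, noise=5.0, threshold=20.0, steps=100):
    """One realization: deterministic growth plus a random term,
    with extinction (an absorbing state) below the threshold.
    All parameter values are illustrative."""
    random.seed(seed)
    n = n0
    for _ in range(steps):
        n = n * (1 + r) + random.gauss(0.0, noise)  # deterministic + random
        if n < threshold:      # chance event triggers a qualitative change
            return 0.0         # extinction
    return n

outcomes = [run(seed) for seed in range(200)]
extinct = sum(1 for n in outcomes if n == 0.0)
print(extinct, "of 200 runs went extinct")
```

Each run is almost entirely deterministic, yet across runs the population either persists or vanishes, so neither label fits the model comfortably on its own.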

Another example of a movement of computer modeling toward statistics can be found in more recent incarnations of systems dynamics software, characterized earlier as somewhat hostile toward statistical methods. GoldSim, a relatively new product, emphasizes the addition of random terms to model variables (as described just above), and the running of multiple simulations to yield both average values and variances of results. Recent versions of older software such as ModelMaker and Vensim emphasize statistical tests for goodness of fit to the data. More importantly, they include optimization routines to find the parameter values that yield the best fit (with already known parameter values fixed, at least within given ranges). Statistical methods are being used to specify some key elements of a complex model. But many elements, perhaps the majority, remain beyond the ken of empirical observation. One has a theoretical model that is partly estimated statistically.
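The fitting step such packages automate can be sketched in a few lines: simulate noisy observations from a simple deterministic model, then let a crude grid search stand in for the optimization routine that finds the best-fitting value of an unknown parameter. All numbers are made up for the illustration.

```python
import math
import random

# 'Observed' data: an exponential trend with an unknown growth rate,
# plus measurement noise (both the rate and the noise level are invented).
random.seed(1)
true_r = 0.03
data = [100.0 * math.exp(true_r * t) + random.gauss(0, 2) for t in range(20)]

def sse(r):
    """Sum of squared errors between model and data for growth rate r."""
    return sum((100.0 * math.exp(r * t) - obs) ** 2
               for t, obs in enumerate(data))

# A crude grid search plays the role of the optimization routine.
best_r = min((r / 10000 for r in range(0, 1000)), key=sse)
print(best_r)
```

The recovered rate sits close to the value that generated the data; in a full systems model, only a few parameters would be estimated this way while the rest remain fixed theoretical assumptions.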

The emerging field of computational modeling (agent-based or rule-based models) provides many more examples of an intermingling of determinism and chance (in demography, see Billari and Prskawetz 2003). In earlier micro-simulation such as Monte Carlo models, events occur mainly by chance, according to various probability distributions (but see some early models of family and household formation, in which kinship rules play a central part; see, for example, Wachter 1987). In agent-based models, chance still operates at many points. But central to most models are strong behavioral rules which are determinative, that is, not subject to chance.
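A toy agent-based sketch of this mix of rule and chance (the behavioral rule and all rates are invented for the example): leaving home is governed by a deterministic age rule, but only once a chance event, here a random ‘job offer,’ has occurred.

```python
import random

random.seed(42)

class Agent:
    def __init__(self):
        self.age = 0
        self.has_job = False
        self.left_home = False

    def step(self):
        self.age += 1
        if not self.has_job and random.random() < 0.2:  # chance: job offer
            self.has_job = True
        if self.age >= 18 and self.has_job:             # deterministic rule
            self.left_home = True

agents = [Agent() for _ in range(500)]
for _ in range(25):            # simulate 25 years
    for a in agents:
        a.step()

left = sum(a.left_home for a in agents)
print(left, "of 500 agents have left home by age 25")
```

The aggregate outcome emerges from the interaction of the fixed rule with individual-level chance, which is precisely the blend of determinism and stochasticity the text describes.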

4.5 Concluding Comments

Our conventional views of empirical social science need revision. Demography and mainstream empirical sociology need to develop a more sophisticated approach to theoretical models. We need to reconsider the superordinate position we have granted highly flawed and limited statistical analyses, which have regularly been used to ‘disprove’ perfectly sound and useful theory. Statistical analysis may be closer to the data, but that does not necessarily mean it is closer to reality (see Chap. 2 above). A carefully crafted theory or model, which can include unmeasured, even unmeasurable variables, may be a better representation of reality for many purposes.

Demographers and other highly quantitative social scientists often think of statistical analysis of data as solid and hard-headed, firmly grounded in reality – in sharp contrast to the verbal speculation of theorists or the ‘made-up’ numbers of simulators. But the epistemological differences between theory, modeling and statistical analysis are not as great as our conventional thinking would have it. Statistical analysis is not the bedrock it often is taken to be.

Empirical data sets which we subject to statistical analysis are abstract representations of concrete reality; they are partial, selective, over-simplified depictions of some complex concrete real-world system. The data set does not constitute theory in any meaningful sense, but it often is shaped by the influence of implicit theoretical assumptions as to what data are important. There is no such thing as pure empirical description. There always is selection. Each datum is empirical, and real to the extent of its precision, but the assumption that the overall data set represents an object or system is just that, an assumption.

The standard multivariate statistical models also are highly abstract creations of the human mind. They assume a specific mathematical structure among a limited set of variables, whether that structure exists in the real world or not. If they are thought to represent the world, they are almost certainly grossly over-simplified representations, which is not to say that they may not be useful for some purposes. But an abstraction—say, a linear or log-linear model—added to an abstraction—a data set as described above—does not yield absolute truth about the concrete world. Like theory, they are selective and partial representations. They may be useful for some purposes, but that does not make them true. If such multivariate models are viewed, on the other hand, as ‘entailments’ of some well-developed theory, then they become analogous to experimental results. They do not of themselves explain anything; they only indicate that an explanatory theory has some plausibility.

These limitations of statistical analysis of data are the basis for the notion of the underdetermination of theory by empirical research (see Turner 1987). A striking statement of the problem is provided by Bracher et al. (1993). After a state-of-the-art analysis of unusually rich survey data, they comment: ‘However detailed and comprehensive the ‘explanatory’ factors that we have had at our disposal, they are, after all, only dim reflections of the possibly unmeasurable factors that keep marriages together or drive them apart…’ (p. 423). It is precisely the role of theory to go beyond such dim reflections.

But, as suggested above, the two kinds of work are not so much diametrically opposed as lying toward the opposite ends of a continuum of human attempts to describe and understand the real world of human behavior. All human knowledge, including empirical statistical analyses, is a human invention, a construct. Like theory or modeling, it is selective, abstract, limited, incomplete, provisional—in short, relative, not absolute. There are important differences among different kinds of social scientific knowledge and the processes that generate them. But their fundamental epistemological character is the same. Our scientific convictions often are held as absolutes, as fundamentally true. In fact, scientific knowledge can aspire at best to ‘realism without truth’ (Giere 1999).


  1.

    Abstractions in the standard cohort-component projection model include its deterministic and linear character, the absence of feedbacks or interrelations among input variables, and the absence of any but core demographic variables (no environmental, economic or socio-cultural variables).

  2.

    The habit of thinking in terms of many different tools for the same general purpose is common among demographers when it comes to measurement (cf. the large variety of measures of fertility), but not so common when it comes to other kinds of analysis. The idea of a ‘toolbox’ of theories and models is central to the model-based view of science among philosophers (see Giere 1988, 1999). It is found in the work of some empirically inclined social scientists: see Coleman (1964) on ‘sometimes-true theory’; Meehan (1968); Keyfitz (1975). The more influential doctrine of scientific procedure has been logical empiricism, which aims toward discovery of the one true theory.

  3.

    A qualification is needed on this point, based on the recognition of two very different kinds of theory in science. Cartwright (1983) finds a common distinction in physics between ‘phenomenological’ and ‘fundamental’ theory. The former deals with empirical regularities and ‘laws’ (e.g., Newton’s law of falling bodies), without delving very deeply into explanatory mechanisms. Coleman (1964, pp. 34–52) makes a related distinction between ‘synthetic’ and ‘explanatory’ theory. Insofar as statistical analysis yields findings of strong empirical regularities, even universal relationships, it can provide the building blocks of a theoretical system. This has been precisely the logical empiricist program for science. But in the human sciences, strong empirical laws are sufficiently rare that some other approach to theory development is required (see Meehan 1968). Theory must ultimately be based on empirical research, but, in the face of culture and history and an absence of universal generalizations, it does not simply flow from it through the application of inductive logic. See the contemporary theoretical physicist Roger Newton (1997) on theory as an act of the creative imagination.


  1. Abbott, A. (1988). Transcending general linear reality. Sociological Theory, 6, 169–186.
  2. Billari, F. C., & Prskawetz, A. (2003). Agent-based computational demography: Using simulation to improve our understanding of demographic behaviour. Heidelberg: Physica-Verlag.
  3. Blalock, H. M. (1960). Social statistics. New York: McGraw-Hill.
  4. Bracher, M., Santow, G., Morgan, S. P., & Trussell, J. (1993). Marriage dissolution in Australia: Models and explanations. Population Studies, 47, 403–426.
  5. Burch, T. K. (2002). Computer modeling of theory: Explanation for the 21st century. In R. Franck (Ed.), The explanatory power of models: Bridging the gap between empirical and theoretical research in the social sciences (pp. 245–265). Boston: Kluwer Academic Publishers. See also Ch. 3 in this volume.
  6. Burch, T. K. (2003a). The cohort-component population projection: A strange attractor for demographers. In J. Fleischhacker, H. A. de Gans, & T. K. Burch (Eds.), Populations, projections and politics (pp. 39–58). See Ch. 10 in this volume.
  7. Burch, T. K. (2003b). The life table as a theoretical model. Paper presented at the annual meeting of the Population Association of America, 1–3 May 2003, Minneapolis, Minnesota. See Ch. 8 in this volume.
  8. Burch, T. K. (2003c). Data, models, theory and reality: The structure of demographic knowledge. In F. C. Billari & A. Prskawetz (Eds.), Agent-based computational demography (pp. 19–40). Heidelberg: Physica-Verlag. See Ch. 2 in this volume.
  9. Burch, T. K. (2011). Does demography need differential equations? Canadian Studies in Population, 38, 151–164. See Ch. 5 in this volume.
  10. Cartwright, N. (1983). How the laws of physics lie. Oxford: Clarendon Press.
  11. Coleman, J. S. (1964). Introduction to mathematical sociology. New York: The Free Press.
  12. Courgeau, D. (2004). Du groupe à l’individu: Synthèse multiniveau. Paris: Éditions de l’Institut National d’Études Démographiques.
  13. De Gans, H. A. (1999). Population forecasting 1895–1945: The transition to modernity. Dordrecht: Kluwer Academic Publishers.
  14. Fleischhacker, J., de Gans, H. A., & Burch, T. K. (Eds.). (2003). Populations, projections and politics: Critical and historical essays on twentieth century population forecasting. Amsterdam: Rozenberg Publishers.
  15. Giere, R. N. (1988). Explaining science: A cognitive approach. Chicago: University of Chicago Press.
  16. Giere, R. N. (1999). Science without laws. Chicago: University of Chicago Press.
  17. Hedström, P., & Swedberg, R. (1998). Social mechanisms: An analytic approach to social theory. Cambridge: Cambridge University Press.
  18. Hummon, N. P. (1990). Computer simulation in sociology. Journal of Mathematical Sociology, 15, 65–66.
  19. Keilman, N., Pham, D. Q., & Hetland, A. (2000). Why population forecasts should be probabilistic: Illustrated by the case of Norway. Demographic Research, 6, 409–453.
  20. Keyfitz, N. (1975). How do we know the facts of demography? Population and Development Review, 1, 267–288.
  21. Lee, R. (1999). Probabilistic approaches to population forecasting. In W. Lutz, J. Vaupel, & D. Ahlburg (Eds.), Frontiers of population forecasting. Supplement to volume 24, Population and Development Review (pp. 156–190).
  22. Lotka, A. J. (1956). Elements of mathematical biology. New York: Dover Publications. (First published in 1924 under the title Elements of Physical Biology).
  23. Meehan, E. J. (1968). Explanation in social science: A system paradigm. Homewood: The Dorsey Press.
  24. Newton, R. (1997). The truth of science: Physical theories and reality. Cambridge, MA: Harvard University Press.
  25. Oeppen, J., & Wilson, C. (2003). On reification in demography. In J. Fleischhacker, H. A. de Gans, & T. K. Burch (Eds.), Populations, projections and politics (pp. 113–129).
  26. Raftery, A. E. (2004). See relevant section of personal web page at: http://www./
  27. Raftery, A. E., Givens, G. H., & Zeh, J. E. (1995). Inference from a deterministic population dynamics model for bowhead whales. Journal of the American Statistical Association, 90, 402–416.
  28. Romaniuc, A. (1990). Population projection as prediction, simulation and prospective analysis. Population Bulletin of the United Nations, 29, 16–31. See also revised version in Canadian Studies in Population, 30 (2003), 35–50.
  29. Turner, S. P. (1987). Underdetermination and the promise of statistical sociology. Sociological Theory, 5, 172–184.
  30. Wachter, K. W. (1987). Microsimulation and household cycles. In J. Bongaarts & T. K. Burch (Eds.), Family demography: Models and their applications (pp. 215–227). Oxford: Clarendon Press.
  31. Wickens, T. D. (1982). Models for behavior: Stochastic processes in psychology. San Francisco: W. H. Freeman.

Copyright information

© The Author(s) 2018

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  • Thomas K. Burch
    Department of Sociology and Population Research Group, University of Victoria, Victoria, Canada
