Introduction

Modelling is widely applied to land use and other human activities to investigate the development of socio-ecological systems. Such models provide experimental settings that would otherwise be unavailable, and can produce fundamental advances in our understanding of system dynamics, sensitivities and uncertainties. Model results can also influence the subsequent actions of individuals, societies and institutions. Therefore, socio-ecological systems models have been used in diverse theoretical and applied settings as tools of exploration or prediction, for purposes of research, policy-making and practical land management (Veldkamp and Lambin 2001; Agarwal et al. 2002; Milne et al. 2009).

The achievement of a model’s purpose always depends upon that model’s ability to represent relevant aspects of the real-world system, and the extent to which the model itself is understood (Hofstede 1980). As simplifications designed to enhance understanding, models tread a fine line between real-world relevance and counter-productive complexity, especially as computing power increases (Young et al. 1996; Batty and Torrens 2005; Lustick and Miodownik 2009). However, determining the appropriate level of complexity for any given model is a difficult—if not intractable—problem. For complex systems models, this determination involves not only a choice of terms to include, but also a choice between two fundamentally different conceptualisations of the system in question: as a coherent structural system in its own right, or as an emergent product of the actions and interactions of populations of entities within the system (Batty and Torrens 2005; Easterly 2008).

In many cases, models of land use take a ‘top-down’ (or ‘pattern-based’), reductionist approach that describes specific changes as consequences of system-wide (usually economic) developments, encapsulating observational data in the form of equations or algorithms. This approach is well-established and allows for the application of successful models across large geographical extents (Heistermann et al. 2006; van Meijl et al. 2006; Verburg et al. 2008; Verburg and Overmars 2009; Meiyappan et al. 2014). However, many other models take an alternative ‘bottom-up’ (or ‘process-based’) approach that focuses on basic processes and entities and allows system-wide developments to emerge from these, synthetically producing output data from local interactions. Such models are increasingly used in land science to account for the actions and interactions of individual land managers, populations, institutions and societies in extracting desired goods and services from their environment (Galvin et al. 2006; Matthews et al. 2007; Milne et al. 2009).
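
To illustrate the distinction in concrete (if highly simplified) terms, the following sketch contrasts the two formulations for a hypothetical regional cropland total; all function names, coefficients and parameter values are illustrative assumptions rather than components of any published model. In the top-down version the aggregate quantity is computed directly from system-wide drivers; in the bottom-up version the same kind of quantity emerges from many individual decisions.

import random

# 'Top-down' formulation: regional cropland area projected directly from
# aggregate drivers via a fitted equation (coefficients are illustrative),
# with no representation of individual land managers.
def cropland_topdown(population, gdp_per_capita, a=0.08, b=-0.5):
    return a * population + b * gdp_per_capita  # arbitrary units

# 'Bottom-up' formulation: a regional total that emerges from many
# individual land-manager decisions with heterogeneous thresholds.
def cropland_bottomup(n_managers, crop_price, rng):
    total = 0.0
    for _ in range(n_managers):
        threshold = rng.uniform(0.5, 1.5)  # willingness to convert varies
        holding = rng.uniform(1.0, 10.0)   # farm size varies
        if crop_price > threshold:         # individual conversion decision
            total += holding
    return total

rng = random.Random(42)
print("Top-down projection:", cropland_topdown(population=1e6, gdp_per_capita=20000))
print("Bottom-up (emergent) total:", cropland_bottomup(n_managers=1000, crop_price=1.1, rng=rng))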

Justifications have been offered for both approaches, and for combinations of them, but the choice has generally been seen as one of practicality (e.g. model and data availability, computational feasibility) rather than as a philosophical imperative. As a result, top-down models (such as statistical, optimisation, equilibrium or other equation-based models) dominate in continental to global scale studies, and bottom-up models (such as agent-based, multi-agent or cellular automata models) in sub-national scale studies (Agarwal et al. 2002; Janssen and Ostrom 2006). However, modelling practicalities are becoming less decisive with advances in model design and calibration that remove barriers, in particular to bottom-up modelling over large geographical areas, or to the incorporation of elements of both approaches within a single model (Rounsevell et al. 2013; Arneth et al. 2014). Consequently, several instances of hybrid models exist, incorporating elements of top-down and bottom-up approaches as appropriate to their purpose (e.g. Verburg and Overmars 2009; Murray-Rust et al. 2014).

We argue that the choice of modelling approach in land use science is not arbitrary or incidental, but that it has substantial implications for our understanding of the modelled system. Furthermore, we suggest that there is a philosophical imperative to account for the role of human agency and intentionality in the development of the land system. While top-down approaches can be useful heuristic tools, they are unable to account for the true internal dynamics of the system and so do not provide a reliable basis for predicting, understanding or attempting to intervene in system development. We make this argument on the basis of established principles in philosophy and social science that are highly relevant to land use modelling. We outline some major contributions to relevant philosophical debates, their interpretations in previous studies, and their implications for the theory and practice of land use modelling. Our intention is to contribute to an ongoing and necessary discussion about the basis and purpose of land use modelling.

The land use system as a social system

The use of the Earth’s land surface for the provision of food and other goods and services is a major and pervasive form of human activity (Foley et al. 2005; Ellis et al. 2010). Like other human activities, land use is constrained by physical reality in the form of natural resources and processes. However, this should not obscure the fact that human interactions with naturally occurring and man-made components of the material world are products of their individual, social and institutional context. Land use is governed by legal concepts of ownership and entitlement, concepts of distributive justice, patterns of authority, cultural, aesthetic and religious values and individual decision-making. The potential changes in land use as these conditions vary are myriad, reflecting complex dynamics within and between social and environmental systems (Hanna and Folke 1996; Röling 1997; Ostrom et al. 1999; Ballet et al. 2007). Indeed, these inter-relationships have motivated the development and widespread adoption of socio-ecological systems theory (e.g. Kinzig 2001; Redman et al. 2004; Galvin et al. 2006).

Land use models generally acknowledge the social nature of land use, and many top-down and bottom-up models explicitly seek to represent coherent socio-ecological systems (e.g. Tallis and Kareiva 2006; Lacitignola et al. 2007; Asselen and Verburg 2012). The basic difference between the approaches, then, lies not in their understanding of the modelled system but in their treatment of it—the description that they explicitly or implicitly identify as most appropriate. In practice, this identification is generally informal and strongly influenced by the context and purpose of the modelling exercise, and related factors that make one approach more or less convenient or feasible than another. However, the choice between approaches can never be fully reduced to such factors, because fundamentally different conceptualisations of system dynamics are involved, and both cannot be equally valid. Furthermore, this choice has important implications for the ways in which we understand and interact with the modelled systems (e.g. Shackley et al. 1998; Epstein 2008). These implications are apparent in long-established but contrasting positions in sociology, the social sciences generally and in the philosophy of social science.

Philosophical conceptualisations of social processes

The two dominant land use modelling approaches correspond closely to foundational philosophical conceptualisations of social processes. Top-down modelling, in its encapsulation of system development in general equations or rules, is fundamentally reductionist in nature, and is consistent with similar approaches in social science such as deterministic reductionism (Young et al. 1996; Hollis 2002; Batty and Torrens 2005). The most significant such approach, positivism, originated in the post-Enlightenment recognition of the immense potential of scientific inquiry to predict the outcome of physical processes. This encouraged belief in the power of observation to explain complexity through reduction to a rigorous system of law-like principles. Social scientific positivism represents the claim that similar methodological considerations apply in the explanation of social phenomena (e.g. Comte 1852; Mill 1865; Winch 1958; Durkheim 1982).

Top-down methodologies are also consistent with theoretical approaches that conceive social, psychological, linguistic and economic systems in terms of fixed relations between some constituent elements. A particularly notable example is Karl Marx’s claim that the characteristics of changing human cultures are determined by underlying socio-economic forces, independent of human will, ‘which can be determined with the precision of natural science’ (McLellan 2010, pp. 424–427). Although the extent of Marx’s commitment to this reductionist and determinist thesis is contested, his formulation remains a stark example of an interpretation of social life in which causality is identified with the law-like operations of large scale, aggregate features, and not with the ideas, values and motives of the individuals and groups who comprise a society. In general, such interpretations are therefore identified as structuralist or functionalist (e.g. Lévi-Strauss 1963; Hollis 2002; Jakobson and Halle 2002).

However, much of the positivist optimism concerning the relevance of natural science methodology to the explanation of social phenomena has been displaced from the 1950s onwards. Particularly decisive were several major contributions that stressed the fundamental importance of human agency to the development of social systems. These represent particularly cogent parallels and justifications for bottom-up modelling approaches.

The role of human agency in explaining social phenomena

Movement away from positivist views in the social sciences was stimulated by the highly influential argument of the philosopher and sociologist Peter Winch that ‘the notion of a human society involves a scheme of concepts which is logically incompatible with the kinds of explanation offered in the natural sciences’ (Winch 1958, p. 72). Crucially, in this view, the difference between these modes of explanation is not one of degree, but of kind. This distinction is rooted in Wittgenstein’s ‘second revolution in philosophy’ and his repudiation of the search for elementary propositions that could be laid against the objects that comprise reality, like the ruler imagined in his Tractatus Logico-Philosophicus (Wittgenstein 1922). Instead, Wittgenstein focused on the relationship of human thought to reality, as mediated by language, with its ‘prodigious diversity’ and internal criteria of intelligibility rooted within ‘forms of life’. This relationship precluded the general forms of understanding and action envisaged by positivist approaches, because these would require an impossible vantage point beyond language, or language-dependent symbolisms, from which to determine the relationship of language to reality—and human actions (Wittgenstein 1968).

Winch further argued that the social scientist is confronted with rules, principles and ideas that are internally related to particular forms of life and social practices, and the interpretations of these rules by their constituent agents. Social scientific explanation must therefore prioritise human intentionality and the ‘subjectively intended’ meaning of actions, a requirement completely alien to explanations in the natural sciences (Winch 1958). In our own communities we assume a level of background agreement which makes descriptions of behaviour appear seductively transparent and self-evident. However, we have no assurance that our understanding corresponds with what is observed unless we are able to grasp the internal rules and concepts as well as a wider social context that gives them their meaning. This is why, despite some areas of common understanding, it is possible to misinterpret the intentions and behaviour of others quite radically. Consequently, ‘social interaction can more profitably be compared to the exchange of ideas in a conversation than to the interaction of forces in a physical system’; a conceptualisation incompatible with the idea of causation as employed in the natural sciences (Winch 1958, p. 128; see also, e.g. Maturana 1988).

Human decision-making is based upon alternative courses of action implicit in the linguistic and conceptual rules according to which those decisions are made (which is not to rule out irrational decisions). These rules are indicative rather than programmatic, a point made by Searle where he argues that concepts like ‘money’ and ‘marriage’ are whatever people choose to regard as money and marriage rather than anything that can be identified objectively with physical or behavioural correlates. There is no absolute criterion outside a particular mode of discourse or social practice of what constitutes an application or a breach of the rules (Searle 1984). These rules, moreover, do not lead inexorably in one direction but are subject to a potentially indefinite range of interpretations (e.g. Wittgenstein 1968, pp. 80–81). This often undermines attempts by social analysts or modellers to predict changes in behaviour because of the obvious risk of mistaking trends for causally determined relations at the aggregate level:

“…even given a specific set of initial conditions, one will still not be able to predict the outcome to a historical trend because the continuation or breaking off of that trend involves human decisions which are not determined by their antecedent conditions…the point is that such trends are in part the outcome of the intentions and decisions of their participants.” (Winch 1958, p. 93)

Popper makes a similar claim in his refutation of Hegel and Marx: that the identification of regularities in human affairs too readily encourages confusion between historical or sociological ‘laws’ and trends; between conditional scientific predictions in the physical sciences and the unconditional prophecies of social theories that fail to recognise the retrospective and a priori nature of detected trends (Popper 1969a). There is no guarantee in principle that a trend will continue beyond the point at which it has been identified as a significant social phenomenon.

The legacies of Wittgenstein and Winch: critical realism

Wittgenstein and Winch have exercised profound influences within and beyond social science, many of which are beyond the scope of our argument. However, the work of ‘critical realists’ or ‘critical social theorists’ is especially relevant [e.g. Habermas’ theory of ‘communicative action’ (1984), Bhaskar’s ‘transcendental realism’ (2010) and Giddens’ theory of ‘structuration’ (1976, 1984)]. Despite their diversity, these theories share a tendency to impart structure for the purpose of social scientific explanation; in other words, an element of compromise between the clearly contrasting approaches discussed above. A particularly controversial example is Bhaskar’s conflation of ‘cause’ in the natural sciences with that of ‘reason’ in the explanation of human behaviour (Bhaskar 2010). Sayer follows Bhaskar in formulating an idea of cause that distinguishes it from invariant relationships between distinct events under specifiable conditions, proposing the alternative of potentialities that may or may not be realized. In addition, he too argues that human ‘reasons’ may also be ‘causes’ (Sayer 2000, pp. 110–111).

In fact, these positions do not necessarily violate the basic distinction drawn by Winch between ‘causes’ in natural science and ‘reasons’ in social science because the structures they propose remain entirely subject to social processes. Bhaskar, for example, subsumes ‘reason’ within his metaphysical conception of a universal, non-deterministic causality. Similarly, Sayer’s argument may be interpreted as a play on the generality of the concept of causation; the rhetorical point that ‘reasons’ are ‘causes’ in contexts of purposive human behaviour. Behaviour expressing beliefs, values and reasons indeed has its social consequences, but the task remains of understanding them as responses to criteria internal to the community of agents in question. Indeed, Sayer concedes (in words that closely echo Winch) that there are fundamental differences between the external viewpoint of the natural sciences and the internal one required for the explanation of human behaviour (Sayer 2000, p. 110). Sayer acknowledges ‘the strangeness of social science’ that is ‘perhaps clearest in studies which exhaustively search for enduring regularities in aspects of human behaviour which are manifestly susceptible to change…’—for example when, ‘…in the course of an interview aimed at eliciting an objective account of people’s views or experiences they are inadvertently led to revise them as a result of having to reflect upon them, thereby ‘distorting’ our results’ (Sayer 2000, pp. 252–253).

Of course, these arguments are not universally accepted and indeed are potentially interpretable either as ‘postpositivist’ justifications for top-down modelling or as fundamental challenges to the very concept of causation in human systems (e.g. Pattee 2012; Turner and Robbins 2008). Nevertheless, they retain a fundamental consistency with Winch’s work, offering a difference in emphasis that actually reinforces their shared philosophical underpinnings and implications for modelling. Failure to recognise the centrality of individual intentionality may invalidate the most scrupulously constructed models and vitiate their predictions—indeed, these issues call into question any attempt to model social systems predictively (see below). However, the critical-realist emphasis on social structures illustrates the need to go beyond the individual level and consider the role of social interaction in shaping the development of social systems.

Social interaction: methodological individualism versus ontological individualism

While the individual-level case for bottom-up modelling is clear in the philosophy of social science, it is also open to misinterpretation. This is particularly true of its implications for the nature of social interaction. At one extreme is the view presented by Mill in his System of Logic:

“Men are not, when brought together, converted into another kind of substance…Human beings in society have no properties but those which are derived from, and may be resolved into, the laws of the nature of individual man” (Mill 1900, p. 573).

However, interpretations of this kind fail to account for the social nature of human beings, the essential contribution of traditions, institutions and social practices to individual consciousness and agency. Instead, the centrality of human intentionality merely implies that, while crucially important, social relationships are not of a law-like, deterministic character at either social or individual levels but require interpretation in terms of their internal meanings.

Therefore, emphasis on individual consciousness does not minimise the role of large scale social institutions, but rather portrays them as constellations of ideas, rules and values arising from the interaction of beliefs and actions of individual members of a society (Blumer 1969). As such, institutions exercise profound influences over individual actors, but are equally liable to change profoundly as a result of intentional human agency in response to them; a concept that does not exclude the uniquely creative role that can be played by individuals living and working within specific cultural traditions (as also argued, e.g. by Fuchs et al. 2002 and debated in an economic context by, e.g. Stigler and Becker 1977; Hodgson 2003; Bathelt and Gluckler 2013).

Consequently, our argument does not imply ‘ontological individualism’, the doctrine that all social phenomena are ultimately and exhaustively reducible to the level of individual motives and actions (e.g. Mill 1900, above), or ‘the fallacy of overestimating the extent to which social properties depend on individual people’, as Epstein (2012) characterises it. Even this brief review should illustrate the overstatement in Epstein’s claim that ‘ontological individualism is typically taken as a truism in the philosophy of social science, and is a background assumption of both analytical and computational models in the social sciences’ (Epstein 2012, p. 8; also, e.g. O’Sullivan and Haklay 2000). Winch, for instance, argues that the intelligibility of beliefs, attitudes and expectations of individuals, ‘cannot be explained in terms of the actions of any individual persons’ and that ‘The ways of thinking embodied in institutions govern the way the members of the societies studied by the social scientist behave’ (Winch 1958, pp. 127–128). The relationship is inherently iterative and stochastic, and developments will depend on the next move in the ‘conversation’, according to interpretations of the relevant rules by the participating parties.

Popper advocates methodological individualism as a means of avoiding confusion between abstract theoretical models and the social behaviour they purport to describe (e.g. Popper 1969b, pp. 89–99). He recommends that models should be analysed in terms of the attitudes, expectations and relationships amongst individuals; in other words, by preserving the sense that holistic interpretations of social phenomena are heuristic in character. He nevertheless defends methodological individualism against the idea that these phenomena can be exhaustively reduced to statements about the motives and actions of individuals, a position he describes as ‘psychologism’ and attributes to Mill: ‘…Our actions cannot be explained without reference to our social environment, to social institutions and to their manner of functioning’ (Popper 1969b, p. 90).

Winch clarifies this issue. Social institutions are not simply theoretical constructs that we employ to explain human behaviour. Concepts like ‘marriage’, ‘war’ or ‘government’ are constitutive of our understanding of our own society and belong essentially to our behaviour (Winch 1958, p. 128) (this point perhaps applies even more clearly to concepts like ‘blasphemy’, ‘heresy’ or ‘obscenity’). Any coherent description of the behaviour of individuals is logically parasitic upon the socially mediated concepts that determine its significance for the actors. As a result, our understanding of human behaviour depends upon our ability to grasp the rules and principles that are internal to particular social practices and ways of life. This is quite different to the position of an uncommitted observer who detects regularities in forms of behaviour, because it requires at least some degree of immersion, if only imaginatively, in those practices and forms of life.

Implications for land use modelling

Modelling approach

Top-down and bottom-up methodologies both have some support in social theory, at least superficially. However, the key question is whether they are to be viewed as purely optional alternatives or whether there is a case for prioritising one over the other—and if so, under what circumstances and to what ends. The philosophical debates outlined above have several clear implications for these questions.

‘Bottom-up’ modelling

Perhaps most compellingly, the philosophical stances of Wittgenstein, Winch and Popper suggest that understanding, explanation and prediction in the study of social processes in general must involve the analysis of the behaviours and interactions of actors within the given system. The implication for modelling social processes, including patterns of land use, is that the ‘bottom-up’ approach most closely fulfils this philosophical requirement. However, this fundamental and compelling justification has rarely been used to motivate models of land use change. Indeed, very few models operate according to any explicit theory; something that has previously been noted and has prompted attempts to suggest unified or coherent frameworks (e.g. Turner and Robbins 2008; Hersperger et al. 2010; Schlüter et al. 2014). Consequently, social systems models have been characterised as ‘arbitrary, poorly comparable, competent in highly specific domains of knowledge and disarmingly inapt in any other’ (Conte and Paolucci 2014, p. 4).

Of course, the more abstract and general argument that it is necessary to account for behavioural and social processes is frequently made, and has helped to drive the development and adoption of computational techniques such as agent-based modelling (e.g. Matthews et al. 2007; Clifford 2008). Furthermore, a wide range of particular philosophical issues have been discussed at the interface of social and computational sciences (e.g. Axelrod 1997; Macy and Willer 2002; Miller and Page 2009; Chattoe-Brown 2013), and at the interface of these with geographical science (e.g. Batty 2005; Clifford 2008; O’Sullivan 2008; Turner and Robbins 2008; Torrens 2010). Considerable literature also exists on the interpretation of theory in methodological terms, whether for specific contexts (e.g. Parunak et al. 1998; Cecconi et al. 2010) or with respect to problems such as model application over large systems or geographical extents (e.g. Cioffi-Revilla 2002; Paolucci et al. 2012; Binder et al. 2013; Rounsevell et al. 2013).

However, we suggest that two aspects of the philosophical debates outlined above have not been sufficiently considered in land use modelling: their fundamental, general nature, requiring some consideration to be given to them across the spectrum of land system models, and their practical (as opposed to philosophical or technical) implications for model design. In the first case, we contend that, at a very basic level, the bottom-up approach is better able to uncover the true dynamics of social systems, which may indeed be actively obscured by top-down approaches that ‘confuse order arising from complexity with rational order’ and that have therefore ‘ignored [such order] and adopted methods that exclude it’ (Goldspink 2000, 1.4). Individual, social and institutional behaviours, and their effects, are linked (emergent) facets of one another, and bottom-up approaches are uniquely well-placed to describe the co-evolution of these (Röling 1997; Batty and Torrens 2005; Helbing et al. 2011).

In a practical sense, though, it is clear that bottom-up modelling has not yet fulfilled this potential. Instead, models have tended to converge on narrow, minimalistic interpretations, leaving important facets of human behaviour unexplored (e.g. Antunes and Coelho 2004; Helbing and Balietti 2011; Conte and Paolucci 2014). This is certainly true when individual behaviour is prioritised at the expense of social behaviour, or when social behaviour does not have the ‘downward’ effects that are highlighted so prominently in social theory (O’Sullivan and Haklay 2000; Gilbert 2002; Sawyer 2000; Conte et al. 2013). Perhaps most fundamentally, bottom-up models have been criticised for their lack of attention to cognitive accuracy, focusing on behavioural effects rather than generative behavioural processes (Conte and Paolucci 2014; Dignum et al. 2010). This not only renders models inapplicable in novel circumstances (such as those to be encountered by future societies), but dissolves the principal distinction between top-down and bottom-up approaches.

Nevertheless, it is important not to overstate the significance of these criticisms. While there can never be a strict isomorphism at a fundamental level between model algorithms and the forms of behaviour that are the subjects of examination, improved descriptions remain technically feasible. Furthermore, the basic philosophical case outlined above remains strong, and indeed encourages the necessary links between social scientific theory and modelling practice. There is an obvious need and opportunity for models to build upon and reflect the ranges of behaviours exhibited by relevant populations, while forgoing any temptation to embed them in a deterministic, ‘top down’ account of social processes (or, for that matter, a deterministic ‘bottom-up’ account).
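
As a minimal sketch of what this might mean in practice (the parameter ranges and names below are hypothetical, standing in for empirically derived distributions from surveys or observation), agents can be parameterised from the observed range of behaviours in a population and can act probabilistically rather than deterministically:

import random

rng = random.Random(1)

# Hypothetical ranges standing in for survey-derived behavioural data:
# the profit threshold at which a manager considers changing land use,
# and the probability of acting on that consideration in a given year.
THRESHOLD_RANGE = (0.8, 2.0)
ACTION_PROBABILITY_RANGE = (0.1, 0.6)

class LandManager:
    def __init__(self, rng):
        # Each agent is drawn from the observed range of behaviours
        # rather than given a single 'representative' decision rule.
        self.threshold = rng.uniform(*THRESHOLD_RANGE)
        self.act_probability = rng.uniform(*ACTION_PROBABILITY_RANGE)
        self.land_use = "pasture"

    def step(self, expected_profit_ratio, rng):
        # The decision is probabilistic: meeting the threshold makes a
        # change of land use possible, not inevitable.
        if expected_profit_ratio > self.threshold and rng.random() < self.act_probability:
            self.land_use = "cropland"

managers = [LandManager(rng) for _ in range(500)]
for year in range(10):
    for manager in managers:
        manager.step(expected_profit_ratio=1.3, rng=rng)

converted = sum(m.land_use == "cropland" for m in managers)
print(f"Managers converted to cropland after 10 years: {converted}/500")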

Top-down modelling

Notwithstanding the strong, general case for bottom-up modelling, it is clear that bottom-up accounts are not comprehensive, accurate or sufficient in all cases, and that top-down approaches do have a substantial and legitimate role to play. The investigation of macro-level social phenomena is an important stage in reaching an understanding of social processes and can satisfy ‘a well-motivated positive need for functional abstraction and for relational explanation in terms of typical causal role’, as well as focusing attention on the ‘downward’ element of social interactions (Meyering 2000, p. 189). In other words, ‘top-down’ models should be recognised and valued explicitly as heuristic and provisional in character rather than deterministic descriptions of causative effects.

For instance, recent work on the effects of economic inequality found strong correlations with a comprehensive range of macro-scale social phenomena such as life expectancy, infant mortality, mental illness, obesity and crime levels (Wilkinson and Pickett 2011). It is intuitively obvious that the vast majority of individuals could not be reacting directly to comparative income and wealth differentials; instead it has been suggested that inequality creates social divisiveness, anxiety and feelings of inferiority, leading to ill-health and social dysfunction (ibid.; Layte 2011). This indicates enormous scope for further research at the level of actors within the system in terms of subjective perceptions and understandings of these phenomena. Such insights are common to top-down analyses of all social systems. In the context of land use change, macro-scale relationships between economic or population growth and agricultural expansion or intensification may result from complex behavioural, social and institutional interactions rather than any direct causation, but nevertheless suggest specific, promising foci for further research (Lambin et al. 2001; Castella et al. 2005). Similarly, attempts to ‘socialise the pixel’ by working backwards from aggregate properties to underlying social processes (e.g. Geoghegan et al. 1998) do not identify causative effects but can illuminate underlying processes producing observed trends.

Notwithstanding the importance of such contributions, top-down analyses must be conducted with care. Even where explicitly treated as descriptions of correlations rather than causative factors, they may be erroneously interpreted as explanations of reality. Models are seductively easy to view as ‘reductionist propositions [that]…consist in expressing the phenomenon to be explained in more fundamental terms’ (Maturana 1988). This kind of interpretation is particularly appealing to policy-makers who wish to avoid the complex, value-based nature of governance decisions (Lyons 2005), and is apparent in the development of misleadingly prescriptive ‘one size fits all’ governance strategies (Ballet et al. 2007; Pannell 2008; Kenward et al. 2011). Any given approach to modelling complex systems contains within it assumptions, often hidden, about the basic dynamics of these systems, and the consequences of these assumptions for understanding and management need to be carefully considered. Indeed, top-down assumptions about (a lack of) meaningful individual and social behaviour risk severely limiting the value of the models produced: ‘the potential cost of simplification is irrelevance’ (Chattoe-Brown 2013, p. 3.3).

Prediction

One of the strongest implications of the philosophical arguments outlined above concerns predictive modelling. It is widely recognised that prediction is, at best, just one of many possible uses of models of social systems (e.g. Epstein 2008). Nevertheless, it remains one of the most commonly anticipated results of modelling exercises. Even where more nuanced objectives such as ‘projection’ are identified, scope for confusion in presentation and interpretation often persists (e.g. López et al. 2001; Veldkamp and Lambin 2001; Wu et al. 2006; Sohl et al. 2007; Pocewicz et al. 2008).

Explicit or implicit claims for predictive ability are made most often on behalf of top-down models based on statistical trends, correlations or other ostensibly positive features. Such claims are alluring for the same reasons that they are misleading—they relate to the identification of clear, strong relationships that appear consistent across time and which are therefore extrapolated into the future. Statistically, this is inappropriate and unreliable, but it is also a fundamentally unsafe approach to modelling social systems. Both Winch and Popper highlight the potential for social behaviour to depart radically from precedent for reasons that may be expressed and understood differently, if at all, by the actors involved. In some circumstances changes of global significance may be almost entirely unanticipated by expert and ostensibly well-informed observers despite a wealth of empirical data concerning the systems under investigation.

Several dramatic examples of such predictive failures exist. Recently, the global financial crash exposed weaknesses in established economic models that portrayed market economies as dynamically stable in the absence of external interventions (Stiglitz 2000). It is possible to interpret the failure of such models as resulting from an incorrect interpretation of available data, but a more fundamental critique relates to their deterministic approach to those data. John Maynard Keynes referred to the ‘uncontrollable and disobedient psychology of the business world’ contaminating the alleged law-like dynamics of market economies (Keynes 1964, p. 317). But his formal refutation of classical economic doctrine identified unpredictability as an inherent feature of market economies. Decision-making within a monetary system is invariably contingent on factors like the (potentially flawed) anticipation of future trends in demand and prices for capital and consumer goods, as well as the effects of new forms of competition and the vagaries of consumer preference.

A similar problem has been identified by some commentators in treatments of political systems. The collapse of the Soviet Union took western observers (including academics and intelligence agencies) by surprise, not because of any lack of detailed information about the system, but because the analytical framework within which it was conceptualised involved the application of a priori concepts of system-level relations. In this case, the framework was based on attempts to define totalitarianism in value-neutral terms that treated the USSR as a uniquely inflexible, monolithic social, economic and political system (Arendt 1962; Friedrich and Brzezinski 1965). As the leader of a team of CIA analysts studying the Soviet Union argued:

“It seems likely that ultimately the reason for the failure of professionals to understand the Soviet predicament lay in their indifference to the human factor. In the desire to emulate the successes of the natural scientists, whose judgments are “value free,” politology (sic) and sociology have been progressively dehumanized, constructing models and relying on statistics (many of them falsified) and, in the process, losing contact with the subject of their inquiries—the messy, contradictory, unpredictable homo sapiens.” (Cited by Jones and Silberzahn 2013, pp. 125–126).

This analysis could equally apply to top-down models of land use that describe the system as one composed of homogeneous, rational economic agents. Such models are unable to anticipate the impacts of events such as the Soviet collapse, both because they share the above interpretations of the political and economic systems that support human land use, and because they adopt a parallel interpretation of the land use system itself. Sudden transitions or ‘regime-shifts’ in land use likewise arise from the basic processes at play rather than from system-level properties, and their anticipation therefore depends upon knowledge of behavioural, social and other micro-scale factors (Weisbuch 2000; Lambin et al. 2001; Castella et al. 2005; Lambin and Meyfroidt 2010).

This is not to claim that intentionalistic interpretations could be infallible guides to such events or to social trends more generally (e.g. Jennings 2000; Kontorovich 2001). As argued above, human behaviour is governed exhaustively by the human intentions, values and conventions embedded in different social institutions and practices—irrespective of whether the resulting actions are rational, correct or self-consistent. Indeed, the philosophical arguments we have outlined call into question the very idea of causation in social systems (Pattee 2012; Hulswit 2006). This represents an insurmountable obstacle to infallible prediction under any approach. Nevertheless, informed and sensitive analysis of diverse human motivations rather than reliance on macro-scale predictive models should alert us more effectively to prospective ‘tipping points’.

Model validation and use

The lack of predictability in social systems is not only a problem for predictive modelling, but also for model validation. Generally, and especially in the case of top-down models, validation involves the assessment of agreement between model results (predictions) and historical data. If a model is able to reproduce observed changes consistently, its design and parameterisation are regarded as valid. Conversely, if a model predicts changes that are not observed, it is regarded as faulty.

In fact, any model of a social system that reliably reproduces an historical outcome should be evaluated sceptically as probably over-fitted to particular data or trends (Batty and Torrens 2005). This is especially pertinent in the context of future conditions or scenarios that have no observable historical precedent. In any case, as argued above, there is nothing inevitable about the results of social processes, and an observed outcome is only one among a wide range of possible outcomes. Complete inability to reproduce such an outcome should call model validity into question, but so should inability to produce numerous and potentially radically different counter-factual results. Similar issues relate to the otherwise legitimate use of spatial or system analogues, in which processes similar to those modelled occur in different geographical locations or systems rather than times: these extend the scope for comparing model results to observations, but involve the same risk of over-fitting to a particular outcome.

Instead, there is a clear case for validation to focus on modelled processes rather than highly variable and unpredictable emergent outcomes (e.g. McCarl and Apland 1986; Batty and Torrens 2005). However, measuring process accuracy is no easy task, and bottom-up models have many specific problems of validation that increase with the complexity of behaviour included. This is especially true where attempts are made to validate process and pattern concurrently, without accounting for the non-unique and potentially confounding relationships between the two (as also applies when assumptions of general equilibrium and actor rationality are considered validated because models containing them reproduce observed patterns) (Windrum et al. 2007). A number of approaches have been suggested to account for these difficulties (e.g. Werker and Brenner 2004; Windrum et al. 2007), but it must also be appreciated that models of human and natural systems are at some level impossible to validate, because systems are never closed or static, and quantitative characteristics can never uniquely identify a cause (e.g. Oreskes et al. 1994; Schindler and Hilborn 2015).

Given the impossibility of strict validation, a more open process assessing model performance or ‘robustness’ may be preferable (Berger 2001). Such a process should involve extended, iterative periods of calibration and exploration of uncertainty (Batty and Torrens 2005; Troost and Berger 2014). Indeed, given the various purposes of social theories and models that are entirely distinct from prediction (e.g. Epstein 2008), validation of this kind may be an end in itself. In generating ranges of possible outcomes rather than reproducing historical observations, models explore the ‘noise’ in social systems and substantially increase our understanding of those systems (Edmonds 2000; Lustick and Miodownik 2009). This is particularly relevant given the considerable potential identified above for incorporating more social theories and cognitive richness into bottom-up models.
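
A minimal sketch of such an assessment (assuming a generic stochastic model run and an illustrative observed value, neither drawn from any particular study) replaces the question of whether the model reproduces the single observed outcome with the question of whether that outcome lies within the range of outcomes the modelled processes can generate:

import random
import statistics

def run_model(seed):
    # Stand-in for a single run of any stochastic land use model:
    # returns one emergent outcome (e.g. total area converted).
    rng = random.Random(seed)
    return sum(rng.random() > 0.4 for _ in range(1000))

OBSERVED_OUTCOME = 610  # hypothetical historical observation

# Generate an ensemble of outcomes rather than a single 'prediction'.
ensemble = [run_model(seed) for seed in range(200)]

low, high = min(ensemble), max(ensemble)
print(f"Ensemble range: {low}-{high}, mean {statistics.mean(ensemble):.1f}, "
      f"sd {statistics.pstdev(ensemble):.1f}")
print("Observed outcome within ensemble range:", low <= OBSERVED_OUTCOME <= high)

# A model that returned exactly 610 on every run would 'validate' perfectly
# against history, but should be treated sceptically as over-fitted; the
# interest here lies in the range of counter-factual outcomes that the
# modelled processes can generate.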

While the philosophical imperative for process-based understanding is clear, its practical implementation is therefore considerably more complex and more dependent on specific circumstances. The increasing use of bottom-up (especially agent-based) models of land use change mirrors the earlier move away from positivism in social science. However, no model can accurately and completely describe a social system, and model validation and use must therefore focus on particular aspects of that system, while carefully recognising shortcomings and omissions. Of particular relevance is the impossibility of defining system boundaries, which, like the behaviours and relationships within the system, are fundamentally fluid and non-algorithmic (e.g. Epstein 2012). Uncertainty is an inherent property of social systems, and models are particularly useful for allowing controlled—if artificial—experiments that explore such uncertainty (Young et al. 1996; Janssen and Ostrom 2006; Lustick and Miodownik 2009; Brown et al. 2014). This may be best achieved through the complementary stages of analysis identified above, with top-down approaches used to identify broad relationships of interest, and bottom-up approaches used to investigate processes responsible for those relationships. Such an approach rests on firm philosophical foundations and so maximises our ability to understand the past and future development of systems such as human land use.

Conclusions

The land use system is fundamentally a social system, the development of which is determined by individual behaviours, conceptions and decisions, together with interactions with emergent social and institutional structures. Such a non-deterministic system is inherently unpredictable, and there is considerable scope for predictive models to mislead about possible future developments. Nevertheless, models remain highly valuable heuristic and exploratory tools that can substantially improve our understanding of the land system and its interactions with other human and natural systems.

These points are especially pertinent in the context of global climatic, social and demographic changes. It is likely that many of these changes will be sudden and/or extreme, without available historical parallels, rendering obsolete any model that is closely calibrated or validated against historical data. This is especially true for models that either neglect or constrain behaviour, as individual land managers, institutions and societies may respond to these changes in quite different ways.

In these circumstances, recognition of the central role of human intentionality in land use change is imperative. Bottom-up, process-based models are uniquely well-placed to achieve this, according closely with some of the central arguments in the philosophy of social science. Such approaches allow for more accurate, rigorous and explicit treatment of the system and its inevitable uncertainties, and can therefore substantially improve our understanding of system development. This is most (or perhaps only) true where an exploratory approach to modelling is taken that builds on the insights of social science, with top-down models used appropriately to investigate macro-scale trends and relationships for further analysis with bottom-up models.

These conclusions have clear implications for the practice of land use modelling. Models should be designed in ways that are appropriate to their objectives, with bottom-up designs used to investigate hypothesised causal relationships and potential future developments. Validation should not restrict models to the reproduction of historical changes, but should focus on process accuracy and the assessment of ranges of results. The crucial role of dynamic interactions across levels of social organisation, from the individual to formal institutions, should be accounted for. Finally, models should be used to highlight and explore uncertainties, so that practical and political decision-making can respect the fundamentally complex, social nature of the system it seeks to alter.