1 The Role of Envisioning in Creating the Future

Envisioning is a primary tool in futures studies (Garrett 1993; Slaughter 1993; Kouzes and Posner 1996; Razak 1996; Adesida and Oteh 1998). There has also been significant practical success in using envisioning and “future searches” in organizations and communities around the world (Weisbord 1992; Weisbord and Janoff 1995). This experience has shown that it is quite possible for disparate (even adversarial) groups to collaborate on envisioning a desirable future, given the right forum.

Meadows (1996) discusses why the processes of envisioning and goal setting are so important (at all levels of problem solving); why envisioning and goal setting are so underdeveloped in our society; and how we can begin to train people in the skill of envisioning, and begin to construct shared visions of a sustainable and desirable society. She tells the personal story of her own discovery of that skill and her attempts to use the process of shared envisioning in problem solving. From this experience, several general principles emerged, including:

  1. In order to effectively envision, it is necessary to focus on what one really wants, not what one will settle for. For example, the table below contrasts the kinds of things people really want with the kinds of things they often settle for.

     Really want            Settle for
     Self-esteem            Fancy car
     Serenity               Drugs
     Health                 Medicine
     Human happiness        GNP
     Permanent prosperity   Unsustainable growth

  2. A vision should be judged by the clarity of its values, not the clarity of its implementation path. Holding to the vision and being flexible about the path is often the only way to find the path.

  3. Responsible vision must acknowledge, but not be crushed by, the physical constraints of the real world.

  4. It is critical for visions to be shared, because only shared visions can be responsible.

  5. Vision must be flexible and evolving.

This chapter represents a step in the ongoing process of creating a shared vision of the future of science. It lays out a personal vision of the kind of science I would really want to see in the future, and why this new vision of science would be an improvement over what we now have. The chapter itself is an attempt to share that vision, without getting bogged down in speculation about how the vision might be achieved or impediments to its achievement. I hope the ideas presented here will generate a dialogue culminating in a shared vision of the future of science that can motivate movement in the direction of the vision.

2 Consilience Among All the Sciences

“Consilience,” according to Webster, is “a leaping together.” Biologist E. O. Wilson’s book by that title (Wilson 1998) attempted a grand synthesis, or “leaping together,” of our current state of knowledge by “linking facts and fact-based theory across disciplines to create a common groundwork for explanation” and a prediction of where we are headed. Wilson believes that “the Enlightenment thinkers of the seventeenth and eighteenth centuries got it mostly right the first time. The assumptions they made of a lawful material world, the intrinsic unity of knowledge, and the potential of indefinite human progress are the ones we still take most readily into our hearts, suffer without, and find maximally rewarding through intellectual advance. The greatest enterprise of the mind has always been and always will be the attempted linkage of the sciences and humanities. The ongoing fragmentation of knowledge and resulting chaos in philosophy are not reflections of the real world but artifacts of scholarship. The propositions of the original Enlightenment are increasingly favored by objective evidence, especially from the natural sciences” (p. 8). Wilson takes an unabashedly logical positivist and reductionist approach to science and to consilience, arguing that: “The central idea of the consilience world view is that all tangible phenomena, from the birth of stars to the workings of social institutions, are based on material processes that are ultimately reducible, however long and tortuous the sequences, to the laws of physics” (p. 266). Deconstructionists and post-modernists, in this view, are merely gadflies who are nonetheless useful for keeping the “real” scientists honest.

While there is probably broad agreement that integrating the currently fragmented sciences and humanities is a good idea, many will disagree with Wilson’s neo-Enlightenment, reductionist prescription. The problem is that the type of consilience envisioned by Wilson would not be a real “leaping together” of the natural sciences, the social sciences, and the humanities. Rather, it would be a total takeover by the natural sciences and the reductionist approach in general. There are, however, several well-known problems with the strict reductionist approach to science (Williams 1997), and several of its contradictions show up in Wilson’s view of consilience.

Wilson recognizes that the real issue in achieving consilience is one of scaling: how do we transfer understanding across the multitude of spatial and temporal scales, from quarks to the universe and everything in between? But he seems to fall back on the overly simplistic reductionist approach to doing this – the assumption that if we understand phenomena at their most detailed scale, we can simply “add up” in linear fashion from there to get the behavior at larger scales. While stating that “The greatest challenge today, not just in cell biology and ecology but in all of science, is the accurate and complete description of complex systems” (p. 85), he puts aside some of the main findings from the study of complex systems – that scaling in adaptive, living systems is neither linear nor easy, and that “emergent properties,” which are unpredictable from the smaller scale alone, are important. While acknowledging on the one hand that analysis and synthesis, reductionism and wholism, are as inseparable as breathing out and breathing in, Wilson glosses over the difficulty of actually doing the synthesis in complex adaptive systems and the necessity of studying and understanding phenomena at multiple scales simultaneously, rather than reducing them to the laws of physics.

The consilience we are really searching for, I believe, is a more balanced and pluralistic kind of “leaping together,” one in which the natural and social sciences and the humanities all contribute equitably. A science that is truly transdisciplinary and multiscale, rather than either reductionistic or wholistic, is in fact evolving, but I think it will be much more sophisticated and multifaceted in its view of the complex world in which we live, the nature of “truth,” and the potential for human “progress” than the Enlightenment thinkers of the seventeenth and eighteenth centuries could ever have imagined. The remainder of this chapter attempts to flesh out what this new transdisciplinary future for the reintegrated natural and social sciences might look like.

3 Reestablishing the Balance Between Synthesis and Analysis

Science, as an activity, requires a balance between two quite dissimilar activities. One is analysis – the ability to break down a problem into its component parts and understand how they function. The second is synthesis – the ability to put the pieces back together in a creative way in order to solve problems. In most of our current university research and education, these capabilities are not developed in a balanced, integrated way. For example, both natural and social science research and education focus almost exclusively on analysis, while the arts and engineering focus on synthesis. But, as mentioned above, analysis and synthesis, reductionism and wholism, are as inseparable as breathing out and breathing in. It is no wonder that our current approach to science is so dysfunctional. We have been holding our breath for a long time!

In the future, the need for a healthy balance between analysis and synthesis will be recognized at all levels of science education and research. One can already see the beginnings of this development. For example, the National Center for Ecological Analysis and Synthesis (NCEAS – http://www.nceas.ucsb.edu/) was established in response to the recognition in the ecological community that the activity of synthesis was both essential and vastly under-supported. Ecologists recognized that they could only obtain funding and professional recognition for collecting new data. They never had the time, resources, or professional incentives to figure out what their data meant, or how it could be effectively used to build a broader understanding of ecosystems or to manage human interactions with them more effectively. The response to NCEAS so far has been overwhelmingly positive, and I expect that synthesis, as a necessary component of the scientific process, will eventually receive its fair share of resources and rewards. Funding for synthesis activities will become available from the major government science funding agencies on an equal footing with analysis activities. For example, NSF has recently established the National Socio-Environmental SYNthesis Center (SESYNC – http://www.sesync.org/) aimed at broadening synthesis activity to better encompass the social sciences and humanities.

In the universities, the curriculum will be restructured to achieve a better balance between synthesis and analysis. More courses will be “problem-based”: workshops aimed at collaboratively addressing real problems via creative synthesis. Research has conclusively shown that “problem-based” curricula are very effective not only at supporting synthesis, but also at developing better analytical skills, since students are much more motivated to learn analytical tools if they have a specific problem to solve (Grigg 1995; Scott and Oulton 1999; Wheeler and Lewis 1997). There are already a few entire universities structured around the model of problem-based learning, including Maastricht University in the Netherlands and the University of Aalborg in Denmark.

In addition, the capabilities of current and developing electronic communication technology will be more effectively employed in university education in the future. The market will soon be flooded with courses delivered over the Internet, but with little coordination among them and little recognition of the importance of integrating synthesis and communication into the educational process. The university of the future will take full advantage of the Internet, but it will also take much better advantage of local face-to-face interactions on campus.

Analysis courses are most amenable to delivery over the web. They could therefore be produced by the best faculty from around the world and be continuously updated and improved. Grading would be internalized in the course, but testing would be proctored by the local host universities. This use of the Internet to provide most basic “tools” courses would free faculty to participate in synthesis courses, rather than repeating the same basic tools courses over and over at all campuses. Synthesis courses would be face-to-face “problem-based” studio or workshop courses focused on interactively solving real, current problems in the field (using the tools from the analysis courses or developing new tools in the process). These courses would be offered at local campuses or at the location of the problem itself, with quality control via the requirement for peer review of the results. Grading would be part of the peer review process and therefore would be performed external to the courses themselves.

This restructuring of research funding and the universities will also break down the strict disciplinary divisions that now exist. In the future, disciplinary boundaries will be as porous as many state and national boundaries are today. Likewise, one’s disciplinary background will be noted much as one’s place of birth is noted today – an interesting fact about one’s path through life, but not a central defining characteristic. By focusing on problems and synthesis (rather than tools) universities will reclaim their role in society as the font of knowledge and wisdom (rather than merely technical expertise).

4 A Pragmatic Modeling Philosophy

Practical problem solving requires the integration of three elements: (1) creation of a shared vision of both how the world works and how we would like the world to be; (2) systematic analysis appropriate to and consistent with the vision; and (3) implementation appropriate to the vision. Scientists generally focus on only the second of these steps, but integrating all three is essential to both good science and effective management. “Subjective” values enter in the “vision” element, both in terms of the formation of broad social goals and in the creation of a “pre-analytic vision” which necessarily precedes any form of scientific analysis. Because of this need for vision, completely “objective” scientific analysis is impossible. In the words of Joseph Schumpeter (1954, p. 41):

“In practice we all start our own research from the work of our predecessors, that is, we hardly ever start from scratch. But suppose we did start from scratch, what are the steps we should have to take? Obviously, in order to be able to posit to ourselves any problems at all, we should first have to visualize a distinct set of coherent phenomena as a worthwhile object of our analytic effort. In other words, analytic effort is of necessity preceded by a preanalytic cognitive act that supplies the raw material for the analytic effort. In this book, this preanalytic cognitive act will be called Vision. It is interesting to note that vision of this kind not only must precede historically the emergence of analytic effort in any field, but also may reenter the history of every established science each time somebody teaches us to see things in a light of which the source is not to be found in the facts, methods, and results of the preexisting state of the science.”

Nevertheless, it is possible to separate the process into the more subjective (or normative) envisioning component, and the more systematic, less subjective analysis component (which is based on the vision). “Good science” can do no better than to be clear about its underlying pre-analytic vision, and to do analysis that is consistent with that vision.

The task would be simpler if the vision of science were static and unchanging. But as the quote from Schumpeter above makes clear, this vision is itself changing and evolving as we learn more. This does not invalidate science as some deconstructionists would have it. Quite the contrary, by being explicit about its underlying pre-analytic vision, science can enhance its honesty and thereby its credibility. This credibility is a result of honest exposure and discussion of the underlying process and its inherent subjective elements, and a constant pragmatic testing of the results against real world problems, rather than by appeal to a non-existent objectivity.

The pre-analytic vision of science is changing from the “logical positivist” view (which holds that science can discover ultimate “truth” by falsification of hypotheses) to a more pragmatic view that recognizes that we do not have access to any ultimate, universal truths, but only to useful abstract representations (models) of small parts of the world. Science, in both the logical positivist and in this new “pragmatic modeling” vision, works by building models and testing them. But the new vision recognizes that the tests are rarely, if ever, conclusive (especially in the life sciences and the social sciences), that the models can only apply to a limited part of the real world, and that the ultimate goal is therefore not “truth” but quality and utility. In the words of W. Edwards Deming, “All models are wrong, but some models are useful” (McCoy 1994).

The goal of science is then the creation of useful models whose utility and quality can be tested against real world applications. The criteria by which one judges the utility and quality of models are themselves social constructs that evolve over time. There is, however, a fairly broad and consistent consensus in the peer community of scientists about what these criteria are. They include: (1) testability; (2) repeatability; (3) predictability; and (4) simplicity (i.e., Occam’s razor – the model should be as simple as possible, but no simpler!). But, because of the nature of real world problems, there are many applications for which some of these criteria are difficult or impossible to apply. These applications may nevertheless still be judged as “good science”. For example, some purely theoretical models are not directly “testable” – but they may provide a fertile ground for thought and debate and lead to more explicit models which are testable. Likewise, field studies of watersheds are not, strictly speaking, repeatable because no two watersheds are identical. But there is much we can learn from field studies that can be applied to other watersheds and tested against the other criteria of predictability and simplicity. How simple a model can be depends on the questions being asked. If we ask a more complex or more detailed question, the model will probably have to be more complex and detailed. Complex problems require “complex hypotheses” in the form of models. These complex models are always “false” in the sense that they can never match reality exactly. As science progresses and the range of applications expands, the criteria by which utility and quality are judged must also change and adapt to the changing applications.

5 A Multiscale Approach to Science

In understanding and modeling ecological and economic systems exhibiting considerable biocomplexity, the issues of scale and hierarchy are central (Ehleringer and Field 1993; O’Neill et al. 1989). The term “scale” in this context refers to both the resolution (spatial grain size, time step, or degree of complexity of the model) and extent (in time, space, and number of components modeled) of the analysis. The process of “scaling” refers to the application of information or models developed at one scale to problems at other scales. The scale dependence of predictions is increasingly recognized in a broad range of ecological studies, including: landscape ecology (Meentemeyer and Box 1987), physiological ecology (Jarvis and McNaughton 1986), population interactions (Addicott et al. 1987), paleoecology (Delcourt et al. 1983), freshwater ecology (Carpenter and Kitchell 1993), estuarine ecology (Livingston 1987), meteorology and climatology (Steyn et al. 1981) and global change (Rosswall et al. 1988). However, “scaling rules” applicable to biocomplex systems have not yet been adequately developed, and limits to extrapolation have been difficult to identify (Turner et al. 1989). In many of these disciplines primary information and measurements are generally collected at relatively small scales (e.g., small plots in ecology, individuals or single firms in economics) and that information is then often used to build models and make inferences at radically different scales (e.g., regional, national, or global). The process of scaling is directly tied to the problem of aggregation, which in complex, non-linear, discontinuous systems (like ecological and economic systems) is far from a trivial problem.

5.1 Aggregation

Aggregation error is inevitable as attempts are made to represent n-dimensional systems with less than n state variables, much like the statistical difficulties associated with sampling a variable population (Bartel et al. 1988; Gardner et al. 1982; Ijiri 1971). Cale et al. (1983) argued that in the absence of linearity and constant proportionality between variables – both of which are rare in ecological systems – aggregation error is inevitable. Rastetter et al. (1992) give a detailed example of scaling a relationship for individual leaf photosynthesis as a function of radiation and leaf efficiency to estimate the productivity of the entire forest canopy. Because of non-linear variability in the way individual leaves process light energy, one cannot simply use the fine scale relationship between photosynthesis and radiation and efficiency along with the mean values for the entire forest to represent total forest productivity without introducing significant aggregation error. Therefore, strategies to minimize aggregation error are necessary.
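To make the aggregation problem concrete, here is a minimal sketch in Python of the kind of error described above. The saturating light-response curve, its parameters, and the radiation distribution are all hypothetical stand-ins (this is not Rastetter et al.’s model); the point is only that applying a nonlinear fine-scale relationship to mean inputs misstates the aggregate.

```python
# A minimal sketch of aggregation error: for a nonlinear (here concave)
# fine-scale relationship, f(mean x) differs from mean f(x) (Jensen's
# inequality). All functions and parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

def leaf_photosynthesis(radiation, p_max=20.0, k=300.0):
    """Hypothetical saturating (Michaelis-Menten-type) light-response curve."""
    return p_max * radiation / (k + radiation)

# Radiation varies nonuniformly across 10,000 individual leaves in a canopy.
radiation = rng.gamma(shape=2.0, scale=250.0, size=10_000)

# Correct aggregate: average the fine-scale responses over all leaves.
true_canopy = leaf_photosynthesis(radiation).mean()

# Naive aggregate: apply the fine-scale relationship to the mean input.
naive_canopy = leaf_photosynthesis(radiation.mean())

print(f"mean of f(x): {true_canopy:.2f}")   # correct canopy average
print(f"f(mean x):    {naive_canopy:.2f}")  # overestimates, since f is concave
```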

Jarvis and McNaughton (1986) explain the source of the aggregation error described by Rastetter et al. (1992) by highlighting the discrepancy in transpiration control theory between meteorologists and plant physiologists. The meteorologists believe that weather patterns determine transpiration and have developed a series of equations that successfully calculate regional transpiration rates. The plant physiologists believe in stomatal control of transpiration and have demonstrated this with leaf chamber experiments in the field and laboratory. Therefore, it seems that different processes control transpiration at different scales, and aggregation from a single leaf to regional vegetation is impossible without accounting for this scale-dependent variability in transpiration control. One must somehow understand and embed this variability into the coarse scale.

Turner et al. (1989) list four steps for predicting across scales:

  1. identify the spatial and temporal scale of the process to be studied;

  2. understand the way in which controlling factors (constraints) vary with scale;

  3. develop the appropriate methods to translate predictions from one scale to another; and

  4. empirically test methods and predictions across multiple scales.

Rastetter et al. (1992) describe and compare four basic methods for scaling that are applicable to complex systems:

  1. partial transformations of the fine scale relationships to coarse scale using a statistical expectations operator;

  2. moment expansions as an approximation to method 1;

  3. partitioning or subdividing the system into smaller, more homogeneous parts (see the resolution discussion further on); and

  4. calibration of the fine scale relationships to coarse scale data.

They go on to suggest a combination of these four methods as the most effective overall approach to scaling in complex systems.
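As a concrete illustration of methods 1 and 2 in the list above, the sketch below compares the exact statistical expectation of a nonlinear fine-scale relationship with a second-order moment expansion of it. The light-response function and input distribution are hypothetical, carried over from the aggregation example earlier; this is a sketch of the general technique, not Rastetter et al.’s code.

```python
# A minimal sketch of scaling by moment expansion: approximate the
# coarse-scale expectation E[f(X)] by f(mu) + 0.5 * f''(mu) * var(X),
# rather than using the naive aggregate f(mu). All numbers hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def f(x, p_max=20.0, k=300.0):
    """Hypothetical saturating fine-scale relationship."""
    return p_max * x / (k + x)

def f2(x, p_max=20.0, k=300.0):
    """Analytic second derivative of f."""
    return -2.0 * p_max * k / (k + x) ** 3

x = rng.gamma(shape=2.0, scale=250.0, size=100_000)
mu, var = x.mean(), x.var()

exact = f(x).mean()                  # "true" coarse-scale expectation (method 1)
naive = f(mu)                        # naive aggregation, for comparison
moment = f(mu) + 0.5 * f2(mu) * var  # second-order moment expansion (method 2)

print(f"exact:  {exact:.3f}")
print(f"naive:  {naive:.3f}")
print(f"moment: {moment:.3f}")       # much closer to exact than naive
```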

5.2 Hierarchy Theory

Hierarchy theory provides an essential conceptual base for building coherent models of complex systems (Allen and Starr 1982; O’Neill et al. 1986; Salthe 1985; Gibson et al. 2000). Hierarchy is an organizational principle that yields models of nature that are partitioned into nested levels that share similar time and space scales. In a constitutive hierarchy, an entity at any level is part of an entity at a higher level and contains entities at a lower level. In an exclusive hierarchy, there is no containment relation between entities, and levels are distinguished by other criteria, e.g. trophic levels. Entities are to a certain extent insulated from entities at other levels in the sense that, as a rule, they do not directly interact; rather they provide mutual constraints. For example, individual organisms see the ecosystem they inhabit as a slowly changing set of external (environmental) constraints and the complex dynamics of component cells as a set of internal (behavioral) constraints.

From the scaling perspective, hierarchy theory is a tool for partitioning complex systems in order to minimize aggregation error (Thiel 1967; Hirata and Ulanowicz 1985). The most important aspect of hierarchy theory is that an ecological system’s behavior is limited both by the potential behavior of its components (biotic potential) and by the environmental constraints imposed by higher levels (O’Neill et al. 1989). A flock of birds that can fly only as fast as its slowest member, or a forested landscape that cannot fix atmospheric nitrogen if specific bacteria are not present, are examples of limitation by biotic potential. Animal populations limited by available food supply and plant communities limited by nutrient remineralization are examples of limits imposed by environmental constraints. O’Neill et al. (1989) use hierarchy theory to define a ‘constraint envelope’ based upon the physical, chemical and biological conditions within which a system must operate. They argue that hierarchy theory and the resulting ‘constraint envelope’ enhance predictive power: although we may not be able to predict exactly what place a system occupies within the constraint envelope, we can state with confidence that it will be operating within that envelope. A minimal sketch of this idea follows.
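The sketch below is my own Liebig-style reading of the ‘constraint envelope’ idea, with hypothetical numbers: the realized behavior of a system is bounded by whichever of its biotic potential and environmental constraints is most limiting, even if its exact position inside the envelope is unpredictable.

```python
# A minimal sketch (all values hypothetical) of a 'constraint envelope':
# the realized process rate is bounded above by the most limiting of the
# system's biotic potential and the constraints imposed from higher levels.
def constraint_envelope(biotic_potential, environmental_constraints):
    """Upper bound on a process rate: the minimum of all operating limits."""
    return min(biotic_potential, *environmental_constraints)

# Hypothetical example: potential population growth limited by food and space.
max_growth = constraint_envelope(
    biotic_potential=1.8,                  # intrinsic growth rate
    environmental_constraints=[1.2, 2.5],  # food-limited and space-limited rates
)
# We can predict the system operates somewhere within [0, max_growth],
# without predicting its exact position inside the envelope.
print(max_growth)  # 1.2 -- food is the binding constraint here
```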

Viewing biocomplexity through the lens of hierarchy theory should serve to illuminate the general principles of life systems that occur at each level of the hierarchy. While every level will necessarily have unique characteristics, it is possible to define forms and processes that are isomorphic across levels (as are many laws of nature). Troncale (1985) has explored some of these isomorphisms in the context of general system theory. In the context of scaling theory, we can seek isomorphisms that assist in the vertical integration of scales. This search feeds into the larger question of scaling, and how to further develop the four basic methods of scaling mentioned above for application to complex systems.

5.3 Fractals and Chaos

One well-known isomorphism is the “self-similarity” between scales exhibited by fractal structures (Mandelbrot 1977), which may provide another approach to the problem of scaling. This self-similarity implies a regular and predictable relationship between the scale of measurement (here meaning the resolution of measurement) and the measured phenomenon. For example, the regular relationship between the measured length of a coastline and the resolution at which it is measured is a fundamental, empirically observable one. It can be summarized in the following equation:

$$ L = k \cdot s^{(1-D)} $$
(1.1)

where:

  • L = the length of the coastline or other “fractal” boundary

  • s = the size of the fundamental unit of measure or the resolution of the measurement

  • k = a scaling constant

  • D = the fractal dimension
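The sketch below generates lengths from Eq. 1.1 with illustrative (arbitrary) values of k and D, then shows how D can be recovered from a set of (s, L) measurements by linear regression in log-log space, since log L = log k + (1 − D) log s.

```python
# A minimal sketch of Eq. 1.1: measured length L grows as the ruler size s
# shrinks, and the fractal dimension D can be recovered from (s, L) pairs
# by a log-log fit. Parameter values are illustrative, not real coastline data.
import numpy as np

k, D = 50.0, 1.25          # illustrative scaling constant and fractal dimension
s = np.array([100.0, 50.0, 25.0, 12.5, 6.25])   # ruler sizes (resolutions)

L = k * s ** (1 - D)       # Eq. 1.1: measured "coastline" length at each s

# Recover D from the measurements: log L = log k + (1 - D) * log s.
slope, intercept = np.polyfit(np.log(s), np.log(L), 1)
print(f"estimated D = {1 - slope:.2f}")   # ~1.25
```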

Primary questions concern the range of applicability of fractals and chaotic systems dynamics to the practical problems of modeling ecological economic systems. The influence of scale, resolution, and hierarchy on the mix of behaviors one observes in systems has not been fully investigated, and this remains a key question for developing coherent models of complex ecological economic systems.

5.4 Resolution and Predictability

The significant effects of nonlinearities raise some interesting questions about the influence of resolution (including spatial, temporal, and component resolution) on the performance of models, and in particular on their predictability. Costanza and Maxwell (1994) analyzed the relationship between resolution and predictability and found that while increasing resolution provides more descriptive information about the patterns in data, it also increases the difficulty of accurately modeling those patterns. There may be limits to the predictability of natural phenomena at particular resolutions, and “fractal-like” rules that determine how both “data” and “model” predictability change with resolution.

Some limited testing of these ideas was done by resampling land use map data sets at several different spatial resolutions and measuring predictability at each. Colwell (1974) used categorical data to define predictability as the reduction in uncertainty (scaled on a 0–1 range) about one variable given knowledge of others. One can define spatial auto-predictability (Pa) as the reduction in uncertainty about the state of a pixel in a scene, given knowledge of the state of adjacent pixels in that scene, and spatial cross-predictability (Pc) as the reduction in uncertainty about the state of a pixel in a scene, given knowledge of the state of corresponding pixels in other scenes. Pa is a measure of the internal pattern in the data, while Pc is a measure of the ability of some other model to represent that pattern.
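The following is one way (my own construction with hypothetical map data, not code from the studies cited) to compute a Colwell-style auto-predictability Pa: the fractional reduction in uncertainty about a pixel’s category given the category of its right-hand neighbor, i.e. mutual information scaled by the pixel entropy.

```python
# A minimal sketch of spatial auto-predictability:
# Pa = (H(X) - H(X | neighbor)) / H(X), scaled to [0, 1].
# The categorical "land use map" below is hypothetical random data.
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def auto_predictability(scene):
    """Uncertainty reduction about a pixel given its right-hand neighbor."""
    x = scene[:, :-1].ravel()   # each pixel
    n = scene[:, 1:].ravel()    # its right-hand neighbor
    cats = np.unique(scene)
    joint = np.array([[np.sum((x == a) & (n == b)) for b in cats]
                      for a in cats], dtype=float)
    h_x = entropy(joint.sum(axis=1))            # H(X)
    h_n = entropy(joint.sum(axis=0))            # H(N)
    h_x_given_n = entropy(joint.ravel()) - h_n  # H(X|N) = H(X,N) - H(N)
    return (h_x - h_x_given_n) / h_x

rng = np.random.default_rng(1)
scene = rng.integers(0, 3, size=(6, 6))         # hypothetical 3-category map
print(f"Pa = {auto_predictability(scene):.3f}") # near 0 for random data
```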

A strong linear relationship was found between the log of Pa and the log of resolution (measured as the number of pixels per square kilometer). This fractal-like characteristic of “self-similarity” with decreasing resolution implies that predictability, like the length of a coastline, may be best described using a unitless dimension that summarizes how it changes with resolution. One can define a “fractal predictability dimension” (DP) in a manner analogous to the normal fractal dimension (Mandelbrot 1977, 1983). The resulting DP allows convenient scaling of predictability measurements taken at one resolution to others.
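By analogy with Eq. 1.1, one plausible functional form is Pa = c · r^(1−DP), which makes DP recoverable by the same log-log fit; this form and the measured values below are my assumptions for illustration, not results from Costanza and Maxwell (1994).

```python
# A minimal sketch of fitting a "fractal predictability dimension" DP from
# auto-predictability measured at several resolutions, assuming (by analogy
# with Eq. 1.1) Pa = c * r^(1 - DP). All values are hypothetical.
import numpy as np

resolution = np.array([1.0, 4.0, 16.0, 64.0])   # pixels per square kilometer
pa = np.array([0.45, 0.52, 0.61, 0.70])         # hypothetical Pa at each resolution

# Linear fit in log-log space: log Pa = log c + (1 - DP) * log r.
slope, _ = np.polyfit(np.log(resolution), np.log(pa), 1)
dp = 1 - slope
print(f"DP = {dp:.2f}")

def rescale_pa(pa_measured, r_from, r_to, dp):
    """Scale a Pa measured at one resolution to another using DP."""
    return pa_measured * (r_to / r_from) ** (1 - dp)
```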

Cross-predictability (Pc) can be used for pattern matching and testing the fit between scenes. In this sense it relates to the predictability of models versus the internal predictability in the data revealed by Pa. While Pa generally increases with increasing resolution (because more information is being included), Pc generally falls or remains stable (because it is easier to model aggregate results than fine-grained ones). Thus one can define an optimal resolution for a particular modeling problem: one that balances the benefit of increasing data predictability (Pa) at higher resolution against the cost of decreasing model predictability (Pc). Figure 1.1 shows this relationship in generalized form.

Fig. 1.1 Relationship between resolution and predictability for data and models (From Costanza and Maxwell 1994)

6 Cultural and Biological Co-evolution

In modeling the dynamics of complex systems it is impossible to ignore the discontinuities and surprises that often characterize these systems, and the fact that they operate far from equilibrium in a state of constant adaptation to changing conditions (Rosser 1991, 1992; Holland and Miller 1991; Lines 1990; Kay 1991). The paradigm of evolution has been broadly applied to both ecological and economic systems (Boulding 1981; Arthur 1988; Lindgren 1991; Maxwell and Costanza 1993) as a way of formalizing understanding of adaptation and learning behaviors in non-equilibrium dynamic systems. The general evolutionary paradigm posits a mechanism for adaptation and learning in complex systems at any scale using three basic interacting processes: (1) information storage and transmission; (2) generation of new alternatives; and (3) selection of superior alternatives according to some performance criteria.
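The sketch below makes this three-process template concrete as a toy evolutionary algorithm; the bit-string “genome” and the count-of-ones fitness function are arbitrary stand-ins for any storage medium and performance criterion, not a model of any particular ecological or economic system.

```python
# A minimal sketch of the three interacting evolutionary processes:
# (1) information storage and transmission, (2) generation of new
# alternatives, (3) selection by a performance criterion. Toy example.
import random

random.seed(7)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 50

def fitness(genome):
    return sum(genome)          # (3) performance criterion: count of 1s

def mutate(genome, rate=0.05):
    # (2) generation of new alternatives: random bit flips.
    return [g ^ (random.random() < rate) for g in genome]

# (1) information storage: a population of bit-string "genomes".
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the better-performing half of the population ...
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # ... and refill with mutated copies (transmission with variation).
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(max(fitness(g) for g in population))   # approaches GENOME_LEN
```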

The evolutionary paradigm is different from the conventional optimization paradigm popular in economics in at least four important respects (Arthur 1988): (1) evolution is path dependent, meaning that the detailed history and dynamics of the system are important; (2) evolution can achieve multiple equilibria; (3) there is no guarantee that optimal efficiency or any other optimal performance will be achieved, due in part to path dependence and sensitivity to perturbations; and (4) “lock-in” (survival of the first rather than survival of the fittest) is possible under conditions of increasing returns. While, as Arthur (1988) notes, “conventional economic theory is built largely on the assumption of diminishing returns on the margin (local negative feedbacks),” life itself can be characterized as a positive feedback, self-reinforcing, autocatalytic process (Kay 1991; Günther and Folke 1993), and we should expect increasing returns, lock-in, path dependence, multiple equilibria, and sub-optimal efficiency to be the rule rather than the exception in economic and ecological systems.

6.1 Cultural vs. Genetic Evolution

In biological evolution, the information storage medium is the genes, the generation of new alternatives is by sexual recombination or genetic mutation, and selection is performed by nature according to a criterion of “fitness” based on reproductive success. The same process of change occurs in ecological, economic, and cultural systems, but the elements on which the process works are different. For example, in cultural evolution the storage medium is the culture (the oral tradition, books, film, or other media for passing on behavioral norms), the generation of new alternatives is through innovation by individual members or groups in the culture, and selection is again based on the reproductive success of the alternatives generated, but reproduction is carried out by the spread and copying of the behavior through the culture rather than by biological reproduction. One may also talk of “economic” evolution, a subset of cultural evolution dealing with the generation, storage, and selection of alternative ways of producing things and allocating that which is produced. The field of “evolutionary economics” has grown up in the last decade or so based on these ideas (cf. Day and Groves 1975; Day 1989). Evolutionary theories in economics have already been successfully applied to problems of technical change, to the development of new institutions, and to the evolution of means of payment.

For large, slow-growing animals like humans, genetic evolution has a built-in bias towards the long-run. Changing the genetic structure of a species requires that characteristics (phenotypes) be selected and accumulated by differential reproductive success. Behaviors learned or acquired during the lifetime of an individual cannot be passed on genetically. Genetic evolution is therefore usually a relatively slow process requiring many generations to significantly alter a species’ physical and biological characteristics.

Cultural evolution is potentially much faster. Technical change is perhaps the most important and fastest-evolving cultural process. Learned behaviors that are successful, at least in the short term, can be almost immediately spread to other members of the culture and passed on in the oral, written, or video record. The increased speed of adaptation that this process allows has been largely responsible for Homo sapiens’ amazing success at appropriating the resources of the planet. Vitousek et al. (1986) estimate that humans now directly control from 25 to 40 % of the total primary production of the planet’s biosphere, and this is beginning to have significant effects on the biosphere, including changes in global climate and in the planet’s protective ozone shield.

Both the benefits and the costs of this rapid cultural evolution are potentially significant. Like a car that has increased speed, humans are in more danger of running off the road or over a cliff. Cultural evolution lacks the built-in long-run bias of genetic evolution and is susceptible to being led by its hyper-efficient short-run adaptability over a cliff into the abyss.

Another major difference between cultural and genetic evolution may serve as a countervailing bias, however. As Arrow (1962) has pointed out, cultural and economic evolution, unlike genetic evolution, can at least to some extent employ foresight. If society can see the cliff, perhaps it can be avoided.

While market forces drive adaptive mechanisms (Kaitala and Pohjola 1988), the systems that evolve are not necessarily optimal, so the question remains: what external influences are needed, and when should they be applied, in order to achieve an optimal economic system via evolutionary adaptation? The challenge faced by ecological economic systems modelers is first to apply the models to gain foresight, and then to respond to and manage the system feedbacks in a way that helps avoid any foreseen cliffs (Berkes and Folke 1994). Devising policy instruments and identifying incentives that can translate this foresight into effective modifications of the short-run evolutionary dynamics is a further challenge (Costanza 1987).

What is really needed is a coherent and consistent theory of genetic and cultural co-evolution. These two types of evolution interact with each other in complex and subtle ways, each determining and changing the landscape for the other.

6.2 Evolutionary Criteria

A critical problem in applying the evolutionary paradigm in dynamic models is defining the selection criteria a priori. In its basic form, the theory of evolution is circular and descriptive (Holling 1987): the species or cultural institutions or economic activities that survive are those that are most successful at reproducing themselves, but we only know which ones were more successful after the fact. To use the evolutionary paradigm in modeling, we require a quantitative measure of fitness (or, more generally, performance) in order to drive the selection process.

Several candidates have been proposed for this function in various systems, ranging from expected economic utility to thermodynamic potential. Thermodynamic potential is interesting as a performance criterion in complex systems because even very simple chemical systems can be seen to evolve complex non-equilibrium structures using this criterion (Prigogine 1972; Nicolis and Prigogine 1977, 1989), and because all systems are (at minimum) thermodynamic systems (in addition to their other characteristics), so that thermodynamic constraints and principles are applicable across both ecological and economic systems (Eriksson 1991).

This application of the evolutionary paradigm to thermodynamic systems has led to the development of far-from-equilibrium thermodynamics and the concept of dissipative structures (Prigogine 1972). An important research question is to determine the range of applicability of these principles and their appropriate use in modeling ecological economic systems.

Many dissipative structures follow complicated transient motions. Schneider and Kay (1994) propose a way to analyze these chaotic behaviors and note that “Away from equilibrium, highly ordered stable complex systems can emerge, develop and grow at the expense of more disorder at higher levels in the system’s hierarchy.” It has been suggested that the integrity of far-from-equilibrium systems has to do with the ability of the system to attain and maintain its (set of) optimum operating point(s) (Kay 1991). The optimum operating point(s) reflect a state where self-organizing thermodynamic forces and disorganizing forces of environmental change are balanced. This idea has been elaborated and described as “evolution at the edge of chaos” by Kauffman and Johnson (1991).

The concept that a system may evolve through a sequence of stable and unstable stages leading to the formation of new structures seems well suited to ecological economic systems. For example, Gallopin (1989) stresses that to understand the processes of economic impoverishment “…The focus must necessarily shift from the static concept of poverty to the dynamic processes of impoverishment and sustainable development within a context of permanent change. The dimensions of poverty cannot any longer be reduced to only the economic or material conditions of living; the capacity to respond to changes, and the vulnerability of the social groups and ecological systems to change become central.” In a similar fashion Robinson (1991) argues that sustainability calls for maintenance of the dynamic capacity to respond adaptively, which implies that we should focus more on basic natural and social processes, than on the particular forms these processes take at any time. Berkes and Folke (1994) have discussed the capacity to respond to changes in ecological economic systems, in terms of institution building, collective actions, cooperation, and social learning. These might be some of the ways to enhance the capacity for resilience (increase the capacity to recover from disturbance) in interconnected ecological economic systems.

As discussed earlier, cultural evolution also has the added element of human foresight. To a certain extent, we can design the future that we want by appropriately setting goals and envisioning desired outcomes.

7 Creating a Shared Vision of a Desirable and Sustainable Future

Probably the most challenging task facing humanity today is the creation of a shared vision of a sustainable and desirable society, one that can provide permanent prosperity within the biophysical constraints of the real world in a way that is fair and equitable to all of humanity, to other species, and to future generations. This vision does not now exist, although the seeds are there. We all have our own private visions of the world we really want and we need to overcome our fears and skepticism and begin to share these visions and build on them – until we have built a vision of the world we want.

We need to fill in the details of our desired future in order to make it tangible enough to motivate people across the spectrum to work toward achieving it. Nagpal and Foltz (1995) have begun this task by commissioning a range of individual visions of a sustainable world from around the globe. They laid out the following challenge for each of their “envisionaries”:

Individuals were asked not to try to predict what lies ahead, but rather to imagine a positive future for their respective region, defined in any way they chose – village, group of villages, nation, group of nations, or continent. We asked only that people remain within the bounds of plausibility, and set no other restrictive guidelines.

The results were quite revealing. While these independent visions were difficult to generalize, they did seem to share at least one important point. The “default” western vision of continued material growth was not what people envisioned as part of their “positive future.” They envisioned a future with “enough” material consumption, but where the focus has shifted to maintaining high quality communities and environments, education, culturally rewarding full employment, and peace.

These results are consistent with surveys about the degree of desirability that people expressed for four hypothetical visions of the future in the year 2100 (Costanza 2000). The four visions derive from two basic world views, whose characteristics are laid out in Fig. 1.2. These world views have been described in many ways (Bossel 1996), but an important distinction has to do with one’s degree of faith in technological progress (Costanza 1989). The “technological optimist” world view is one in which technological progress is assumed to be able to solve all current and future social problems. It is a vision of continued expansion of humans and their dominion over nature. This is the “default” vision in our current western society, one that represents continuation of current trends into the indefinite future. It is the “taker” culture as described so eloquently by Daniel Quinn in “Ishmael” (1992).

Fig. 1.2 Payoff matrix for technological optimism vs. skepticism

There are two versions of this vision, however: one in which the underlying assumptions on which it is based turn out to be true in the real world, and one in which those assumptions turn out to be false, as shown in Fig. 1.2. The positive version of the “technological optimist” vision was called “Star Trek,” after the popular TV series which is its most articulate and vividly fleshed-out manifestation. The negative version of the “technological optimist” vision was called “Mad Max,” after the popular movie of several years ago that embodies many aspects of this vision gone bad.

The “technological skeptic” vision is one that depends much less on technological change and more on social and community development. It is not in any sense “anti-technology.” But it does not assume that technological change can solve all problems. In fact, it assumes that some technologies may create as many problems as they solve and that the key is to view technology as the servant of larger social goals rather than the driving force. The version of this vision that corresponds to the skeptics being right about the nature of the world was called “Ecotopia” after the semi-popular book of the late 1970s (Callenbach 1975). If the optimists turn out to be right about the real state of the world, the “big government” vision comes to pass – Ronald Reagan’s worst nightmare of overly protective government policies getting in the way of the free market.

Each of these future visions was described as a narrative from the perspective of the year 2100 (Costanza 2000). A total of 418 respondents were read each of the four visions. They were asked: “For each vision, I’d like you to first state, on a scale of −10 to +10, using the scale provided, how comfortable you would be living in the world described. How desirable do you find such a world? I’m not asking you to vote for one vision over the others. Consider each vision independently, and just state how desirable (or undesirable) you would find it if you happened to find yourself there.” They were also asked to give their age, gender, and household income range on the survey form. The surveys were conducted with groups from both the US and Sweden. The results (mean ± standard deviation) are shown in Table 1.1 for each of these groups and pooled.

Table 1.1 Results of a survey of the desirability of each of the four visions, on a scale of −10 (least desirable) to +10 (most desirable), for self-selected groups of Americans and Swedes

Frequency distributions of the results are plotted in Fig. 1.3. The majority of those surveyed found the Star Trek vision positive (mean of +2.48 on a scale from −10 to +10). Given that it represents a logical extension of the currently dominant world view and culture, it is interesting that this vision was not rated higher. I had expected it to be, and this result may indicate the deep ambivalence many people have about the direction society seems to be headed. The frequency plot (and the high standard deviation) also shows this ambivalence toward Star Trek. The responses span the range from +10 to −10, with only a weak preponderance toward the positive side of the scale. This result held for both the American and Swedish subgroups.

Fig. 1.3 Frequency distributions of the responses to the visions survey

Those surveyed found the Mad Max vision very negative at −8.12 (only about 3 % of participants rated this vision positive). This was as expected. The Americans seemed a bit less averse to Mad Max (−7.78) than the Swedes (−9.12), and with a larger standard deviation.

The Big Government vision was rated on average just positive at 0.97. Many found it appealing, but some found it abhorrent (probably because of the limits on individual freedom implied). Here there were significant differences between the Americans and Swedes, with the Swedes (+2.32 ± 3.48) being much more favorably disposed to Big Government and with a smaller standard deviation than the Americans (+0.54 ± 4.44). This also was as expected, given the cultural differences in attitudes toward government in America and Sweden. Swedes rated Big Government almost as highly as Star Trek.

Finally, most of those surveyed found the Ecotopia vision very positive (at +5.81): some wildly so, some only mildly so, but very few (only about 7 % of those surveyed) expressed a negative reaction to such a world. Swedes rated Ecotopia significantly higher than Americans, also as might be expected given cultural differences.

Some other interesting patterns emerged from the survey. All of the visions had large standard deviations, but (especially if one looks at the frequency distributions) the Mad Max vision was consistently very negative and the Ecotopia vision was consistently very positive. Age and gender seemed to play a minor but interesting role in how individuals rated the visions. Males rated Star Trek higher than females (mean = 3.66 vs. 1.90, p = .0039). Males also rated Mad Max higher than females (−7.11 vs. −8.20, p = .0112). The means were not significantly different by gender for either of the other two visions. Age was not significantly correlated with rating for any of the visions, but the variance in ratings seemed to decrease somewhat with age, with younger participants showing a wider range of ratings than older participants.

Much more work is necessary to implement living democracy, and within that to create a truly shared vision of a desirable and sustainable future. This ongoing work needs to engage all members of society in a substantive dialogue about the future they desire and the policies and instruments necessary to bring it about. Scientists are a critical stakeholder group to include in this dialogue.

The future, at least to some extent, is amenable to design. As when building a house, a good plan or vision of what the house is intended to look like and how it will function is essential to building a coherent and useful structure. This design process needs to be informed by the reality of the situation – the nature of the complex, adaptive systems within which we are working – but it also needs to express our shared desires. In the future, our knowledge about living systems will dramatically improve, and we can achieve a true consilience among all the aspects of that knowledge. This will help us understand the constraints within which the design process must work. But we also need to involve our imagination, creativity, and ability to envision in order to design as useful and beautiful a world as we can within those constraints.

8 Conclusions

In this vision of the future of science:

  • One’s discipline will be noted much as one’s place of birth is noted today – where one started on life’s journey, but not what totally defines one’s life.

  • Science research and education will balance analysis and synthesis to produce not just data, but knowledge and even wisdom. This will enable vastly improved links with social decision-making.

  • The limits of predictability of complex, adaptive, living systems will be recognized, and a “pragmatic modeling” philosophy of science will be adopted. This will allow new, adaptive approaches to environmental management and better links with social decision-making.

  • A multiscale approach to understanding, modeling, and managing complex, adaptive, living systems will be the norm, and methods for transferring knowledge across scales will be vastly improved.

  • A consistent theory of biological and cultural co-evolution will evolve and increase understanding of humans’ place in nature and the possibilities of designing a sustainable and desirable human presence in the biosphere.

  • Envisioning and goal setting will be recognized as critical parts of both science and social decision-making. We will create a shared vision of a desirable and sustainable future, and implement adaptive management systems at multiple scales in order to get us there.