
1 Introduction

If science is about experiment and testing to ensure only correct hypotheses are accepted, then art might be about balancing hypotheses which either are not or cannot be tested and making choices on the basis of broad brushes in primary colours. There is a strong tradition, at least in English-speaking countries, of seeing art and science as oppositional, with different cultures and different standards [13]. Complex systems analysis can be seen as sitting between these traditions, and indeed some proponents of complex systems analysis have seen it not as a science but as a form of storytelling, which provides a way of describing the one path of history with all of its contingencies and feedbacks.

In this chapter I want to move beyond seeing complex systems analysis as merely a form of storytelling and consider how we can use complex systems thinking to create links between what we think of as science and as art. In other words, complexity science can offer the potential to cut across the traditions of science and art and even to join them up. In addition, I want to use this way of thinking to take forward how policy making and policy choices can be improved.

I will do so by considering three aspects of policy making and how they might be changed and developed by the application of relevant aspects of complex systems analysis. In doing so, I want in particular to expand our appreciation of what should be meant by a scientific approach to policy and how it might help create a more appropriate way of reaching decisions.

My policy experience has been largely based in the UK so I will draw on this as the basis for my argument, but it is relevant to many areas of policy decision making. The three topics I shall consider look at both concepts and process. First, I consider the nature of proof in both science and decision making, and how this is affected in turn by history and experience. This is especially relevant to the introduction of a new scientific approach and how its proofs can be incorporated into day-to-day decisions. This is an underappreciated problem.

Second, I look at the question of optimisation. Policy makers want to be given a solution, not a menu. They particularly don’t want a menu of choices whose outcomes are uncertain because of potential feedbacks. The offer of optimality is a powerful one, and unintended consequences can be left for a future generation of policy makers to worry about. Even scenarios are often unwelcome. It is a real challenge to incorporate an appreciation that optimal solutions don’t exist in decision making.

Third, I want to consider the venerable tradition of ceteris paribus—other things being equal. Which other things can be allowed to be equal, and over what time period, is a significant problem. The addition of complex behaviours and probabilistic rules compounds this difficulty but also allows us to think about it in a different way. In each of these areas, I give examples of policy debates and policy decisions.

2 Proof and the Force of Tradition: Transport Policy

2.1 Proof in Policy

Giving evidence one day at a Planning Inquiry into the building of a new bridge across the Thames, I was asked what standard of proof there was for my contention that increased accessibility would create jobs and encourage more residents. The planning inspector was an engineer by background and pointed out that he had standards of proof for the number of times a piece of metal could be stressed before it failed. What, he asked, was the equivalent for my models?

It has to be said that my answer did not convince him. Our models had used fuzzy logic clustering of London's locations by density and accessibility, relating the cluster centres to create a relationship between population and employment densities and accessibility that controlled for all the unknown and unmeasured factors that might have affected the outcomes. This approach was innovative and had not previously been agreed by any planning authorities. There was therefore a risk to the planning inspector in accepting a new way of arguing.

Second, I argued that transport was a necessary but not a sufficient condition for improved economic performance. It was therefore impossible to identify a ‘transport only’ effect of the new bridge, since development site availability, suitable skills and training policies and the availability of jobs outside the locality would all play a part in providing the outcome. This integrated way of thinking created severe puzzlement. The reductive approach to economics in the last half century or so, supported by econometrics and multiple regression analysis, creates a mindset in which the aim and object of analysis is to separate out individual effects and to control the remainder.

In two important respects, therefore, my evidence stepped outside tradition. We lost the case (not entirely for this reason) and a necessary bridge across the Thames is still to be built.

2.2 Models, Proof and Tradition

One recommendation of the planning inspector into the Thames bridge was to construct a particular kind of model, approved by the Department for Transport. This model describes the interactions between land use and transport. It therefore incorporates at least one feedback mechanism that might be important.

Such a model, known as a Land Use and Transport Interaction (LUTI) model [5], starts with the traditional transport model framework. It therefore fits into a familiar framework. This framework has been built up in the UK over a period of 50 years or so and has some important features for the way in which decisions are made. First it rests on principles of welfare economics. An underlying assumption is that public investments generate non-monetary benefits, since monetary potential will already have been captured by private profit seeking investors. This is a very strong assumption, which I will examine further in Sect. 4. At present, I concentrate on what these non-monetary benefits might consist of. First, the separation of general economic from welfare benefits allowed the analysis of the transport system to be separated from the rest of the economy. Decisions about transport investments could be delegated to a department with control over this area. Its budget would be allocated to the best of the projects and a technology grew up to estimate these.

Welfare benefits became synonymous with the time savings that travellers could make with a new transport investment. Trip demand was separated from this, since that was determined elsewhere by other economic forces and an underlying growth assumption. The bigger the time savings, the greater the benefit and these could be set against costs to generate a benefit cost ratio and a hierarchy of projects worth pursuing.

Of course, time savings need to be monetised to be set against costs in this way, and techniques of evaluation using stated preference surveys and some direct observation were created to enable a standardised method of valuing time saved for leisure, commuting and business travellers. The current values mandated for this purpose are shown in Table 1, taken from the Department for Transport’s latest guidance.

Table 1 UK Department for Transport guidance on values of time, 2014
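To make the arithmetic concrete, here is a minimal sketch of how such values of time feed a benefit-cost ratio. Every number in it (the values of time, the hours saved, the appraisal period and the capital cost) is a placeholder of my own, not a figure from Table 1 or from the guidance, and real appraisals also discount future benefits, which this sketch omits.

    # Stylised benefit-cost ratio: all figures are hypothetical placeholders.
    values_per_hour = {"business": 25.0, "commuting": 11.0, "leisure": 5.0}  # GBP
    annual_hours_saved = {"business": 2.0e6, "commuting": 5.0e6, "leisure": 3.0e6}

    annual_benefit = sum(values_per_hour[k] * annual_hours_saved[k]
                         for k in values_per_hour)
    appraisal_years, capital_cost = 60, 4.0e9       # both assumed
    bcr = annual_benefit * appraisal_years / capital_cost   # no discounting here
    print(f"benefit-cost ratio: {bcr:.2f}")         # -> 1.80

The point of the calculation is the ranking it produces: projects with higher ratios go to the top of the queue.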

In practice, it became clear that new roads simply filled up and the putative time savings seemed to evaporate. A major investigation by the Department of Transport’s Standing Advisory Committee on Trunk Road Assessment (SACTRA) [4] concluded that the model was still valid since, in conditions of perfect competition, time savings could be translated into real economic benefits and thus the valuations were still relevant. Tradition could be upheld.

As a policy maker and proponent of this model once said to me: “We’ve been doing it this way for thirty years, so it must be right”. The force of tradition thus absolves its proponents from a standard of proof, so long as the assumptions, for example of perfect competition, are held to be inviolable.

In a neat twist, SACTRA recommended that the adjustment between perfect competition and the reality of imperfect competition should be 10 %, an uplift that can be added to any benefit calculation. Since it scales every project’s benefits by the same factor, it leaves the ranking untouched and therefore means nothing at all in terms of prioritising projects.

The strength of the tradition of perfect competition is worth exploring as it has particular implications for the implementation of a complex systems approach. Everyone would admit that no such actuality exists, but the search to attain it is embedded in policy in a number of places. I will return to the implications for competition policy in Sect. 3, where its implications for optimality are very important.

Here the implication is for the feedback between the transport system and the economy. If the assumptions of perfect competition are dropped, then the separation of transport evaluation and economic benefit must also be dropped. But this is very hard for policy makers to do.

2.3 Agglomeration

A good example concerns the acceptance of the principle of agglomeration. This process, first described by Alfred Marshall [10] and rediscovered and developed by scholars such as Fujita, Krugman, Thisse and Venables [6, 8], shows how co-location can affect labour market effectiveness, innovation, and productivity. These all count as externalities to the concept of perfect competition, since the existence of a firm affects other firms, contrary to the principles on which perfect competition is based. Once acknowledged, it became possible to take these effects into account, and doing so makes a significant difference to the assessed impact of transport systems which allow such agglomerations, principally major cities, to grow.

Marshall (8th edn., p. 223) wrote ‘When an industry has thus chosen a locality for itself, it is likely to stay there long: so great are the advantages which people following the same trade get from near neighbourhood to one another. The mysteries of trade become no mystery; but are as it were in the air …if one man start a new idea it is taken up by others and combined with suggestions of their own; and thus it becomes the source of further new ideas’. Contemporary examples of this are the high-tech industry of Silicon Valley and theatre districts in cities. Marshall’s arguments included the shared use of expensive machinery, the pool of specialised labour, and other advantageous factors for co-location of similar industries. Apart from these synergies, co-location offers the benefits of complementarity as seen, for example, in the modern shopping mall or the departments of a large hospital. As Marshall argued so cogently, the benefits in agglomerations go beyond the benefits to a particular sector, extending to other sectors and indeed to the whole [10].

Proponents of complex systems analysis will easily see that the concept of an externality links readily to developing rule-based behaviours which can either include or exclude particular effects. The impact of one agent’s behaviour on another’s is also a key consideration in building any complex system, in which outcomes can follow different paths depending on how these interactions emerge.

In traditional policy analysis however, these interactions are a distraction. The process of agglomeration carries uncertainties and intuition would suggest that it intensifies with scale. Indeed, an inspection of the relationship between density and wages across the Local Authority Districts of the UK shown in Fig. 1 lends itself to this conclusion.

Fig. 1 Employment density against earnings differential; 2008–2012 average [17]

Such a conclusion undermines the traditional analysis of time savings which always and everywhere have the same relative values, and in which perfect competition—as the nomenclature implies—has perfect outcomes. It is instructive to consider how attempts to include this element of a complex system—the idea of agglomeration—have fared in the policy framework of the UK’s transport decision making system.

The framework of the idea was already present in the literature when I proposed that it should be used in the analysis of a new commuter railway in London. We produced estimates of the net additional value that could be created by being able to have more productive jobs in central London, where wages and employment density were highest. We showed that the main constraint to such job creation was the transport system.

These estimates were attacked on several grounds. The simplest was to argue that if more productive jobs existed they would be created anyway and less productive ones squeezed out so that the welfare basis for investment remained intact. The more sophisticated was to take the econometric and reductive approach. A priori, it is argued, more productive jobs will be taken by higher skilled people. These skills are independent of the location in which they are exercised and therefore we must distinguish between the return to skills and the return to the location in which they are exercised. If highly skilled people went to work in some other location, they would still reap a benefit from their expertise. A further step is to estimate these differential returns, including to individual industry sectors.

By contrast, I argued that there was symbiosis between skills and location that undermined this separation. Should the skilled and enterprising be prevented from maximising their opportunities, their skills would not remain constant but be likely to decay. In the process of agglomeration however, skills would be enhanced and rewards to qualifications increase. The reductive approach and cross section data analysis misses this.

This argument was a step too far. The reductive approach prevailed and a system of elasticities between density and output was estimated. These are incorporated into guidance, just like the prescription of the values of time to be applied.
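The mechanics of such an estimate are straightforward. Below is a minimal sketch of the log-log regression that underlies an elasticity of this kind, run on synthetic data of my own invention rather than the Local Authority District data of Fig. 1; the ‘true’ elasticity in it is an assumption for the example, not an official figure.

    import numpy as np

    # Synthetic stand-in for district-level observations; illustrative only.
    rng = np.random.default_rng(0)
    n = 200
    log_density = rng.normal(4.0, 1.0, n)           # log jobs per hectare
    true_elasticity = 0.05                          # assumed for the example
    log_wage = 3.0 + true_elasticity * log_density + rng.normal(0.0, 0.05, n)

    # Log-log OLS: the slope is the elasticity of wages with respect to density.
    X = np.column_stack([np.ones(n), log_density])
    beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
    print(f"estimated elasticity: {beta[1]:.3f}")   # close to 0.05

The dispute described above is precisely about what such a cross-section slope can and cannot capture: it is silent on whether skills would decay or be enhanced if the agglomeration itself changed.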

There is a twist. The traditional view, hallowed by time, has prevailed. The estimates of agglomeration are allowed as a ‘sensitivity test’. Moreover, they have been divided into two parts. One part is called ‘pure agglomeration’. This is an estimate of the impact of my productivity on yours, a density effect. It is accepted that this is not covered by the standard analysis. It can be significant but not enormous. The much larger impact is from creating new jobs which are more productive. In a recent analysis of an extension to one of London’s underground lines, the pure agglomeration effect was 15 % of the size of the impact of creating better jobs. But in the perfectly competitive world these jobs would always be created, and hence the traditional analysis still wants to ignore this. And even this impact is only a marginal one. In the traditional analysis everyone who wants to work can, and therefore it is only the net increase in productivity that can be measured.

2.4 Privileging Tradition Over Evidence and Proof

The force of tradition is clear in this story. New aspects to policy making are slow to be incorporated, and are resisted. It is much easier to follow rules than to invent new ones, and it requires much energy and resources to try to change the paradigm. This brings me back to the question of proof. My response to the planning inspector was that in social science, where experiment was hard to do and impacts took a long time to work out, standards of proof did not exist in the same way as they do in engineering. Proof of the traditional models is as difficult as that of newer complex systems.

The traditional approach to economics assumes that all agents have the same motivation and that as a result all available opportunities will be taken up. Such assumptions can easily slip from assumptions into articles of faith which do not require proof; they are taken as being self-evident. The decision maker is much more interested in creating a rationale for decision making than in debating the distinction between an assumption and an article of faith.

The system of producing an analysis of time savings by putative travellers in the future creates a clear ranking of projects with monetised benefit cost ratios. What could be more attractive? Hallowed by decades of practice, it becomes embedded into a handbook of guidance which would, if printed, probably be high enough to sit on. Fortunately it is now web-enabled. But this in turn creates a framework which is harder and harder to challenge as each piece of the jigsaw of regulation looks perfectly sensible on its own and only becomes worrying when you realise how the assumptions build up.

LUTI models, recognised as appropriate in guidance, are a case in point. They identify relationships between land use, whether for employment or residential purposes, and the transport system. They therefore rely on the ability to measure how land use changes in response to transport changes at small geographical levels. They also assume a set of trade linkages between industries to govern employment land use. There is no direct evidence for such relationships: they are entirely assumed from national data and there is no basis for assuming that such relationships hold at a local level. Equally, past changes in land use are as much governed by planning regulation as by individuals’ choices: such regulations are assumed constant in the models. Without a clear understanding of the source of assumptions, evaluation of models cannot be done. But if a modelling approach has been privileged in guidance, then the source of assumptions will be submerged into an assessment of the results, which in turn will rest on what seems plausible to a set of policymakers. This is no sort of proof of the validity of the results but is more akin to relying on judgement while calling it a model.

A similar process can apply even to the more venerable models of transport behaviour which are also used in the LUTI process and which are the basis of the time savings approach. Transport models assume that people will use the most time efficient route for their journey. They solve for the optimum trip patterns, given a set of origin and destination choices. Such models can become complicated and usually work in minutes of time. One minute saved is valued at the same rate per minute as 5 or 15 minutes. Wait times and interchange times are given their own multiples of minutes. Such penalties are based on observation, usually fairly limited. The more fundamental assumption is that people do indeed optimise their travel behaviour on the basis of time. Calibration of these optimising models is made against considerable amounts of data. If the optimising algorithm does not produce the travel pattern observed for the model’s point of time, an adjustment vector is added to ensure that the existing travel pattern is replicated.
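The calibration step is worth seeing in miniature. The sketch below is a toy doubly constrained gravity model with invented numbers (two zones, a made-up time-decay parameter), not any department’s actual model: trips are assigned from generalised time, and an adjustment matrix then forces the base year to match observation. It is that matrix which forecasts quietly inherit.

    import numpy as np

    observed = np.array([[120.0, 30.0], [40.0, 90.0]])  # base-year trips (hypothetical)
    time_min = np.array([[10.0, 25.0], [20.0, 12.0]])   # generalised minutes (hypothetical)
    beta = 0.1                                          # time-decay parameter (assumed)

    def gravity(origins, destinations, cost):
        # Iterative proportional fitting to the observed row and column totals.
        trips = np.exp(-beta * cost)
        for _ in range(50):
            trips *= (origins / trips.sum(axis=1))[:, None]
            trips *= destinations / trips.sum(axis=0)
        return trips

    modelled = gravity(observed.sum(axis=1), observed.sum(axis=0), time_min)
    adjustment = observed / modelled   # the 'adjustment vector' of the text
    print(adjustment)                  # carried forward, usually unchanged, into forecasts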

This clearly creates a problem for testing future transport scenarios. Is the adjustment vector held constant? Should it decay? These important decisions tend to become opaque and technical rather than recognised as privileging judgement over proof. Complex systems analysis has made some headway, but very little into these decisions. It has been overshadowed by the overarching assumption of optimality and indeed a particular form of rationality. Having examined this from the perspective of transport decisions, I now turn to how it plays out in industrial and competition policy.

3 Optimality and Optioneering: Competition Policy and Innovation

3.1 Optimality, Perfect Competition, and Policy

No aspect of policy has been more governed by the concepts of optimality and perfect competition than policy towards industry. Anti-trust legislation rests on the assumption that monopoly is bad as tending to raise profits and prices, while perfect competition with lots of small firms each without the power to affect prices will drive down costs and benefit the consumer.

In the static and stable world of equilibrium economics this makes perfect sense. Markets can be researched and understood, while firms are able to know what else is happening in their marketplace. Markets are easily defined. In this stable world, prices cannot just be set for today; in fact, theory shows they must be set for all future time too. In the real and messier world, this paradigm leads to much effort being devoted to deciding where a market stops, either in product terms or in geographical terms.

I once undertook a piece of analysis on the nature of the market for diesel-powered water pumps. These are largely used on construction sites to remove water from foundations or after flooding. Was this a separate market from that for electrically powered pumps, which undertook exactly the same role but were less frequently used on sites? Were larger pumps, used generally for more permanent purposes, a different market again? The boundaries are always fuzzy. With enough ingenuity, it is quite often possible to show a complete continuum of competition. Regulation has tried to cut through this with the concept of SSNIP, a Small but Significant Non-transitory Increase in Price. The underlying thought experiment considers the likely consequence if a firm makes such an increase. Does competition bring this back again, or is it possible for the firm to keep its gains?
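The thought experiment can be given numbers through a critical-loss calculation, a textbook companion to the SSNIP test rather than any particular regulator’s procedure; the margin and price rise below are illustrative assumptions.

    # Critical loss: the fraction of sales a firm can lose before a price
    # rise stops being profitable, at a given gross margin.
    def critical_loss(price_rise: float, margin: float) -> float:
        return price_rise / (price_rise + margin)

    # A 5 % rise at a 40 % gross margin pays off only if fewer than about
    # 11 % of sales leak to substitutes outside the candidate market.
    print(f"{critical_loss(0.05, 0.40):.1%}")   # -> 11.1%

If the estimated actual loss exceeds the critical loss, the candidate market is drawn too narrowly: the diesel pumps, in other words, do compete with the electric ones.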

All of this assumes, however, a market in which it is possible for consumers to be well informed and in which all firms have similar motivations. Both firms and consumers are optimising either their profits or their utility. These economic concepts are grounded in an even more venerable tradition than transport decisions. The ideas that firms seek profits and that consumers seek utility go back to Adam Smith, John Stuart Mill and Jeremy Bentham.

However, the proposition that an ideal system exists in which they are maximised is of later date and was essentially created by the formalisation of the neo-classical system after the Second World War, notably by Paul Samuelson. Competition policy is based on the identification of, and imposition of, such an ideal system. The break-up of AT&T into the ‘Baby Bells’, prevention of various mergers and acquisitions, have all rested on this idealisation of profit and utility maximisation [12].

3.2 Complex Systems, Regulation and Decision Making

A complex systems approach to market decision making might tell a different story with potential for a different approach to regulation and decision making. I want to use three different aspects to illustrate this. However, in all three we are much further away from seeing complex systems approaches being operational than in transport analysis.

The first aspect I want to examine is market entry. Whether markets are contestable is a key feature in competition analysis. What are the barriers to entry? Do new firms survive? In a dynamic system, firm survival might be low, and yet the pressures exerted could still create market discipline for an incumbent operator. Ormerod and Rosewell [11] calibrated a model of market entry on the UK telecoms market. New firms had a choice about the scale of their entry and therefore their costs, and they faced an incumbent which had a variable rate of reaction to such entry. New entrants could not tell in advance how far the incumbent would react, and indeed the reaction itself could vary.

We showed that in many of the possible outcomes the incumbent maintained a large market share but that it did so only when it reacted in a competitive way. An inflexible incumbent would eventually be competed away and a new incumbent emerge.

The report was compiled for the telecoms regulator, which suspected the incumbent of engaging in anti-competitive behaviour when it was in fact reacting competitively, and described how such market evolution could occur.
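A toy version of such an entry model, with rules and parameters of my own choosing rather than the calibration in [11], might look like this:

    import random

    random.seed(1)

    def simulate(n_periods=50, reaction_mean=0.5):
        # Incumbent share after repeated entry attempts of random scale,
        # met by reactions whose strength entrants cannot know in advance.
        incumbent_share = 1.0
        for _ in range(n_periods):
            entry_scale = random.uniform(0.01, 0.05)   # entrant's chosen scale
            reaction = min(max(random.gauss(reaction_mean, 0.2), 0.0), 1.0)
            # Share erodes in proportion to how little the incumbent reacts.
            incumbent_share *= 1.0 - entry_scale * (1.0 - reaction)
        return incumbent_share

    print(f"responsive incumbent: {simulate(reaction_mean=0.9):.2f}")
    print(f"sluggish incumbent:   {simulate(reaction_mean=0.1):.2f}")

Run repeatedly, the responsive incumbent keeps most of the market while the sluggish one is slowly competed away, which is the qualitative result described above.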

The research did not assume that firms engaged in profit maximising behaviour, but did assume that they were seeking profit. In standard theory such behaviour would lead to maximum profit if there are no economies of scale and everyone knows everything. Policy then takes a further step in asserting that such a position can be enforced. Indeed this is what central planning purported to do. Once the ideal amount of production has been decided upon, this equilibrium set of instructions is sent out. The potential success of this is demonstrated by the failure of the USSR, but it lingers on, a subject to which I return in Sect. 4.

Here, I want to focus on whether firms or people are really likely to be seeking either profit or utility. This is especially relevant to the regulation of natural monopolies, such as the water industry, electricity and gas distribution, and so on. Regulators who work on the assumption that their industry can be described by maximising behaviour can be caught out when their regulatory framework has unintended consequences.

3.3 The Myth of the Utility Optimising Consumer: Copying and Nudge

It is hard to know where the trade-off lies for consumers between quality and price. This is especially difficult if quality is binary. Either the water is safe to drink or it is not. Either the electricity system is robust and reliable, or it is not. A standard utility model suggests that all aspects of utility are divisible but this is clearly not the case. Moreover different groups of consumers not only might have different preferences as individuals but create communities of interest in which a spectrum of interests becomes polarised. The treatment of noise in aviation is a good case in point. Analysis of preferences suggests that noise can be valued such that it is relatively unimportant against the benefits of additional flights. However a reading of press coverage or attendance at interest group meetings would suggest something entirely different.

People who will tell you that they are not affected much by aircraft noise will also tell you that it is a very important issue. Copying the opinion of others appears to be just as significant as individual independent opinion. This immediately undermines a key assumption of standard analysis and means that we need a more complex systems view which takes this into account.

Unlike much of the complex systems approach, this insight has been both popularised and taken up by governments as the concept of nudge. Thaler and Sunstein [14] produced a summary of the potential for the policy as if it were always beneficent. Nudge is interpreted as framing policy in a way that makes it more effective in changing behaviour. Curiously, it has been taken up most strongly by environmental policy makers and the tax authorities. Tax collectors use variants on a theme which suggests that everyone else has paid, so you should too. Environmental policy uses a similar approach to encourage recycling and reductions in greenhouse gases. Another variant is to frame the policy choice as opting out rather than opting in: most recently for pension savings. The UK government has a Behavioural Insights Team (http://www.behaviouralinsights.co.uk) dedicated to finding new ways to ‘nudge’ its citizens towards complying with its policies.

The analysis of copying behaviour was pioneered in relation to internet behaviour, with experiments such as how music downloads were affected by knowledge of others’ choices; companies such as Google, where Hal Varian is Chief Economist, have large research departments focused on such behaviours. Nudge is, however, more a mechanism for policy implementation than for policy making. Once a policy is decided, how are citizens to be made to implement it? From the wearing of seat belts, to stopping smoking, to a willingness to recycle or observe speed limits, citizen compliance and acceptance is crucial. Enforcement is one route, chosen by police states; elsewhere, persuasion and social norms become more important.

However, in this chapter I want to focus more on policy making than on implementation. Whatever route is chosen to ensure compliance with a policy, the prior question is whether it is the right policy to have. What this brief discussion of behavioural management does show is that to assume that consumers—or citizens—are continuously maximising an internal utility function is misplaced. What they care about is as likely to be influenced by the experience of others as by their own internalised preferences.

3.4 Policy and the Myth of the Profit Maximising Firm

What of the profit maximising firm? This construct has been both idealised and demonised. Proponents of the standard model have shown that the ideal firm, pursuing profit or shareholder value, will be the most efficient and will produce at least cost so long as anti-competitive forces are not allowed to stand in the way. Opponents of capitalism, such as the Occupy movement (http://www.occupytogether.org), see the pursuit of profit as undermining morality, exploiting consumers, and driving pollution and tax avoidance.

The theory of the ideal firm says that the pursuit of profit leads to an elegant solution in which the marginal cost of producing an extra unit is just balanced by the additional revenue, yielding the greatest efficiency, in which only ‘normal’ profit can be earned. Firms don’t actually need to know where this point is, as competition will find it out by moving resources from less profitable to more profitable enterprises until it is found. In the real and complicated world, most businesses are not only unable to know what marginal costs and revenues are, they are more fundamentally concerned about survival. I once undertook a project which required the calculation of such marginal costs and revenues to determine whether a set of plants were at the lowest point on their cost curves (for policy purposes). It was almost impossible to know, and for most of them the potential economies of scale were such that it was hard to see that they would survive. They did not.
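The textbook condition being invoked is simple to state: with revenue R(q) and cost C(q), profit R(q) - C(q) is maximised where marginal revenue equals marginal cost. A two-line check with assumed functional forms, purely illustrative and nothing to do with the plants in question:

    # Assumed forms: linear demand, quadratic cost (illustrative only).
    # R(q) = (100 - q) * q  =>  R'(q) = 100 - 2q
    # C(q) = 20*q + q**2    =>  C'(q) = 20 + 2q
    # R'(q) = C'(q)         =>  100 - 2q = 20 + 2q
    q_star = (100 - 20) / (2 + 2)
    print(q_star)   # 20.0 units: the 'elegant solution' of the theory

The anecdote above is precisely that real firms rarely know anything resembling R'(q) or C'(q).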

Talking with chief executives, finance directors and other managers over decades, it is hard to conclude that they pursue profit with anything like single-mindedness. Many other motivations intrude. Investments may be seen as potentially very profitable, but the risk is too scary for the management. Others may want a quiet life, and simply do what the regulator tells them. In a more realistic motivational world, neither firms nor consumers may behave as the limited rationality of the economist predicts. In this world, the policy maker faces unintended consequences of, for example, price regulation.

3.5 Innovation and Networks

The area of policy where motivational richness is most important is that of innovation. Innovation is inherently risky and most new ideas may never come to fruition. It is not the same as invention, which is having the new idea. Innovation is making it happen and spread in the wider world. It is crucial to the process of economic growth but remains outside the standard model, where equilibrium is a static concept which can be described by market shares and price competition for a known product. In practice, innovation is a major battleground for firms. Traditional enterprises will try to stifle the newcomer, or even innovators within their own ranks, while upstarts create new products or new processes which undermine incumbents.

The emerging digital economy illustrates how firms controlling older forms of communication have struggled to compete with the new behemoths of the digital age. Earlier, the advent of large scale computing undermined the producers of accounting machines; the computer manufacturers were themselves then forced to adapt to the introduction of the microcomputer and personal access to computing power. The fax machine became ubiquitous and has now almost disappeared again, to be replaced by email.

The mechanisms by which innovation happens are neither well understood nor well adapted for policy. They are neither science nor art but a blend of the two, and complex systems analysis has concentrated on this area (e.g. Antonelli [1]).

Innovation does not arise simply through individual decisions. That might happen with an invention but to turn an idea into an effective and widespread phenomenon requires networks. Networks in turn work through supply chains as well as peer groups. Indeed research we undertook in the Manchester region suggested peer groups were more likely to promote protection of an idea than its percolation. Supply chains were more likely to disseminate a new idea [16].

Networks can either facilitate or squash dissemination of an idea. If the first steps along the network do not pass it on, it dies. Quite a bit is now understood about different sorts of networks and their potential to generate a cascade in which the whole network is affected by something new. Neither science nor art has mastered how to characterise these in practice, or what mix of close and weak linkages is most likely to generate successful new ideas. Policy in this area has been particularly affected by the operation of vested interests and has tended to think in terms of peer groups. Similar firms are encouraged to form groups. But such networks may be as easily determined to stop innovation as to foster it, so that members maintain their current positions. A recent study of industrial clusters in the UK showed that no successful grouping of innovative industrial firms had been achieved by policy [2].
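The cascade intuition is easy to demonstrate. The sketch below runs an independent-cascade process on a random network; the network size, degree and pass-on probability are all assumptions chosen for illustration, not estimates from the Manchester research.

    import random

    random.seed(4)
    N, avg_degree, p_pass = 200, 4, 0.3   # all illustrative assumptions

    # Build a random undirected network.
    neighbours = {i: set() for i in range(N)}
    for _ in range(N * avg_degree // 2):
        a, b = random.sample(range(N), 2)
        neighbours[a].add(b)
        neighbours[b].add(a)

    def cascade(seed_node):
        # Each adopter passes the idea to each neighbour with probability p_pass.
        adopted, frontier = {seed_node}, [seed_node]
        while frontier:
            node = frontier.pop()
            for nb in neighbours[node]:
                if nb not in adopted and random.random() < p_pass:
                    adopted.add(nb)
                    frontier.append(nb)
        return len(adopted)

    sizes = sorted(cascade(random.randrange(N)) for _ in range(20))
    print(sizes)   # mostly tiny outbreaks, with a few near system-wide cascades

The same rules produce ideas that die at the first step and ideas that sweep the network, depending only on the luck of the early links: exactly the sensitivity described above.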

3.6 Innovation and Optioneering

Innovation is essentially a search and optioneering exercise; it is hard to imagine what might be meant by equilibrium in innovation, since successful innovation is by definition a disruptive phenomenon in which there will inevitably be losers as well as winners. Government has a tendency to confuse invention with innovation and to believe that inventors also make good innovators. This is not often true. It is more often the case that designers and inventors need business partners who can make practical their ideas, focusing on scaling production, finance and markets.

Optioneering can be interpreted as taking a systematic ‘engineering’ approach to the selection of options where there is no clear optimum. It tries to put in place clear and structured processes for decision making and regulation. In the private sector, the messy reality that there are no optimal solutions to complicated multidimensional problems makes some variant of optioneering essential if firms are to survive. As we have seen, the public sector remains wedded to the possibility of optimisation, despite the poor decisions that arise.

Complex systems approaches to these issues have made little inroad into policy, because they offer few simple prescriptions, either for what constitutes effective competition or for how to promote successful innovation. A call to improve networks is hard to implement and has no easily visible signs of success. Competition is still bedevilled by the attractions of the words ‘perfect’ and ‘optimal’. The well-established results that these cannot be achieved are forgotten. A particularly important aspect of this is the focus on comparative statics, the comparison between a ‘do nothing’ outcome and a ‘do something’ policy. The benefits of doing something are the difference between the two. What matters as much as whatever the policy might be is the ‘do nothing’ scenario with which it is to be compared. This brings us to the troubling question of ‘ceteris paribus’—other things being equal. It is to this assumption and its consequences that I now turn.

4 Other Things Are Not Equal: Cities, Devolution, and Growth

All modelling limits its scope. Outside the scope of the model, it must be possible to hold that no factors of importance will affect the focus of the model. In my discussion of transport models, for example, the need to make a trip is outside the model and is not affected by the provision of transport systems.

4.1 The Do Nothing Policy Option

The great advantage of a complex systems approach is that it challenges assumptions on what should or should not be included in the model. In doing so, it adds dimensions of time and space which are missing from the comparative statics approach in which it is fatally easy to believe that the ‘do nothing’ future is easy to understand.

If a social system were in equilibrium then the policy of doing nothing would leave it unchanged. The reality is, of course, that social systems are constantly evolving and the policy of doing nothing does not mean that the world will not change—it means that the world will change in ways that may be contrary to policy objectives.

The evolution of models of the macro-economy illustrates this. In their earliest formulations, such models, which I both learnt and taught in the 1970s, essentially excluded economic growth. Growth was treated as independent of the general economy and indeed as a more advanced subject. Macro descriptions of the economy focused more on the relationship between consumers and spending, investors and savers, and how government filled the gap. Time lags were afterthoughts and add-ons. This fundamental way of thinking about the economy is still influential and can be seen in the writings of Paul Krugman [8], for example, where it seems obvious that if there is a gap between the putative and actual output of the economy, then a mix of interest rate adjustment and government borrowing fills the gap with no consequences.

4.2 Central Planning

This analysis is in turn the intellectual heir to the central planning movements influential in western socialist parties and in the socialist administrations of the Soviet and Chinese bloc. Central planning substitutes the model for the messy business of actual markets, firms and consumers. By capturing all necessary information in the model, it should be possible to identify the ideal set of outputs to use all available resources and to produce the best possible outcome. Central planners can proclaim an end to boom and bust. In a static world, perhaps it is possible to envisage a model being able to capture all the information that could possibly exist about products, and all the information about consumers’ tastes and preferences. Even so, the mind boggles at the computer power this might require.

Out in the real world, of course, consumers not only have a wide variety of tastes and preferences, not always consistent, but moreover change their minds. What was my favourite dish last year seems boring this year. I’ve just seen someone check train times on an iPhone in a minute or so, and I want this new product. The messy business of markets is the non-equilibrium process of exploration. Will a new product sell? What will happen to previously existing products? The strength of market processes is that they create a relatively painless mechanism for finding out. Experiment is possible and consumers have direct mechanisms for making their choices known. Market processes certainly have flaws, but they have generally served us well in creating economies with longer life expectancies, lower child mortality and where poverty is measured more in a lack of consumer durables than in malnutrition.

In spite of its clear failure in both Soviet Russia and China, however, central planning still has adherents. The intellectual current that believes that the man from the government will know best is very powerful, especially since it is supported by the men (and some women) from the government whose role it is to do the planning and create the policies. Quite often, the bureaucratic view that competition is wasteful and profit inappropriate dominates.

It is worth examining these views more carefully as they are very important in the policy process. In addition, they interact with the important institutional context in which policy decisions are made. A good illustration of these issues relates to development planning, the role of infrastructure and local powers.

The UK is notable for its degree of centralisation. In London, for example, the Mayor of London directly controls, with full fiscal discretion, only about 5 % of his budget. Most spending comes through central government allocations and a multiplicity of sources. A report compiled for the Greater London Authority in 2010 by the London School of Economics [15] concluded that spending allocations were so complicated they were impossible to understand, and as a result it was also impossible to develop local prioritisation.

In Sect. 2 I argued that, for example, transport was necessary but not sufficient for economic development. It is at a local or regional level that the interactions between transport and other policies can be made apparent and real. This is complexity in action. Lord Deighton [3], in his task force on maximising the benefits from investment in High Speed rail, made effectively the same point when he argued that the local authorities around each station location should have a growth and development plan to take advantage of the new connectivity. The challenge, of course, is that the station locations have generally been chosen to maximise the efficiency of an operational railway, rather than to maximise the economic opportunity. Not surprisingly, the relevant authorities have been coming back to say that the locations are not well placed to be fit for this purpose.

4.3 Options Versus Optimality: The Dynamics of Policy Formulation

The transport planners, on the other hand, have taken development for granted and been unable to think in option development terms. The order in which decisions have been taken has governed the range of possible options which can then be pursued. In the case of High Speed rail, the ordering seems to have started with the fastest routes between a small group of cities. This then constrains possible station locations, and in turn what development potential there might be. A different ordering of priorities could potentially produce a very different plan, and more of a debate about the trade-off between speed, stops and the line of route. However, at the outset, no-one asked the cities what inter-city transport improvements would be most effective in generating improved economic performance. That priority was only identified later. Lock-in has then taken place, as the drawing board is never a blank slate.

I learnt this when building the case for economic development generated by a new cross-London railway. At the outset I challenged whether this was the ‘right’ railway. I was still learning how complex decision making works in practice! I was advised to keep quiet about this question and told: ‘This might not be the ideal railway, but it is one we are able to build. Stop this and it will be another twenty years of re-engineering, and even then it won’t be “right”.’ I learnt this lesson well. The best can be the enemy of the good, and the time frames for decision making, procurement and project management should not be underestimated if a project is to stick to budget.

Everybody has a tendency to think that what they do is hard, while others have it easy. Good project design and management rests on the understanding that it is all hard and that each discipline has its challenges. Meeting these challenges takes time and happens in particular places. The right people in a meeting with the right information will make a very different decision, driving a project in a better direction than either the wrong people or inadequate information would be able to do.

The case for Crossrail indeed rested on this idea—that getting people together makes a difference to outcomes for the economy as well as for individual projects. In turn, the dynamic that makes this possible includes the motivation to deliver that drives decision making. Competition and profit are part of this dynamic. In a static world, competition is indeed wasteful and profit unnecessary. But in a world of change, competition and profit are part of a discovery process about what works and what can do well. The world is a world of options, not of optimality.

This brings us to the challenge of understanding what makes a difference—what is additional, and what happens in any case.

5 Conclusion: The Additionality Bugbear

A complex, non-equilibrium approach to policy is not an easy one. The standard model, in which an equilibrium solution exists and can be found, is much more comforting. It is particularly noteworthy that even this comfort requires setting aside the theory of the second best. Lipsey and Lancaster [9] showed many years ago that if any part of an economic system is sub-optimal in standard terms, then making an apparently positive shift in one part cannot be guaranteed to improve the fitness of the system as a whole.

A non-equilibrium approach, by contrast, requires the policy maker to start from first principles with a description of all the relevant aspects of the problem in question. There is no presumption at the outset about maximising behaviour, and elements of the system can only be ignored if they can reasonably be thought to be of minor importance.

This produces both a different description of a situation and also potentially a very different description of the impact a policy might have. This is the additionality problem. What is the difference between the outcome without the policy change and that with it? What additionality can be ascribed to the policy change?

In the standard model, where there is continuous optimisation, additionality is very hard to achieve. All profitable investments will be undertaken, and additions can only be made in terms of welfare which then has to be given a monetary value, e.g. reductions in obesity are valued in terms of additional years of life, which is then monetised according to earning capacity and healthcare costs.
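As a stylised illustration of that kind of welfare arithmetic (every figure below is hypothetical, invented for the example rather than drawn from any guidance):

    # Stylised monetisation of a health benefit; all numbers are hypothetical.
    extra_life_years = 1.5        # average gain per person from the policy
    people_affected = 10_000
    value_per_year = 30_000       # assumed earnings plus avoided healthcare costs, GBP

    benefit = extra_life_years * people_affected * value_per_year
    print(f"monetised welfare benefit: GBP {benefit:,.0f}")   # GBP 450,000,000

The apparent precision is, of course, entirely inherited from the assumptions.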

In a non-equilibrium world, there is no reason to believe that all profitable investments will be undertaken. This is the most important conclusion to reach. However, this is not the same as the ‘Keynesian’ syndrome in which full employment is a matter of arithmetically adding up spending and if there is insufficient employment just adding the government ingredient. That is just as static an approach as the neo-classical one in which we are always at the only possible equilibrium.

In a non-equilibrium world, something is always changing and the do nothing scenario does not really exist. At the very least, it is necessary to consider whether the future is following the same path as the past. Nor is it necessary to construct complicated models to understand this. When I first became Chief Economist for the Greater London Authority, the task was to consider long term employment prospects. The narrative was simple.

Over the previous 20 years, employment in the business services sectors had grown at a steady pace, while manufacturing employment had massively fallen away. Essentially, a million jobs in manufacturing had been replaced with rather more service sector posts. Since manufacturing employment was now a rump, the question became one of whether this long term service sector trend could and would persist. We concluded that London’s position in the world economy and the forces of globalisation meant that it could persist, so long as there were no external constraints. The most significant of these was the transport system, followed by poor school quality.

Do nothing, in other words, and there was a risk that the relatively recent upward trend in total employment could come to a halt. Address the underlying infrastructural problems, and there was a good chance that total employment could continue to rise. Policy in London focused strongly on both aspects, and increased transport investment and improved school results followed. So too did increased employment, which then followed the trend that had been identified.

This narrative remains a strong one, but it is important to note that it rests on probabilities. Can I be sure that the policies created the framework without which employment would have stalled? This returns me to the question posed at the beginning of this chapter because the answer is no. In a complex, non-equilibrium world, all policy impacts are on the balance of probabilities. For that matter, all investment impacts are on the balance of probabilities too. No investor knows in advance that her analysis of the likely outcomes is correct and bankruptcies are common.

Risk is not only inherent ex ante but cannot be entirely mitigated. Things do actually go wrong and portfolios contain poor investments as well as, hopefully, good ones. However, the fact that things have gone wrong does not mean that the judgement was bad in the first place. Risk free investment, and risk free policies are not possible in a complex world. What matters is to have a strong story, backed up by strong evidence on the main elements of the story. Then take a bet.