Economics is a science of thinking in terms of models joined to the art of choosing models that are relevant to the contemporary world.

-John Maynard Keynes, 1938

1 Introduction

In November 2015, after weeks of bitter political rhetoric and much discussion, the UK’s Conservative government reversed its decision to eliminate tax credits for the poorest. While much was made of this reversal, and struggles were fought over who deserved responsibility for the change, a more subtle politics was going on behind the scenes. The primary justification for the u-turn given by the Chancellor, George Osborne, was that tax revenues in forthcoming years were now expected to be much higher than previously forecast. Yet, a closer look at this justification revealed that 55% of those extra revenues (£29.1 billion out of £52.2 billion) came from “modelling changes”. Suddenly, the discursive sparring over tax revenues found its material embodiment in the computers of the Office for Budget Responsibility. Such macroeconomic models are pervasive in economic policy-making, and just as influential as this small example would suggest. Yet, they have remained a largely unexplored area, despite their significance and despite the attention paid to similar models in financial markets.Footnote 1

The aim of this paper is to undertake a preliminary investigation into central bank macroeconomic modelling as a key site of sociotechnical reasoning: where technology and cognition intermingle in close and crucial ways. Central banks are an important site where ‘cognition in the wild’ takes place (Hutchins 1995), and they provide a rich context for thinking through the relationships between machines, reasoning processes, and the production of governability. This is not least because central banks are increasingly experimenting with unorthodox approaches to making the economy visible—using Google search data, for instance, or machine-learning algorithms. As AI techniques are adopted by central banks, these institutions will be well-positioned to give us insights into how humans and machines can think together, and how the state will deploy these technologies to make the economy legible. I intend in future work to develop these ideas in more depth, but this piece will begin the project by interrogating a major shift in central bank modelling in recent decades: the move from structural econometric models to dynamic stochastic general equilibrium (DSGE) models. This change, as I hope to show, has brought about an important shift in the epistemic influence of models on the reasoning process that determines monetary policy-making.

The first section will examine some of the literature relevant to the discussion of sociotechnical reasoning within central banks. While very little research has been done on this specific topic, Mary Morgan’s work in particular can help us consider how thinking with models works in this context. Whereas her work has focused on the academic use of models, this piece is aimed at policymakers’ use of models. The second section will outline how central banks deploy models as part of a larger sociotechnical reasoning process involving communities of experts and ecosystems of models. It will attempt to show how forecasting uses models as anchor points, while policy analysis uses them as artificial laboratories. The third section will then look at how the types of models used in central banks have changed, focusing particularly on the most recent period, in which DSGE models became dominant. Finally, the penultimate section will show how this shift has brought about a change in the epistemic influence of the models, and how this may help explain why central banks missed the 2007–2008 crises. In particular, DSGE models bring a new emphasis on theoretical consistency, but as a result render unmodelled empirical data more obscure within the broader reasoning process.

2 Extending cognition

Central bank models have not yet been studied as a site of extended cognition, and within economics they have received only specialist (i.e., policymaker) attention,Footnote 2 but a variety of literatures can help us gain a grasp on the dynamics at play. Most relevant here is extended mind theory—an approach that has been taken up in different ways by philosophy and sociology (Clark and Chalmers 2008; Menary 2010). We have already alluded to Edwin Hutchins’ work on ship navigation, where he examines cognition as something that happens across technical tools and collectives. Yet, similar approaches can be found in Ronald Giere’s analysis of the Hubble telescope (Giere 2006, Chap. 3); William Clancey’s work on the Mars rovers (Clancey 2012); and Karin Knorr Cetina’s examination of particle accelerators (Cetina 1999). In all of these cases, distributed cognition carries out various processes—it creates perceptions (as in the Hubble telescope example), it carries out research (as in the Mars rover example), and it coordinates large technical systems (as in the ship navigation example). As Mary Morgan’s work shows, with the use of models, distributed cognition comes to take on a function of reasoning.

Models, at their most basic, make explicit our implicit intuitions and hunches. They operate by combining various elements together in a partially flexible way—they link together analytical definitions, empirical regularities, local contexts, universal principles, disciplinary laws, and concepts into some sort of consistent whole (Morgan 2012, 80). While contemporary economics primarily concerns itself with mathematical models, models have also historically taken the form of visual diagrams (such as the Tableau Économique drawn by François Quesnay), ideal-type models (such as David Ricardo’s model farm), physical models (such as Bill Phillips’ hydraulic model), and computational models (such as the Federal Reserve’s large-scale FRB/US model). Any given model is constituted (and constrained) by two sets of rules for manipulation (Morgan 2012, 26–27). First, the rules of the material are determined by the matter making up the model, whether physical (e.g., models that attempt to build scale replicas of the phenomenon in question) or ideational (e.g., models built in algebraic language or a particular computer language). In both cases, one is bound by the rules of how one can manipulate such material. The second broad constraint is the rules of the subject matter itself—the theoretical concepts and their interrelations that the model builders have built into the technology. In macroeconomic modelling, both sets of rules have channelled thinking about economics down particular paths. The limits of computation, for instance, have led economics to focus on stable linear relationships rather than destabilising non-linear ones (Buiter 2009). Likewise, the focus of contemporary economic theory on market-clearing has led to an avoidance of the question of involuntary unemployment, and of disequilibrium more generally.

A consequence of the two sets of rules imposed by modelling is that they establish a set of interrelations, and therefore a precise pathway along which a chain of consequences can be followed. On this basis, what gives contemporary computational models their peculiar power is their capacity not only to organise but also to outsource cognition. While organising cognition can be a virtue in itself (double-entry bookkeeping, for example), in computational models the rules of the model are outsourced into a medium that can delineate their consequences step by step. With such a technology in hand, one can allow the calculative and inferential processes to expand far beyond any human capacity. Models, therefore, solidify certain rules of thought, and this consolidation of a particular state of knowledge is the source of both their power and their limits.

In addition, models perform a certain function—namely, to convince. In this regard, their construction is also a means to transform a purely conceptual argument into a medium for propagation and persuasion. This is more than just the widely-known argument that numbers give an illusion of certainty and precision. The point is rather that the very path of reasoning is altered and made more palatable (or not) depending on the medium through which it is made. To give an example, Morgan recounts David Ricardo’s efforts to argue with Thomas Malthus about the potential for economic stagnation (Morgan 2012, Chap. 2). Through the use of narrative and table-based reasoning, Ricardo consistently failed to make his case. However, the creation of a model farm linked together by reasoning based on accounting principles played a key role in making his argument persuasively. A more recent example can be found in central banks’ use of simple(r) models (such as overlapping generations models) to communicate with policymakers and the public. The vagaries of more complex models can distract from or obscure the essential aspects, while simpler models allow audiences to focus on the pertinent economic relationships, rendering the argument more persuasive. Models must be seen as intimately social, rather than as a purely neutral expression of objective features.

Therefore, the point to be taken from Morgan’s work is that models condense a set of inferential and material rules into a medium that also alters the persuasiveness of the reasoning. Their ability to deduce the consequences of long chains of inference makes them powerful tools for understanding complex and interconnected systems. As such, it is no surprise to see them gain significant traction (and financial backing) within central banks and their efforts to make the economy intellectually tractable. The economy (along with nature) is one of the preeminent complex systems of the modern world, and models are essential to rendering such systems intelligible.

3 Sociotechnical reasoning

While Morgan’s sensitive and patient reconstruction of model use throughout the history of economics is illuminating in many ways, it remains focused on academic economists. When we turn to central banks, it remains to be seen whether models are used in the same ways, or whether policy-making circles deploy models in their own unique manner.Footnote 3 The argument here is that there are a number of key differences between academic and policymaker use of models, and the aim of this section is to outline some of these differences. In what follows, I try to answer this question by drawing upon evidence from the Riksbank, the Bank of England, and the Federal Reserve, since they are among the most transparent of central banks. They are also, in many ways, leading innovators within the field and provide a good basis for understanding best practice.

How, then, do central banks think, and how do they use models within this process? In their everyday dealings, central banks undertake a variety of tasks: overseeing the payments system, regulating banks, collecting statistics, and, of course, deciding on monetary policy. It is the latter task that will interest us here, as it is the site in which computational modelling and political decision-making most closely intertwine. In examining this decision-making system, it is impossible to locate the process of reasoning in either a specific individual or a specific model. Instead, the process behind monetary policy decisions involves a sociotechnical reasoning system that spans humans and technologies (Hutchins 1995, Chap. 9). In the first place, monetary policy decisions are typically (with New Zealand being a notable exception) made by committee rather than by an individual. Research has shown that the group structure of monetary policy committees leads to a variety of epistemic tendencies (such as a propensity towards policy inertia) (Blinder and Wyplosz 2005). This means that the assumption that central bank decisions are made by a single agent misses the committee form and the deliberative and sociological dynamics that play out within it. Policy decisions are community decisions.

Equally important are the interactions between human and nonhuman modes of reasoning. In their interaction with models, for instance, subjective judgments are often used to supplement forecasts with short-term and newly available information that has yet to be aggregated into traditional statistical measures like the national income and product accounts (Sims 2002, 21; Fawcett et al. 2015; Rosenberg 2008, 4). The Bank of England, for instance, incorporates more frequent information (such as business surveys and other leading indicators) via the judgment of members of the Monetary Policy Committee, rather than directly through the core model itself (Hatch 2001, 140). Likewise, the Federal Reserve relies on sector experts to provide subjective forecasts, sometimes as inputs into the core model (Sims 2002, 4). In addition, the Riksbank has begun carrying out company surveys to get more real-time information about the state of the economy, which is then fed into the core forecasting effort (Rosenberg 2008, 4). DSGE models require parameterisation—the setting of values for the model’s parameters. These parameters are usually estimated from historical data (along with an uncertainty range for each parameter). However, key information is also derived from expertise, microeconomic research, and Bayesian statistics (Dotsey 2013, 12). Therefore, human reasoning has an important role to play here as well; model-based reasoning is ultimately inseparable from a larger sociotechnical form of reasoning.
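
In stylised form, the Bayesian logic gestured at here combines prior information with historical data; the following is a generic sketch rather than any particular bank's documented procedure:

\[ p(\theta \mid Y) \propto p(Y \mid \theta)\, p(\theta) \]

where \(\theta\) collects the model's parameters (the degree of price stickiness, for instance), the prior \(p(\theta)\) encodes expert judgment and microeconomic research, and the likelihood \(p(Y \mid \theta)\) measures how well the parameterised model fits the observed macroeconomic time series \(Y\). The resulting posterior supplies both the parameter values used in the model and the uncertainty ranges attached to them.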

Turning to the computational modelling side, central banks deploy models for a variety of functions and, therefore, rely upon an ecosystem of different models. This is particularly important, since no single model is a perfect representation and there is model uncertainty. In part, this uncertainty is minimised by using multiple models—similar to the recent shift to ensemble modelling in the climate change sector (Collins 2007). If models agree on the implications of a shift, then it can be considered that there is little uncertainty, and vice versa.Footnote 4 Within this ecosystem, the first type comprises sectoral models, which are typically small though very detailed, and which are used to explore particular areas in more depth. Oftentimes, they are based heavily on the knowledge of sectoral experts who attempt to map the model as closely as possible to the data. A second type comprises experimental models, which are used to test and introduce new innovations. At the Federal Reserve, for example, these models are used “for proto-typing specifications and structures that would be too difficult and expensive to incorporate in [the core] FRB/US without a reasonable sense that such alterations are warranted and useful.” (Stockton 2003, 9) One can think of these models as exploratory probes that test out new intuitions of the modellers. Third, a variety of cross-check models can act as a check on the results of the core model. They may be simple models designed to ensure key intuitions about the economy are upheld, or more complex ones designed to track the data as closely as possible. Finally, the most prominent type is the core model—a large-scale, all-encompassing model that attempts to represent the entire economy and provides the foundation for discussions.
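
One simple way to formalise the intuition behind cross-model agreement (a generic illustration rather than a procedure documented at any of these banks) is to pool the forecasts of M models and measure their dispersion:

\[ \bar{y} = \frac{1}{M}\sum_{m=1}^{M}\hat{y}^{(m)}, \qquad d = \Biggl(\frac{1}{M}\sum_{m=1}^{M}\bigl(\hat{y}^{(m)} - \bar{y}\bigr)^{2}\Biggr)^{1/2} \]

where \(\hat{y}^{(m)}\) is model m's forecast of the variable in question. A small dispersion \(d\) can be read as low model uncertainty about that implication, while a large \(d\) signals that the conclusion is heavily model-dependent.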

It is these core models which we will focus on for the remainder of the article. These core models are used for two key tasks (though there are others as well): forecasting and policy analysis. In the first instance, models are used to offer predictions about where the economy as a whole is going (Sims 2002, 2; Stockton 2003, 9–10), as indicated by key variables like inflation, broad risks, the financial sector, the international context, and the domestic economy (Rosenberg 2008, 5–6). These forecasts are produced in an iterative manner over the span of a number of meetings, but modelling remains central to all of them, and “in recent years, models have played an increasingly important role in the work on forecasts” (Hallsten and Tägtström 2009, 73; Hatch 2001). Models are particularly significant in forecasting long-term trends, because human expertise weakens over long-term projections, while models can maintain the interdependent inferential connections between elements of the economy (Sims 2002, 21). On the level of reasoning, forecasting involves channelling particular intuitions and inferences about the relationships between economic variables down a set path. This capacity to extend inferential chains far beyond what the human mind can manage is one aspect that distinguishes models from human reasoning. Morgan’s image of models as composed of two sets of rules is most prominent in this function, particularly as models extend intuitions that might otherwise be embodied in expert judgment into long-term forecasts that reach far beyond human capacities alone. This unique reasoning capacity means that models act as anchor points around which discussion centres (Stockton 2003). In the Riksbank, for instance, the overall forecast provides the baseline from which sectoral experts can then go and substantiate the conclusions in more detail (Rosenberg 2008, 3).

The second function of core models—policy analysis—also looks into the future, but does so by modifying various elements of the model in an attempt to simulate the effects of policy interventions. Under these circumstances, as the former central banker Alan Blinder notes, “some kind of model—however informal—is necessary to do policy, for otherwise how can you even begin to estimate the effects of changes in policy instruments?” (Blinder 1998, 7). Relatively theory-free models (such as vector autoregressions) simply attempt to generate equations that fit the historical data and to predict the future on that basis. Yet, in testing out policy alternatives, these models fail, since they have no place for theoretical considerations about how monetary policy may affect the economy (Adolfson et al. 2007, 113). In other words, they excel at extrapolating from existing trends, but falter when those conditions shift due to policy decisions. At this point, models based more on theory, like DSGE models, become essential. Models are deployed here as artificial laboratories rather than as extrapolations of existing trends. This function most closely approximates the academic’s use of models, as both deploy models as artificial worlds within which users can manipulate variables and create experiments (Morgan 2012, Chap. 7). For instance, both academics and policymakers may ask questions about what happens to the economy if interest rates are maintained instead of raised, or what happens if government bonds are purchased in open market operations. In any case, models are essential for policymakers because they provide a consistent framework for understanding possible outcomes.
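
A reduced-form vector autoregression of the kind referred to here takes the textbook form (a general statement, not any specific central bank's model):

\[ y_t = c + A_1 y_{t-1} + \cdots + A_p y_{t-p} + \varepsilon_t \]

where \(y_t\) is a vector of observed variables (output, inflation, the policy interest rate, and so on), the coefficient matrices \(A_1, \dots, A_p\) are estimated purely from historical data, and \(\varepsilon_t\) is a vector of forecast errors. Nothing in the estimated coefficients identifies how agents would respond to a new policy rule, which is why such models extrapolate existing trends well but offer little guidance once the policy regime itself changes.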

4 A history of macroeconomic modelling in central banks

Maintaining the focus on core models, this section will present a schematic history of central bank models to draw out a significant shift in central bank modelling that has occurred in the past 20 years. Broadly speaking, there has been a move from more ad hoc and empirically-oriented models to more theoretically consistent and abstract models. As we will see in the next section, this change has had an impact on how central banks think with models, and how knowledge is produced.

The history of modelling within central banks begins in parallel to the history of academic macroeconomics. As a discipline, macroeconomics is widely seen as emerging with Keynes’ work in the 1930s, particularly his treatise The General Theory of Employment, Interest, and Money (Keynes 2007). With this text, Keynes separated out the study of macroeconomics from microeconomics, and research became focused on the aggregate entities that comprised the macroeconomy. Keynes’ own work, however, is difficult, and it is unlikely that it would have gained nearly as much traction had his successors not produced a simplified formal model of some of his insights (though, crucially, one that left out many of his key ideas and forced his system into a general equilibrium framework). Particularly significant here was the IS-LM model formulated by John Hicks in 1937, which gave Keynes’ ideas about the effectiveness of fiscal and monetary policy a clear, albeit simplified, framework.Footnote 5 At the same time, the first formalised models of the economy were being built by the Dutch modeller Jan Tinbergen (Tinbergen 1939). Based upon Keynes’ ideas and consisting of 32 stochastic equations, this original model was already oriented towards policy—having been designed to answer whether the Dutch central bank should leave the gold standard and devalue its currency (a decision was eventually taken to devalue, in line with the model’s simulated suggestions) (Taylor 2016, 2–3).
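
In its standard textbook rendering (a later formulation rather than Hicks' original 1937 notation), the IS-LM framework reduces to two equilibrium conditions:

\[ \text{IS:}\quad Y = C(Y - T) + I(r) + G, \qquad \text{LM:}\quad \frac{M}{P} = L(Y, r) \]

where goods-market equilibrium (IS) and money-market equilibrium (LM) jointly determine output \(Y\) and the interest rate \(r\), so that the effects of fiscal instruments (government spending \(G\) and taxes \(T\)) and monetary instruments (the money supply \(M\)) can be traced through to aggregate output within a single framework.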

These two origins—Keynes as the conceptual framework for understanding the macroeconomy, combined with the econometric work necessary to estimate values for the parameters of the model—would eventually merge into the macroeconomic consensus of the 1950s and guide central banks and academics during the so-called golden age of capitalism (Glyn et al. 1990). Yet, the calculations involved were laborious, and modelling remained peripheral to monetary policy-making decisions in the early decades. In the early 1960s, for example, the Bank of Canada would routinely send boxes of computer cards to a nearby computer centre to be processed overnight by teams of workers, only to have them returned in the morning with the solutions (Helliwell 2005, 30). Much like climate change modelling, macroeconomic modelling came into its own in the late 1960s with the increasing prevalence and power of computer technology (Pescatori and Zaman 2011; Evans 1999, 14). The Federal Reserve’s first major model—the MPSFootnote 6—began operating in 1970, and contained about 60 behavioural equations in its initial form (Brayton et al. 1997, 2). This model represented the economy through the IS-LM approach, and was explicitly designed to provide a conceptual platform for the government to stabilise the economy. It was, in other words, consciously designed as a tool to manipulate the economy. In line with the dominant theoretical and political perspective of the time, the model focused heavily on fiscal policy levers. Monetary policy played only a small role, effectively operating on short-term interest rates (Brayton et al. 1997, 3).

For both real-world and conceptual reasons, this consensus fell apart in the 1970s. The stagflation (i.e., high unemployment combined with high inflation) of the decade was supposed to be impossible within an IS-LM framework augmented by a Phillips curve, which posited a trade-off between unemployment and inflation. From the academic side, the consensus was attacked by a number of criticisms centred on the models’ macroeconomic assumptions. Crucial here was Robert Lucas’s critique, which took the models to task for their inability to change in response to changes in the policy environment (Lucas 1976). The models of the time derived their parameters from past data generated under a given policy regime, but such parameters were insufficient for predicting what would happen under alternative policy regimes. What was needed, argued Lucas, was an understanding of the deep foundations of the economy—which he argued were the optimising behaviour of individual agents—rather than the macroeconomic aggregate outcomes. If models were to be effective at policy analysis, they would have to be ‘microfounded’ on such individual agents.
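
A stylised way of putting the critique (my own rendering, not Lucas' example) uses an expectations-augmented Phillips curve:

\[ \pi_t = \pi^{e}_t - \gamma\,(u_t - u^{*}) \]

where inflation \(\pi_t\) depends on expected inflation \(\pi^{e}_t\) and on the gap between unemployment \(u_t\) and its ‘natural’ rate \(u^{*}\). Under a stable policy regime, expectations move little and the data trace out what looks like a dependable trade-off between inflation and unemployment; once policymakers attempt to exploit that trade-off, expectations adjust to the new regime and the estimated relationship breaks down. The reduced-form coefficients, in other words, are not invariant to changes in policy.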

Lucas’ alternative approach came to be known as the DSGE revolution, or the ‘rational expectations’ revolution—so-called for the assumption that the agents of the model would have model-consistent expectations about the future. In this approach, supply and demand always reach equilibrium in every market, and individual agents always optimise their plans given their constraints and the economic environment. Importantly for our purposes, the implementation of these models also ran into technological hurdles and required a series of simplifying assumptions to become computationally tractable. Yet, as the former President of the Federal Reserve Bank of Minneapolis notes, the fact that these models were simplified in such a way as to make government action ineffective appears “almost coincidental” (Kocherlakota 2010, 10–11). Technological constraints, in other words, justified a political choice about the value of free market economics in these models. Similar technological limits also meant that these models adopted ‘representative agents’. Unable to compute multiple heterogeneous agents across the economy, these models took single average agents to stand in for households, firms, and governments. This means that the interactions between households are, by design, absent from these models—a design choice which came to have major consequences during the 2007–2008 crises, and which also meant that distributional issues between agents were obscured.
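
Formally, the rational expectations assumption can be written as:

\[ x^{e}_{t+1} = E\bigl[x_{t+1} \mid \Omega_t\bigr] \]

that is, agents' subjective forecast of any variable \(x\) coincides with the mathematical expectation generated by the model itself, conditional on the information set \(\Omega_t\) available at time t. Agents are assumed, in effect, to know the model as well as the modeller does, a point that matters for the policy conclusions discussed below.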

Despite the rapid academic success of the DSGE revolution, surprisingly, it had little (immediate) effect on central bank modelling. As Andrea Pescatori and Saeed Zaman write,

The rational expectations revolution of the 1970s created a temporary disconnect between academia and central banks. Economists at universities started working on developing a modelling framework that did not violate the Lucas critique. Monetary policymakers meanwhile continued to work with existing large-scale models since they were the only available framework for policy analysis (Pescatori and Zaman 2011).

Therefore, a divergence emerged between the policy models and the academic models.Footnote 7 This divergence remains somewhat mysterious to this day, given the significance attached to Lucas’ critique for generating policy-relevant models. Without pretending to exhaust the explanation here (which would require going into the sociology of central bankers, and which future research aims to examine), we can suggest two initial reasons for this divergence. One reason was that the first generation of DSGE models was notable for concluding that government policy was impotent (Sargent and Wallace 1975; Kydland and Prescott 1982). These models posited that the economy was “fundamentally ungovernable” (Braun 2014, 63). Because rational expectations means that the agents of the model know the model as well as the modeller does, any change in government policy is immediately understood by those agents. For example, increased government spending today will be seen as implying increased taxes in the future, thereby negating its effects; an increase in the money supply today will be expected to mean increased inflation in the future, also negating the attempted stimulative effects. If the model predicts that policymaker actions are useless, it is no surprise that policymakers were not exactly clamouring to take such models on board. A second major reason for the delayed uptake by central banks was that the forecasting record of these models was poor, and policymakers preferred models that tracked the data more reliably, even at the cost of theoretical consistency (Edge and Gürkaynak 2010; Gürkaynak et al. 2013). As highly abstract and highly theoretical tools, these models did not offer much help in the crucial task of forecasting.
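
The fiscal example can be made concrete with a two-period government budget constraint (a deliberately minimal sketch, abstracting from initial debt):

\[ G_1 + \frac{G_2}{1+r} = T_1 + \frac{T_2}{1+r} \]

with the spending path \(G_1, G_2\) held fixed, a deficit-financed tax cut today (\(\Delta T_1 < 0\)) implies a future tax rise of \((1+r)\,|\Delta T_1|\). Forward-looking households, whose lifetime resources are unchanged, save rather than spend the windfall, and the intended stimulus is negated.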

Both these problems began to dissipate as economists continued to develop what became known as New Keynesian, or second-generation, DSGE models. These models (re)introduced price and wage rigidities into the system, which meant that equilibrium was no longer immediately attainable. As a result, monetary and fiscal policy could have significant effects on the real economy in the short run, even if much of this would disappear as the economy returned to equilibrium in the long run. The now standard Smets–Wouters model also minimised the second problem by introducing Bayesian estimation to DSGE modelling (Smets and Wouters 2003, 2007). With these new techniques, Smets and Wouters produced a DSGE model that was able to track the time series of data as well as more traditional models, making it an attractive candidate for forecasting and policy analysis. Their theory-centric nature also means that these models can be relatively easy to communicate with, as their conclusions are readily translatable into economic concepts (as opposed to the pattern-matching of more econometric models, which may or may not track established concepts) (Stockton 2003, 5). With their major hindrances overcome, and with the mathematics for DSGE modelling becoming more widely understood, the 2000s saw a major increase in their use by central banks.Footnote 8 While some central banks still use large-scale macroeconometric models of the Keynesian variety as their core model, this has been rapidly changing over the past two decades as new DSGE models take over (Benes et al. 2009; Burgess et al. 2013; Christoffel et al. 2008; Dorich et al. 2013; Smets and Wouters 2003).
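
The flavour of these second-generation models can be conveyed by the canonical three-equation New Keynesian core, a drastically simplified cousin of the Smets–Wouters model given here only to illustrate the structure:

\[ x_t = E_t x_{t+1} - \sigma^{-1}\bigl(i_t - E_t \pi_{t+1}\bigr) + \varepsilon^{x}_t \]
\[ \pi_t = \beta\, E_t \pi_{t+1} + \kappa\, x_t + \varepsilon^{\pi}_t \]
\[ i_t = \phi_{\pi}\, \pi_t + \phi_{x}\, x_t \]

Here a dynamic IS curve, a New Keynesian Phillips curve, and a monetary policy rule jointly determine the output gap \(x_t\), inflation \(\pi_t\), and the policy rate \(i_t\). Because prices are sticky (the slope \(\kappa\) is finite), movements in the policy rate have real short-run effects, while the expectations terms \(E_t(\cdot)\) remain model-consistent and so satisfy the demands of the Lucas critique.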

5 Thinking with models

Central bank modelling has, therefore, seen a major shift from structural econometric models to DSGE models. It is this particular shift which is of interest to us here, given the prominence accorded to DSGE approaches prior to the 2007–2008 financial crises. This penultimate section will proceed by emphasising the differences between classic structural macroeconometric modelling and DSGE modelling, and arguing that the latter have brought about a transformation in how reasoning with models occurs in central banks. In particular, they have shifted the nature of consistency within modelling and brought about a new type of obscurity within the reasoning process.

At the level of the reasoning process, models have a unique epistemic power compared to other knowledge-producing devices (such as surveys, intuitions, or expert knowledge). Analytically, we can posit two polar opposite situations. On one hand, models could be taken as the truth and their forecasts accepted without change. In this case, the entire reasoning process would be embodied within them and we could plausibly eliminate central bankers as decision makers. On the other hand, models could have no more epistemic power than any other piece of information. In this situation, they would circulate as objects of reasoning, but not constitute anything beyond another data point. Neither situation, though, accurately describes how models are engaged with in practice. Simply put, they act as reasoning devices that embody and inflect the inferences of monetary policy committees and outside data, and their outputs are accorded a certain authority because of this role. More than just information collected from surveys or measurements, models are a tool for tracing through the connections within an economy. At the same time, models are not mere deterministic outputs, as they contain the capacity to surprise their creators, as well as having subjective judgments included among their inputs.Footnote 9 Moreover, as we have seen, their results are subject to interpretation and negotiation amongst monetary policymakers. In the Federal Reserve, Bank of England, and Riksbank, a private core forecast is produced early in the decision-making process, which then provides the starting point for discussions about monetary policy (Stockton 2003, 5; Rosenberg 2008, 8; Fawcett et al. 2015). From there, depending on the subjective beliefs of the individual committee members, this forecast is interpreted in various ways and eventually transformed into a final, public forecast. Within central banks, therefore, models are neither truth-producers nor data points. Their epistemic power is greater than that of other pieces of information. They act as loose framing devices: they set the initial parameters of discussion and form an anchor point around which other interpretations circulate.

With the shift to DSGE models, this epistemic power changes the dynamics of the broader sociotechnical reasoning process. Older models in the Keynesian structural macroeconometric lineage were more ad hoc and disjointed. The economy was broken up into sectors, and the equations were often solved individually—i.e., the outcomes of one sector would not necessarily affect other sectors. This ad hoc and looser structure meant that they could more closely match the data, giving them the flexibility to adapt and be moulded to the realities of the economy. Modellers aimed to be consistent with the data, and new features could be added on to bring about results that tracked the world. However, as we saw earlier, these models were subject to the Lucas critique, and eventually left behind by academic economists. By contrast, the newer DSGE models place a premium on theoretical consistency and are solved simultaneously. In practice, the result is that these models impose a discipline of consistency on the conclusions. This occurs internally, with models unable to produce contradictory conclusions. Their position as core models also means consistency is imposed externally, as they provide a framework that helps to synthesise conclusions from different sectoral experts and different models. A core model, for instance, may be used to ensure that inflation predictions align with exchange rate predictions—or that the components of GDP match up to the aggregate prediction for GDP. The shift from IS-LM-type models to DSGE models has, therefore, meant a shift from a parallelism between the model and the world to internal consistency within the model.
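
The simplest example of such an external consistency check is the national accounting identity (not itself specific to DSGE models):

\[ Y = C + I + G + (X - M) \]

so that the sectoral experts' separate forecasts for consumption \(C\), investment \(I\), government spending \(G\), and net exports \(X - M\) must sum to the aggregate forecast for GDP, \(Y\), with the core model providing the framework within which any discrepancies are reconciled.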

This emphasis on consistency has the consequence that empirically relevant elements can fade into the background. Older models were happy to adapt to the data, but DSGE models impose strict conditions on how new phenomena can be included. With them, the rules of the model only allow for certain transformations to occur. Elements that are difficult to fit into their principles—e.g., the requirement to be microfounded—are simply left out, whereas with older models these could be added in an ad hoc, but more empirically accurate, way. We see here confirmation of Morgan’s thesis about the rules of models. They impose a particular language, and phenomena only receive technical recognition if they can be written in that language (which is not only mathematical, but also a specific set of conceptual assumptions). While earlier models allowed for much more flexibility in the construction of the model, newer DSGE types impose strict conditions. Economists may know why a model does not fit the data, but if this knowledge cannot be expressed in the language of the model (i.e., by finding the appropriate equations), then it cannot be included. This is not to deny an awareness of elements that are missing from the models, but these newer models make it more difficult to appreciate phenomena which cannot be written in their language or which do not meet their standards of internal consistency. The errors of the model must instead be adjusted for by relying on the judgment of the user, rather than through the model. Importantly, given the centrality of core models as anchor points, this has arguably made it more difficult for unmodellable phenomena to become salient in the sociotechnical reasoning process that guides monetary policy-making. Contradictory information from other sources must pass higher thresholds to become visible. If a central banker disagrees with the core model, they must articulate a justification (in the appropriate language) for their divergence. By contrast, if the model produces a forecast in line with their own intuitions and political leanings (e.g., whether they are dovish or hawkish about monetary policy), they are in good epistemic standing, with the model to back them.

Such obscuring can be insignificant during normal times, but it means that the build-up of crises can be missed—and this may help explain why warning signs about the economy were ignored prior to 2007–2008. Indeed, the case for blaming DSGE models for the inability of policymakers to foresee the crisis has some significant expert support. Willem Buiter, a founding member of the Bank of England’s Monetary Policy Committee, has said that “The Bank of England in 2007 faced the onset of the credit crunch with too much Robert Lucas, Michael Woodford and Robert Merton [all key figures in the DSGE revolution] in its intellectual cupboard.” (Buiter 2009) The current Bank of England Chief Economist, Andy Haldane, also points to the obscuring function of these models, locating the neglect of financial bubbles in DSGE modelling: “because these models were built on real-business-cycle foundations, financial factors (asset prices, money, and credit) played distinctly second fiddle if they played a role at all.” (Haldane 2012) These are two well-informed and experienced policymakers laying significant blame for the surprise of the crisis on the models that have come to dominate many central banks. Yet, others have pointed out that these models were not the only source of information for policymakers, and that, therefore, blame cannot lie solely with them (Wren-Lewis 2012). The argument of these critics is that central banks should have recognised the warning signs through other means, even if their core models prevented the key questions from even being asked. However, as we have just seen, the shift to DSGE models has introduced a new obscuring function, rendering unmodelled information increasingly less visible. Supporting (though far from conclusive) evidence for this position arises from the fact that rising and risky levels of credit never featured in the Bank of England’s Inflation Reports leading up to the crisis (as late as August 2007, the risks to the economy were deemed to be balanced) (‘Inflation Report, August 2007’ 2007). The alternative information that was available did not convince policymakers that a bubble was afoot or that financial interconnections might be a problem.Footnote 10 The emphasis on consistency obscured those elements that could not be articulated within the rules of the model.

6 Conclusion

From this discussion, we can see that models form an important factor in monetary policy decisions. Through forecasting, core models provide an anchor point for discussion, and in policy analysis, they provide an artificial laboratory. In general, core models impose a consistency on other pieces of data and take an important framing role in terms of how other information is perceived. With the shift to DSGE models, this has become focused on theoretical consistency to the detriment of unmodelled empirical data that now faces a more difficult task of becoming salient in discussions. While human policymakers undoubtedly are responsible for monetary decisions, this article has attempted to show that the entire sociotechnical system must be taken into account when trying to understand the image of the economy that underpins these decisions. The conceptual creation of the economy does not occur in any one place—either human or nonhuman—but is instead intricately distributed across an entire assemblage. With the rise of DSGE models in central banks since the 2000s, this thinking has been formatted into a particular structure that has privileged theoretical and internal consistency over external validity, and which has involved a series of simplifying assumptions that preclude analysis of key economic phenomena like finance, animal spirits, and involuntary unemployment.

Understanding the nature of monetary policy-making as part of a distributed cognitive system that carries out a sociotechnical process of reasoning is, therefore, important for gaining insight into the perils and flaws of central bank decision-making. Models are increasingly important devices in these institutions, and the crisis has led to a flurry of new research attempting to correct the errors of earlier DSGE models. Yet, the basic DSGE framework remains popular, and relatively little critical attention has been paid to how models are used within the decision-making process and the formation of images of the economy. This paper has attempted to provide a first approximation of how to conceive of this process and how to understand the role of models, while also illustrating how extended mind theory can help to understand events of political significance.