
Economics, Equilibrium Methods, and Multi-Scale Modeling

Jennifer Jhun
Original Research


In this paper, I draw a parallel between the stability of physical systems and that of economic ones, such as the US financial system. I argue that the use of equilibrium assumptions is central to the analysis of dynamic behavior for both kinds of systems, and that we ought to interpret such idealizing strategies as footholds for causal exploration and explanation. Our considerations suggest multi-scale modeling as a natural home for such reasoning strategies, which can provide a backdrop for the assessment and (hopefully) prevention of financial crises. Equilibrium assumptions are critical elements of the epistemic scaffolding that make up multi-scale models, which we should understand as a means of constructing explanations of dynamic behavior.

1 Preliminaries

A common criticism (even from economists) is that economics is not useful—nor, even more surprisingly, does it aim to be. Even our most sophisticated “formulas, or models, are only pale reflections of the real world, and sometimes they can be woefully misleading” (Freedman 2011, 76). Economist Steve Keen (2015) chastised the Fed for its failure to foresee the global financial crisis of the late 2000s, arguing that “the fact that [t]he Fed was caught completely unawares by the crisis should have led to some recognition that maybe, just maybe, its model of the economy was at fault.” He targeted in particular the use of equilibrium models—models that stipulate that the economy is in a state in which it has no tendency to change absent outside influences. At any given time, things are not at equilibrium, so our standard economic models are unrepresentative of actual behavior. In what sense do such methods help us understand goings-on in the economic world? Nor is economics the only science that posits equilibrium assumptions; what is their explanatory role when what we are really concerned about is the dynamic behavior of complex systems?

It is, of course, unsurprising that what occurs in the world often fails to align with our theoretical equilibrium calculations. To criticize economics simply on that basis is to react to an oversimplified picture of economics and of science in general. These misunderstandings are quite pervasive, even in fields such as physics. In fact, there are significant methodological parallels between equilibrium reasoning in well-established subfields of physics—in this paper I focus on thermodynamics—and in economics. By properly appreciating equilibrium reasoning strategies, we will ultimately be able to motivate them as central fixtures in a larger explanatory framework, one that departs from more traditional kinds of scientific explanation.

Section 2 first lays some groundwork by considering the 2007–2010 subprime mortgage crisis in more detail, drawing out the explanatory and policy problems the economist faces. What kind of phenomena are financial crises? Why do they occur, and how do we prevent them? It turns out that, like economics, thermodynamics also deals with complex systems that exhibit (we hope) stability properties. In Sect. 3, I correct the misunderstanding that equilibrium methods are somehow erroneous. Most philosophical assessments of scientific theory are inclined to treat every system as one that evolves autonomously. As a result, they overemphasize the virtue of predictive power, conceiving of models as representations of dynamical systems that generate descriptions of future states given initial and boundary conditions. Equilibrium methods in both thermodynamics and economics instead aim at identifying salient scale-sensitive causal variables. Section 4 further locates the failure to appreciate the appropriate role of equilibrium reasoning in the tendency to favor “bottom–up” model building—the idea that we can understand and model complex systems by building up from the details of their micro-constituents.

These attitudes are now on the wane. The persistent difficulty, though, is that in both economics and physics, when it comes to thinking about dynamic behavior, the relationship between the micro- and macro-scales of a system (and models thereof) is less than straightforward. However, I insist that the old-fashioned conception of equilibrium analysis can still be of use here; no need to throw the baby out with the bathwater. I introduce what I call “scale-linked equilibrium” states. Systems can thus be described as, for instance, stable at the macro-scale even though their micro-constituents are not at true equilibrium.

Systems in such states are ubiquitous. Consider amorphous solids like polymers, which are slowly but surely relaxing towards a final goo-like state. While the system as a whole may be at equilibrium for our intents and purposes (e.g. on a short enough time scale), it may contain defects and structural features that prevent the occupants of lower-order micro-scales from immediately settling into their own respective equilibrium states. These can even be desirable features, giving rise to the stability properties that are crucial in manufacturing contexts. Analogously, thinking about the economy in terms of scale-linked equilibrium states will allow us to make sense of an economy that is resilient despite inevitable imperfections at the micro-scale that might, for instance, be due to heterogeneity.

Dealing with non-equilibrium, non-ideal behavior will therefore look different from what most philosophers might expect, requiring some care. It requires a framework to accommodate such interesting, and important, structural details, yet we need not discard our traditional modeling tools. The multi-scale framework, already put to use in other fields such as engineering and biology, is an apparatus in which we can usefully embed equilibrium methods.

2 The Last Recession

Suppose I wish to buy a house and so need to take out a mortgage loan. I go to my local bank, which decides whether or not to lend to me based on its expectation that I will pay back the amount (plus interest) reliably over a fixed period of time, compared to the income the property would earn as a rental over that same period. We would expect the bank to refuse me a mortgage if the former is not expected to be at least as profitable as the latter. If I receive a mortgage loan, I agree to pay back the principal of the loan at some interest rate. However, I may default on the loan or pay it off early—meaning that the bank might not receive the entire intended loan amount plus interest back.
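The bank's comparison above can be sketched as a simple expected-value calculation. All the figures below (payment sizes, repayment probability, discount rate, horizon) are hypothetical, chosen purely for illustration:

```python
def present_value(cashflow, rate, years):
    """Discounted value of a constant annual cashflow received over `years`."""
    return sum(cashflow / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical figures: a 30-year loan paying the bank 12,000 a year,
# with a 97% chance each year of actually being paid, versus the
# property earning 11,000 a year as a rental. Discount both at 3%.
rate, years = 0.03, 30
expected_mortgage = present_value(12_000 * 0.97, rate, years)
expected_rental = present_value(11_000, rate, years)

# The bank lends only if the loan is at least as profitable as renting.
grant_loan = expected_mortgage >= expected_rental
```

Under these made-up numbers the loan edges out the rental, so the loan is granted; lower the repayment probability and the comparison flips.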

In order to keep loaning out money, this bank needs to replenish its funds. It does so by selling this mortgage to a bigger bank in return for a fixed amount of money, ridding itself of the risk that comes with the mortgage loan itself and enabling itself to make further loans.

Instead of refusing mortgages to homebuyers who were not expected to repay, a bank could simply unload the risk of that loan onto another bank. Prior to the crisis in the 2000s, housing prices were expected to continue increasing; in the worst-case scenario, a bank could just repossess a borrower’s house. Continuously bought and sold, loans traversed an upward path to bigger and bigger banks. Investors with more money available to invest than things to invest in turned to mortgage-backed securities.1 Mortgage-backed securities are simply bundles of shares of mortgage loans, and demand for them exploded. As a result, mortgages were handed out to borrowers with shaky income, histories of debt, and bad credit.2

Moving the risk upstream meant that the individual bank selling the loan rid itself of the risk involved, but it also created incentives to lie so that mortgage bonds could continue being generated to meet demand. Leading up to the crisis, those loans targeting high-risk borrowers (so-called “toxic loans”) often involved fabricated information, such as how much income the borrower earned. At the level of individual mortgages, however, everything looked safe, since the house served as collateral even if the borrower defaulted.

Mortgage loans were chopped up and bundled together in various ways as they made their way upwards. Individually risky loans found themselves in collateralized debt obligations (CDOs) given high ratings by credit rating agencies using outdated housing data. From all appearances, these risk-laden packages of mortgage loans seemed on the surface like good bets. A CDO is itself divided into “tranches,” which vary in their risk profiles and are ranked accordingly: senior, mezzanine, and equity (unranked). This ranking determines who gets interest and principal payments first and who bears the brunt of default losses. The more junior tranches are affected by losses first, for instance, while senior tranches, which hold most of the debt, tend to retain their capital value unless there is a widespread crisis. An investor picks which tranche to buy into depending on how they assess risk and reward—investors seeking higher risk and higher reward bought into the junior tranches, while those seeking low risk, and content with low reward, bought into the senior ones.3
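The ranking just described can be sketched as a loss waterfall: default losses eat through the equity tranche first, then the mezzanine, and only then touch the senior tranche. The tranche sizes and the loss figure below are hypothetical:

```python
def allocate_losses(tranches, loss):
    """Apply a total default loss to tranches, junior-most first.

    `tranches` is a list of (name, principal) ordered senior -> junior.
    Returns the remaining principal of each tranche.
    """
    remaining = {}
    for name, principal in reversed(tranches):  # junior absorbs losses first
        hit = min(principal, loss)
        remaining[name] = principal - hit
        loss -= hit
    return remaining

# Hypothetical 100m pool: 70 senior / 20 mezzanine / 10 equity.
tranches = [("senior", 70.0), ("mezzanine", 20.0), ("equity", 10.0)]

# A 15m loss wipes out equity and dents mezzanine; senior is untouched.
result = allocate_losses(tranches, 15.0)
```

This is why, absent a widespread crisis, the senior tranche retains its capital value: losses have to exhaust the junior tranches before reaching it.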

The intuition is that the bundling is meant to be a buffer against loss by distributing risk, both default and prepayment risk.4 A mortgage might come with an adjustable rate, so that one pays a low interest rate for the first few years of the loan but a high one afterwards. The borrower is likely to pay off the loan in those first few years, but less likely in the later ones. If I prefer low returns with low risk over high returns with high risk, I prefer the tranche that contains the less risky bit of the loan. A tranche can even chop up an individual adjustable-rate mortgage like that to separate the riskier from the less risky portions.

Tranches became incredibly complex. Financial intermediaries such as hedge funds often mediated between buyers and financial products. Consequently, most individuals probably did not know what they were buying into. Risky assets were being bought and sold by individuals who had incomplete knowledge of how risky they were. Mortgage loans, tranches in CDOs, etc. were also generally treated as behaving independently of one another, so correlations were often not modeled properly, if at all. As one economist put it, one major problem with the models being used was that “they usually do not incorporate the financial sector. This is all the more surprising since they were used by Central Banks and advisers to policy makers in monetary policy” (Garcia 2011). Measuring credit correlation was, in any case, a terribly difficult and tedious enterprise. Even the information that would have been necessary for it was often limited in availability (corporate balance sheets are only published quarterly, for instance).

Eventually, borrowers began to default as the cost of their homes outstripped their incomes (which were not rising). Homes flooded the market, and as the supply of houses went up, their value went down. Panic ensued: insurance policies backing credit defaults could not make good on their guarantees once homeowners started defaulting on their loans, and individuals betting against the subprime mortgage market added further pressure on the market.

Mortgage bundling, which was meant to spread out the risk of defective loans, merely masked those defects; bundling reduces overall risk only if the riskiness of the loans being bundled is independent across loans, which is unlikely. So, despite the high ratings given to tranches of mortgage-backed securities, these interrelated risks stayed trapped in these packages, affecting their behavior in unanticipated ways. It was not particular individual loans, but rather the organization of the mortgage market, that contributed to the crisis. Ultimately, the Federal Reserve had to bail out these big banks—by injecting liquidity into the system.
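The point about independence can be made concrete with a small simulation. When defaults are independent, the chance that a large fraction of a bundle defaults at once is vanishingly small; introduce a common factor (say, falling house prices that raise every loan's default probability together) and that chance jumps. The default probabilities and the one-factor shock model below are illustrative assumptions, not an estimate of the actual market:

```python
import random

random.seed(0)

def simulate(n_loans=100, p_default=0.05, common_shock=0.0, trials=5_000):
    """Fraction of trials in which more than 20% of the bundle defaults.

    With probability `common_shock`, a systemic event raises every
    loan's default probability at once (to 0.5, an arbitrary choice).
    """
    bad = 0
    for _ in range(trials):
        p = 0.5 if random.random() < common_shock else p_default
        defaults = sum(random.random() < p for _ in range(n_loans))
        if defaults > 0.2 * n_loans:
            bad += 1
    return bad / trials

independent = simulate(common_shock=0.0)   # essentially never happens
correlated = simulate(common_shock=0.1)    # happens in roughly 10% of trials
```

The bundle's high rating is justified in the first scenario and badly misleading in the second, even though each individual loan looks identical in both.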

In this case (though not generally), securitization—in particular competitive securitization—was one of the prime causes of the mortgage crisis, giving rise to informational asymmetries and principal–agent problems. Securitization is a process by which assets, like debts, get pooled together, so that third parties can buy the cash flows resulting from interest and principal payments. But not all agents in the mortgage market had incentives to be honest with one another about the product (e.g. bad mortgage bundles).5

Given these sorts of market structures and imperfections, it’s not clear what the best way to model the economy is. Building a model from the bottom–up would be unsatisfactory, since the amount of detail we would have to procure would be impossible to obtain (and computationally intractable to boot), and it is unlikely we would be able to straightforwardly infer any interesting structural information from details about individual agents. And yet working from the top-down would miss some of those crucial details entirely—a quantity such as aggregate expenditure share in housing is clearly silent on the subtler imperfections at other scales (e.g. information asymmetries between particular agents). Macroeconomic models typically use concepts such as the “representative household” or the “representative firm” that stipulate equilibrium assumptions in addition to homogeneity assumptions. So it might seem that our current modeling strategies do not offer ways of incorporating information at and across different scales. But we need not be this pessimistic; a closer inspection of such idealizing assumptions reveals a background methodology aimed at uncovering causal structure, and this methodology can indeed complement an analysis that attends to the multi-scale aspects of some system of interest.

3 Equilibrium Methods in Thermodynamic Reasoning

In this section, we will investigate the ways in which equilibrium assumptions are useful. But first consider why one might think that economics can fail to be informative about the actual economy. Some philosophers of science are inclined to think that in the end, questions about system dynamics boil down to dynamic initial value problems.6 So the job of science is to discover the laws from which we can derive the evolution of a system as it autonomously unfolds, given initial and boundary conditions. Economic “laws” seemingly fall short of this.7

But not even all well-established disciplines of physics satisfy this demand, and here we will uncover the ways in which the laws of thermodynamics and the laws of economics can be similarly interpreted. Thermodynamics, which is central to dealing with everyday materials in engineering contexts, does not issue laws that describe and predict the future behavior of systems either. Jhun (2018) argues that thermodynamics uses some of the same idealizing strategies that economics does. For instance, positing quasi-static processes induced by an external controller driving a system from one equilibrium state to another is a crucial reasoning tool. Quasi-static processes are infinitely slow, so slow that the system in question is arbitrarily close to equilibrium at any given time. So, we can treat it at any given time as if it were at equilibrium and the process it traverses as a sequence of equilibrium states.8 To use an evocative image from Dupree (2014), we can think of lowering the temperature of water in the following way: “suppos[ing] that the ice cubes are infinitesimally small, and after each one melts [after putting it in a cup of water] we measure the thermodynamic parameters of the system. This would create a series of equilibrium states that seem to approximate a continuous process.” (64).

The assumption of quasi-static processes is implicit in formulations of even the fundamental laws of thermodynamics. Consider the Second Law of Thermodynamics: the entropy of an isolated system never decreases. First, the change in entropy is the change in heat divided by temperature (dS = dQ/T). Yet talking about a change in entropy, i.e. interpreting the differential dS (as well as the change in heat dQ), requires assuming that the processes in question are quasi-static—that they progress arbitrarily close to equilibrium (like the cup with the infinitesimally small ice cubes melting) and so do not produce waste.9 Second, this law puts constraints on real-world phenomena—for instance, if I want to construct a heat engine, the amount of heat I put into the engine places a limit on how much work that engine can do. Take the following simple example: imagine a piston cylinder full of compressed ideal gas, sitting in a warm water bath. We then let the gas expand, pushing against the cylinder. Since it is sitting in a heat reservoir, it remains at the same temperature. During this time, the piston cylinder is undergoing an isothermal expansion. Then suppose we remove it, thermally isolate it from its surroundings, and let the gas continue to expand. Now it undergoes an adiabatic expansion, exchanging no heat with its environment.

The illustration below traces out the trajectory of pressure–volume changes of a Carnot engine, a theoretical construct representing the most efficient heat engine possible. The processes I have just described compose half of the cycle; the other half returns the piston cylinder to its original state. The first leg, the isothermal expansion, is represented by the line from point 1 to 2. The second leg, the adiabatic expansion, is represented by the line from point 2 to 3.

Here we can see the pure causal relation between pressure and volume—and only pressure and volume—of our piston cylinder. Such transitions posit an external controller to ensure that the processes are quasi-static, and while those processes cannot be reproduced in the real world, they do tell us the limit for how pairs of variables will in principle change in tandem with one another.10 The Carnot engine generates no additional entropy. Real engines do, because frictional effects dissipate some of the energy into waste heat rather than converting it to work—a feature we idealize away in the Carnot engine.
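The two quasi-static legs can be written down explicitly for an ideal gas: along the isotherm PV = nRT is constant, while along the adiabat PV^γ (equivalently TV^(γ−1)) is constant. The sketch below, with arbitrary starting values for one mole of a monatomic gas, checks the textbook bookkeeping: the isothermal leg converts heat to work with zero total entropy change (gas plus reservoir), and the adiabatic leg cools the gas without exchanging any heat:

```python
import math

R = 8.314          # gas constant, J/(mol K)
n = 1.0            # moles (arbitrary)
gamma = 5 / 3      # heat capacity ratio for a monatomic ideal gas

# Leg 1: isothermal expansion at T_hot from volume V1 to V2.
T_hot, V1, V2 = 400.0, 1.0, 2.0
work_iso = n * R * T_hot * math.log(V2 / V1)  # work done by the gas
heat_in = work_iso                            # internal energy unchanged at fixed T
dS_gas = heat_in / T_hot                      # entropy gained by the gas
dS_reservoir = -heat_in / T_hot               # entropy lost by the bath
total_dS = dS_gas + dS_reservoir              # zero: quasi-static, no waste

# Leg 2: adiabatic expansion from V2 to V3; no heat exchanged, so the
# temperature falls according to T * V**(gamma - 1) = const.
V3 = 4.0
T_cold = T_hot * (V2 / V3) ** (gamma - 1)
```

A real, non-quasi-static expansion would make `total_dS` strictly positive; that surplus is exactly the waste the Carnot idealization excludes.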

Laws in economics that rely on equilibrium assumptions are analogous to state equations in thermodynamics, which track inextricably linked parameters such as temperature and volume. In economics, such relations hold ceteris paribus, i.e. we are using idealizing strategies that “hold all other things constant” as we investigate how changes in one causal factor affect another. Thus, we can see that there is a robust analogy between equilibrium reasoning in economics and the drawing of these tightly controlled thermodynamic trajectories. First, the equilibrium concept in economics is also closely connected to the notion of efficiency: equilibrium states represent efficient allocations of goods, where no one could be made better off without making another worse off. For a single-commodity economy, we have reached equilibrium if I produce just the right amount of product at just the right price, such that this is the amount demanded by consumers at the highest price they are willing to pay (i.e. the intersection of the supply and demand curves for the good). Second, static equilibrium states anchor our understanding of an economy’s dynamic behavior. This is apparent in comparative statics exercises, which articulate theoretical, but generally empirically impossible, causal processes within the economic realm. To assess the effect of downsizing my firm in the near future, I would see how the equilibrium price and quantity of my good would change by theoretically subjecting an economy to a shock and analyzing its effects, without worrying about feedback or the surrounding economy. The new state represents the system after having settled down from the disturbance. Drawing the line representing the isothermal or the adiabatic process is just like tracing out the price–quantity changes we can investigate in comparative statics exercises. And, like quasi-static reasoning, this line of thought allows us to formulate the laws of supply and demand, e.g. “An increase in the supply of a commodity will (ceteris paribus) lead to a decrease in its price.” The ceteris paribus clauses allow us to causally isolate sections of the economy in order to characterize the characteristic variables—call these salient causal variables—relevant to a particular market (or markets) over some time period of interest.11,12
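A comparative statics exercise of this kind can be sketched with linear supply and demand curves (the coefficients below are arbitrary): solve for the equilibrium, shift the supply curve outward while holding everything else fixed, and solve again. The new equilibrium has a lower price and a higher quantity, just as the law states:

```python
def equilibrium(a, b, c, d):
    """Intersection of linear demand Qd = a - b*p and supply Qs = c + d*p."""
    p = (a - c) / (b + d)
    q = a - b * p
    return p, q

# Arbitrary coefficients for a single-good market.
p0, q0 = equilibrium(a=100, b=2, c=10, d=4)

# Comparative statics: an outward supply shift (c: 10 -> 30),
# ceteris paribus -- demand and all other coefficients held constant.
p1, q1 = equilibrium(a=100, b=2, c=30, d=4)

# Price falls and quantity rises between the two equilibrium states.
```

Note what the exercise does and does not say: it compares two settled states, and is silent about the actual dynamic path the market takes between them, exactly as with the quasi-static legs of the Carnot cycle.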

Like thermodynamic systems, real economic systems do not behave in accordance with idealized behaviors. Departures from ideal behavior, i.e. away from equilibrium behavior, are typically due to structural impediments that produce dissipative effects. In mechanical systems, what dissipates is energy (e.g. waste heat); economic systems, as we saw in Sect. 2, contain and propagate risk. Thus, there is room for equilibrium reasoning in economics to play a role analogous to that of thermodynamic reasoning in the Carnot cycle. It is a methodology for delineating the scope of relevant variables—and for identifying the limiting relations between those variables that, though empirically unrealizable, set theoretical boundaries on what can be done empirically. We can assess our actual circumstances against this benchmark, and we may wish to do so at more than one scale. To ask, “if both thermodynamic and economic idealizations via equilibrium strategies are doing the same thing, why is one (thermodynamics) seemingly more successful in practice than the other (economics)?” is misleading; characterizing the relation between pressure and volume during a quasi-static process does not tell us any more about real engines than supply–demand models tell us about the real economy. But there are plenty of circumstances in which we can use these ideal behaviors as a proxy for real behavior; it is the job of the scientist to specify what those circumstances are. It is not economics that has lost its way; rather, we have been looking at things from the wrong perspective.

4 Scale-Linked Behaviors

Given the centrality of and similarities between quasi-static reasoning in thermodynamics and comparative static reasoning in economics, the scientist must see the system as one that she can theoretically control—a perspective from which we think about which variables change in response to changes in other variables.

But departures from the ideal benchmark, as I mentioned, are often due to structural imperfections at scales other than the most macro. How to incorporate this? There is another tempting, but mistaken tendency that is easy to give into: the tendency towards reductionism. Suppose we’re interested in the behavior of an ordinary object moving through space such as a block sliding down an incline. All of these objects are conglomerates of smaller things—at bottom, particles. One might think that the most fundamental scientific theory should have the resources to describe those smallest constituents. For a system that consists of a bunch of particles, then, all we need is the position and velocity of each particle (plus the laws of mechanics). So analogously, for complicated economic problems, one might think that we just need to add in more details, such as individual details for our heterogeneous agents, in addition to our usual laws of supply and demand.

Though now less popular, this view betrays a proclivity for thinking that once we have accumulated all the details at a micro-level, we will be able to infer or deduce information about what goes on at the macro-level. Perhaps, then, the goal of science is to build a theory about the smallest constituents of a system such that we can deductively infer what kinds of higher-level phenomena will occur given all the information about those micro-constituents. So a bottom–up methodology would aim to achieve a picture of the whole economy first by gleaning information about its constituents and then somehow putting all that information together.13 It’s common, though unwarranted, to assume that such a strategy is possible in the physical sciences, i.e. though we do not need to specify the individual constituents of a body in order to determine its behavior, in principle we could.14

A number of authors have already noted that for physical systems, the bottom–up attitude will give patently wrong answers when it comes to most of the questions we ask about the real world. For instance, one everyday kind of problem we might worry about is how to manufacture materials in order to construct sturdy buildings that can sustain some wear and tear. Consider a steel beam:

If we engaged in a purely bottom–up lattice view about steel, paying attention only to the structure for the pure crystal lattice, then we would get completely wrong estimates for its total energy, for its average density, and for its elastic properties. The relevant Hamiltonians require terms that simply do not appear at the smallest scales. (Batterman 2013, 268).

Knowing all the details about the microstructure of the steel beam won’t help us at all. Looking at just the macro-scale won’t do it either; a steel beam looks fairly homogeneous from that perspective, too. Even in this fairly mundane case, it turns out that structural features crucial to steel’s ability to withstand shocks are located neither at the micro- nor the macro-scale, but at meso-scales in between. Giving rise to these features are dislocations: structural imperfections, “non-equilibrium defects,” that contain trapped energy even though the overall structure of the material they appear in may be at equilibrium. They form in a material during deformation or via solidification (when a material crystallizes). At the microscopic level, they appear as small irregularities that will move under external pressures. Zoom out a little more, to around 100 nm for instance, and you are likely to see that steel is actually crystalline in structure; at that scale the steel grain is visible (grain boundaries are planar defects, as opposed to line defects like dislocations).

When external pressure is applied, the dislocations move through the structure they reside in. Shear occurs in a material when molecular bonds gradually break and reform along these irregularities, finally resulting in a slightly deformed state.

Grain boundaries tend to stop the movement of dislocations. This is certainly preferable to brittle behavior, which a perfect crystalline structure would exhibit. The property we are interested in, elasticity, is nonetheless epistemically inaccessible at the molecular level. Bulk materials will come to rest at a different equilibrium than would be predicted had we simply used bottom–up calculations on the information gleaned from the molecular crystalline lattice structure alone. What counts as equilibrium at one scale may not be at all what counts as equilibrium at another scale. The characterization of the equilibrium state itself does not say anything about the dynamical behavior of an actual thermodynamic system, for the same reasons a comparative statics exercise fails to tell us what exactly will occur to a market given a disturbance. All that an equilibrium analysis elucidates are in-principle relationships between variables of interest, and those variables are indexed to a specific scale. To put it in Wilson’s (2018) terms, the dominant behavior of a system—behavior that we think of as salient, like elasticity—is sensitive to scale too.15 These are states that we treat as scale-linked equilibrium states—states at particular scales that can be treated as stable.

Information across scales may be impossible to integrate into an axiomatic, deductive system. If one asks, “What do all the individual particles of a stable structure look like?” there’s a good chance that the question is not a productive one to ask in the first place, even if we ignore problems of tractability. The same difficulty arises when one asks: “what do all the individual actors in a stable economy look like?” Even the most elementary concepts in microeconomics fail to yield a straightforward transition from individuals to aggregates.

For example, the law of consumer demand tells us that an increase in the price of a commodity will (ceteris paribus) lead to a decrease in the demand for that commodity. Intuitively, the more something costs, the less willing I am to buy it. One might try to investigate the applicability of the law to a group of individuals by aggregating all the individual demand curves of a population together. But significant restrictions must be in place for a smooth transition from individuals to the aggregate, and even then the end result is not a useful one. The 1970s Sonnenschein–Mantel–Debreu (SMD) theorem is the formalization of this difficulty.16 It states that the rationality assumptions that apply to individual demand functions do not carry over to the aggregate, so that familiar utility-maximization assumptions on individuals do not lead to any testable, interesting properties of the population as a whole.17 That is, there is no general result on what the aggregate demand function looks like. If the micro-details cannot imply that the aggregate has a desirable property like stability, then what does? This suggests two possibilities: that we require non-standard assumptions on individual agents, or that more macroscopic properties should be introduced from the get-go.18,19
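One facet of the aggregation problem can be shown with a toy example; this is a sketch of the difficulty, not the SMD theorem itself, and the preference parameters are hypothetical. Take two Cobb–Douglas consumers, each individually rational (each spends a fixed share of income on the good). Their aggregate demand depends on how income is distributed between them, so there is no aggregate demand function of total income alone:

```python
def cobb_douglas_demand(alpha, income, price):
    """Individually rational demand for good x: spend share `alpha` on x."""
    return alpha * income / price

def aggregate_demand(incomes, alphas, price):
    """Aggregate demand is just the sum of individual demands."""
    return sum(cobb_douglas_demand(a, m, price)
               for a, m in zip(alphas, incomes))

alphas = (0.8, 0.2)   # two consumers with different (fixed) spending shares
price = 2.0

# Same total income (100), two different distributions of it.
d1 = aggregate_demand((100, 0), alphas, price)   # all income to consumer 1
d2 = aggregate_demand((0, 100), alphas, price)   # all income to consumer 2
# d1 != d2: the aggregate behaves like no single "representative" consumer.
```

Holding total income fixed while redistributing it changes aggregate demand, which is one reason individual rationality assumptions fail to pin down any useful structure at the aggregate level.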

Multi-scale modeling is a strategy that combines what is intuitively appealing about both of these misguided attitudes without being ad hoc. The relevant features of thermodynamic thinking are often masked by an excessive focus on cases that involve only simple (often ideal) gases (in what are called “standard” conditions). For other, more complicated objects, however, such as the solids we encounter all the time, we require a framework that functions as a bit of corrective architecture, buttressing strategies like positing quasi-static processes (and equilibrium states) so that they count as useful modes of reasoning. The multi-scale framework can function as this corrective architecture; equilibrium methods are a way to coherently articulate salient scale-dependent information without prioritizing any scale over another.

Complex systems with interacting (heterogeneous) parts behave differently from what is theoretically conjectured, owing to frictional forces.20 As mentioned, we do not expect our thermodynamic calculations to hold in the empirical world. One might think the obvious way to improve a model’s predictive accuracy and explanatory power is to treat idealized models as useful base cases, from which we could build up theories incorporating those features initially neglected, like frictional forces. That is, we just make our way to a more comprehensive and realistic theory by adding those extra details. But this is to give in to the two received attitudes I rejected earlier—the first was that we had to think about systems as evolving autonomously rather than as ones that we could control, and the second was the bottom–up approach to understanding both physical and economic systems. On my view, scientific laws do not necessarily just state the ways in which the world actually behaves. They might express equilibrium benchmark behaviors, pure causal relationships between variables of interest against which we assess real behavior. But scientific explanation and understanding do not merely consist in being able to state how actual behavior departs from this benchmark. We need to be able to tell when we can (and cannot) treat actual behavior as if it were equilibrium behavior (i.e. at which scales we can apply such laws), and to tell a story about how dominant behaviors at one scale can affect behavior at another. While this will concern obstructions to equilibrium behavior like frictions, it is not merely the claim that there are frictions that will upset smooth behavior.

Frictions and structural imperfections may seem like theoretical inconveniences, but in practical engineering contexts they are both useful and necessary. Manufacturing materials requires us to consider the behavior of objects at different scales, but it does not presuppose some privileged scale as fundamental. As I noted earlier, the presence of dislocations (and grain boundaries) at the micro- (and meso-) scales is what allows materials to bear stress and strain. We observe a similar lesson in economics: not all the behavior we are interested in is at the level of individual agents, nor is it derivable from that level. The strategies we use to intervene on and stabilize our economy—namely, via fiscal and monetary policy—require that we understand scale-relative behaviors and frictions in the way that an engineer understands and successfully manipulates bulk materials.

When we manufacture steel beams for construction, what we aim for in part is the production of a material that resides in a scale-linked equilibrium state—in particular, one that exhibits stability at the macro-scale. And despite being non-equilibrium features, dislocations are to an extent desirable defects in steel beams; we can still reliably handle them and even bring them about. We purposefully inject such frozen order into the structure of materials at the meso-scale so that shocks will disperse differently throughout the micro-scale. In metals, we can increase the ability to bend by heating them to very high temperatures—a process called “annealing,” by which we manipulate the grain structure—and then “freezing” that structure in place by quenching them in cold water. The resultant material is both manipulable and stable, appearing, at least at the macro-scale, to remain as it is for long periods of time and to display predictable behavior under some range of external shocks.21

There is an analogous phenomenon in economics that gets called “friction.” Frictions can help stop financial contagion by slowing it down; most of this has to occur at the institutional level of banks, which we can treat as a meso-scale phenomenon in between the micro-scale (individual interactions) and the macro-scale.22 Now, banks can and do fail; we would like to prevent failure at the macro-scale that affects the whole economy at large due to spillover effects. A bank fails due to insolvency: it is unable to hold up its end of the bargain with its depositors and creditors. While usually a fairly liquid asset (easily exchanged on the market without too much change in price), money becomes increasingly illiquid during a bank panic as the bank becomes less and less able to give out the necessary funds to its depositors.

But this kind of failure can sometimes be truncated and absorbed, so that cascading effects do not reach individual customers. In 2008, Washington Mutual failed—the “largest bank failure in US History” (Sidel et al. 2008). However, JP Morgan Chase stepped in to buy all of its banking operations in exchange for a payment to the Federal Deposit Insurance Corporation (FDIC), thus absorbing the shock. All Washington Mutual customers then effectively became JP Morgan Chase customers, but to individual customers the change was only nominal. In fact, as in the materials sciences, it is now common practice to conduct stress tests on the economy. When the US Federal Reserve conducts stress tests, one thing it checks is whether the banking sector can absorb large shocks and yet continue operating normally. If the banking sector is able to withstand economic shocks, disastrous effects on individuals (and other regions of the economy at large) can be curtailed.

The individuals at the micro-scale of the economy are those engaging in exchange with one another—and some of those exchanges might involve giving out a bad mortgage. Analogously, in a steel beam, (most) individual atoms in the crystal lattice will settle into some kind of local equilibrium state, though sometimes an atom will be out of kilter with its neighbors, much as an exchange may not actually have been what it would have been at equilibrium (e.g. at a price that all parties would agree to given full information). And despite the fact that steel’s granular structure helps stop the movement of traveling dislocations, dislocations can pile up, and the grain boundary will eventually break down under stress. Meso-scale institutions like the banking system can stop risk from traveling too far up (or down) the ladder, but not for an unlimited amount of risk. A lack of meso-scale structure, however, makes for a precarious economy that is unlikely to withstand shocks.

A number of factors contribute to the ductility of steel, including, for instance, whether the micro-scale is homogeneous or heterogeneous, how big the grain boundaries are, etc. Some of these features we can control; for instance, we can manipulate grain size via heating and cooling. Analogously, while we probably can’t control all the details at the level of individual interactions, we can control for ease of liquidity in a financial system by ensuring that banks have enough capital (or by stopping spread by isolating a particularly “contagious” sector of the banking industry, or any number of other things). These sorts of controls are at the institutional level—at the meso-scale, somewhere in between the micro and the macro. Such measures are what help prevent economic collapse.
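The shock-absorption role of meso-scale capital buffers can be illustrated with a deliberately stylized numerical sketch. This is not a model of any actual banking system: the chain structure, the loss figures, and the `propagate` function are all hypothetical, invented only to make vivid the claim that adequate buffers can truncate a cascade while thin capitalization lets it pass through.

```python
# A stylized sketch, not a model of any real banking system: a loss
# travels down a chain of banks, and each bank absorbs as much of it
# as its (hypothetical) capital buffer allows before passing on the rest.

def propagate(shock, capital_buffers):
    """Return (loss absorbed by each bank, residual reaching the end)."""
    losses = []
    for buffer in capital_buffers:
        absorbed = min(shock, buffer)
        losses.append(absorbed)
        shock -= absorbed
    return losses, shock

# Adequate meso-scale buffers truncate the cascade before it reaches
# the individuals at the end of the chain:
losses, residual = propagate(5.0, [2.0, 2.0, 2.0])
assert residual == 0.0  # shock fully absorbed at the meso-scale

# The same shock passes through a thinly capitalized chain:
losses, residual = propagate(5.0, [0.5, 0.5, 0.5])
assert residual == 3.5  # most of the loss reaches the macro-scale
```

The point of the sketch is structural: what matters for stability is not any micro-level detail of the individual losses, but the aggregate buffer available at the intermediate scale.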

5 Conclusion

I have argued that equilibrium methods in economics are rationalizable, and importantly useful, idealizing strategies that have analogous uses in other sciences where they are not subjected to criticism (in particular, in thermodynamics). Characterizing equilibrium states helps us characterize the system of interest by delineating causal relationships between salient, scale-dependent variables. Philosophers of science have underappreciated equilibrium reasoning because they often assume that a final scientific theory should contain laws of nature that describe the most micro-scale entities of a system; according to this picture, from those micro-scale details we can then determine the behavior of an aggregate over time. But this misrepresents actual scientific practice, both physical and social, because scientists often do not privilege the most micro-scale, nor do they aim to reduce analysis to talk of just one unique scale. The equilibrium method is most at home in a multi-scale framework, wherein we can still gain an understanding of the dynamics of systems even though we may use static methods with the aim of designing systems that are in scale-linked equilibrium. This framework is thus especially useful for investigating whether a system has desirable properties like stability, making it apt for analyses of financial crises.


  1.

    The other option would have been to invest in US treasury bonds, but investors wanted high-return low-risk investments and Greenspan wasn’t keen on raising the federal interest rate at that time.

  2.

    For more details, see Davidson and Blumberg (2008).

  3.

    See Hamilton (2009) for a summary of tranches.

  4.

    For a discussion of risk distribution and absorption, see DiMartino and Duca (2007).

  5.

    In the US economy, government-sponsored enterprises (GSEs) are the largest of the institutions that set the underwriting standards. Stable economies in Europe have a high market concentration of securitizers—that is, a small number of firms control the industry overall—and fairly rigid underwriting practices. That means these institutions were less able to offer things like subprime mortgages.

  6.

    There are some, however, who don’t think this—Earman et al. (2002) think that differential equations that describe system evolution are not themselves laws. On the other hand, they also admit classical thermodynamics as delivering real laws, because they claim that thermodynamics reduces to statistical mechanics.

  7.

    Extensive literature on the status of ceteris paribus laws is testament to this (see Earman and Roberts 1999; Fodor 1974; Fodor 1991; Pietroski and Rey 1995; Schurz 2001; Woodward 2003 for a non-exhaustive discussion). In fact, the more general field of literature simply devoted to asking the question of whether or not there are social science laws is enough to indicate that there seems to be a well-accepted demarcation between laws of physics and other fields.

  8.

    This makes our analysis convenient, because it requires using fewer state variables to describe our system when it’s at equilibrium.

  9.

    More specifically, we would require the processes to approximate quasi-static loci—pathways on the hyper-surface, representing the fundamental equation that describes the system, in configuration space (Callen 1960, 59).

  10.

    This implies that a system will undergo changes, and thus spend time out of equilibrium, which classical thermodynamics cannot handle with its own resources. In particular, in classical thermodynamics all the equations we see are state equations. For example, one familiar state equation is the following: PV = nRT, the ideal gas law. In order to specify the state variables that characterize the system in question, it’s presupposed that the system is at equilibrium. We simply do not have the vocabulary in classical thermodynamics to talk about a system that is out of equilibrium, though other extensions—such as disequilibrium thermodynamics—attempt to do so.

  11.
    That is, there is a robust formal analogy—though not necessarily a substantive one—between the two fields in terms of how they employ equilibrium analysis.

    There is really nothing more pathetic than to have an economist or a retired engineer try to force analogies between the concepts of physics and the concepts of economics…However, if you look upon the monopolistic firm hiring ninety-nine inputs as an example of a maximum system, you can connect up its structural relations with those that prevail for an entropy-maximizing thermodynamic system. Pressure and volume, and for that matter absolute temperature and entropy, have to each other the same conjugate or dualistic relation that the wage rate has to labor… (Samuelson 1970).

  12.

    See Jhun (2018) for a lengthier defense of this claim. One could understand this interpretation of equilibrium idealization as providing a methodological backdrop to Cartwright’s (1989) account of causal capacities and the nomological machines (1999, 2007) that isolate them.

  13.

    Whether mechanistic perspectives escape this is hard to say, and depends on the particular view in question. Certainly, views that do not distinguish between mechanistic, organized behavior and merely aggregative behavior will fail to do so. See Machamer et al (2000), Bechtel and Abrahamson (2005), Craver (2007) and Kaplan and Craver (2011). The danger, even with mechanistic perspectives, is failing to take seriously inter-level relations.

  14.

    See Butterfield and Bouatta (2011) for this view, and Batterman (2013) for an argument against. For discussion of multi-scale systems and modeling, see also Bokulich and Oreskes (2017), Bursten (2016), Batterman and Rice (2014), Green (2013), and Winsberg (2006).

  15.

    We can think of the “dominant behavior” of a scale, says Wilson, as that manifested by “central physical processes normally witnessed at [a system’s] characteristic scale length” (19).

  16.

    For the aggregate demand function to depend only on societal income and prices (in addition to other usual restrictions, e.g. that demand curves all be independent), the preferences of the individuals need to be identical and quasi-homothetic. Then there is the further problem of whether equilibria can be unique. In a two-agent, two-commodity exchange economy, if agents view commodities either as perfect substitutes or as perfect complements, there is no unique equilibrium; rather, there is a continuum of equilibria.

  17.

    The only things that do seem to carry over are homogeneity of degree zero, continuity, obedience to Walras’ Law (the value of aggregate excess demand is zero), and boundary behavior as prices approach zero.
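    The surviving properties can be made concrete in a small sketch. The two-good exchange economy below, with Cobb-Douglas agents, is entirely hypothetical (the endowments and preference parameters are invented for illustration), but it lets one verify directly that aggregate excess demand is homogeneous of degree zero in prices and obeys Walras’ Law.

```python
# A hypothetical two-agent, two-good exchange economy with Cobb-Douglas
# preferences, used only to illustrate two properties of aggregate
# excess demand: homogeneity of degree zero and Walras' Law (the value
# of aggregate excess demand is zero at any price vector).

def excess_demand(p, agents):
    """Aggregate excess demand z(p) at prices p = (p1, p2).

    Each agent is (alpha, (w1, w2)): the Cobb-Douglas expenditure share
    on good 1, and an endowment vector. With wealth m = p1*w1 + p2*w2,
    demands are x1 = alpha*m/p1 and x2 = (1 - alpha)*m/p2."""
    p1, p2 = p
    z1 = z2 = 0.0
    for alpha, (w1, w2) in agents:
        m = p1 * w1 + p2 * w2
        z1 += alpha * m / p1 - w1
        z2 += (1 - alpha) * m / p2 - w2
    return z1, z2

agents = [(0.3, (1.0, 2.0)), (0.7, (3.0, 1.0))]  # invented parameters

p = (1.0, 2.0)
z1, z2 = excess_demand(p, agents)

# Walras' Law: p . z(p) = 0, since each agent spends exactly its wealth.
assert abs(p[0] * z1 + p[1] * z2) < 1e-9

# Homogeneity of degree zero: scaling all prices leaves z(p) unchanged.
s1, s2 = excess_demand((3.0, 6.0), agents)
assert abs(z1 - s1) < 1e-9 and abs(z2 - s2) < 1e-9
```

    Note that both properties hold identically in p; they are structural features of budget-constrained exchange, not facts about any particular equilibrium.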

  18.

    Efforts in this direction do not, thus far, seem especially successful or satisfying. Grandmont (1992) and Hildenbrand (1994) are two well-known attempts that recognize the role of meso-scale details, in particular population heterogeneity, in achieving stable equilibrium.

  19.

    One might worry that physics and economics are disanalogous because the latter includes self-conscious agents. For instance, as per the Lucas Critique, the economy is receptive to the policies it sets for itself and will adjust accordingly. While we agree that this is, indeed, a problem for economists, it is not troublesome for our methodological point. Suppose we agree that, even in principle, most physical systems of interest could not be explained bottom–up. Properties such as elasticity require structural features that simply cannot be seen at the micro-scale, despite depending on there being one. But so long as goings-on at the micro-scale can be thought of as responsive to (even if not adaptive in the same kind of way to) goings-on at other scales, then the moral still stands: equilibrium reasoning helps us identify relevant scales for the purposes of modeling, because it tells us something about what kinds of behaviors we do (or don’t) expect to see, and multi-scale modeling is a route to integrating different scales (which may exhibit different dominant behaviors). In fact, this may very well be a promising way of addressing the Lucas Critique. We thank an anonymous referee for pressing on this point.

  20.

    As a disclaimer: not all the physical phenomena that we usually associate with lagging behavior, off-equilibrium behavior, and the like need be due to frictional forces strictly speaking. For example, one might expect lags to occur when two bodies exert force against one another without sliding (think of the normal force, which for a body at rest derives from the object’s mass multiplied by the gravitational acceleration). These may not be frictional forces, but different kinds of interactions. Thus we ought to be cautious in drawing any substantive analogy between “frictions” in economics and friction in physics.

  21.

    In engineering, this is what is called “metastable” rather than merely stable.

  22.

    Note that there is no such thing as the meso-scale, so there may be many structures spread over several scales.



  1. Batterman, R. (2013). The tyranny of scales. In R. Batterman (Ed.), The Oxford handbook of philosophy of physics. Oxford: Oxford University Press.
  2. Batterman, R. W., & Rice, C. C. (2014). Minimal model explanations. Philosophy of Science, 81(3), 349–376.
  3. Bechtel, W., & Abrahamsen, A. (2005). Explanation: A mechanistic alternative. Studies in History and Philosophy of the Biological and Biomedical Sciences, 36, 421–441.
  4. Blumberg, D., & Davidson, A. (2008). The giant pool of money. Radio broadcast. This American Life, Episode 355.
  5. Bokulich, A., & Oreskes, N. (2017). Models in geosciences. In Springer handbook of model-based science (pp. 891–911). Cham: Springer.
  6. Bursten, J. R. (2016). Smaller than a breadbox: Scale and natural kinds. The British Journal for the Philosophy of Science, 69(1), 1–23.
  7. Butterfield, J., & Bouatta, N. (2011). Emergence and reduction combined in phase transitions. Manuscript.
  8. Callen, H. B. (1960). Thermodynamics. New York: Wiley.
  9. Cartwright, N. (1989). Nature’s capacities and their measurement. Cambridge: Cambridge University Press.
  10. Craver, C. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford: Clarendon Press.
  11. DiMartino, D., & Duca, J. (2007). The rise and fall of subprime mortgages. Federal Reserve Bank of Dallas Economic Letter, 2(11), 1–8.
  12. Dupree, M. (2014). Duhem’s balancing act: Quasi-static reasoning in physical theory. PhD dissertation, University of Pittsburgh.
  13. Earman, J., & Roberts, J. (1999). Ceteris paribus, there is no problem of provisos. Synthese, 118, 439–478.
  14. Earman, J., Roberts, J., & Smith, S. (2002). Ceteris paribus lost. Erkenntnis, 57(3), 281–301.
  15. Fodor, J. (1974). Special sciences, or the disunity of science as a working hypothesis. Synthese, 28, 97–115.
  16. Fodor, J. (1991). You can fool some of the people all of the time, everything else being equal; hedged laws and psychological explanations. Mind, 100, 19–34.
  17. Freedman, D. (2011). A formula for economic calamity. Scientific American, 305(5), 76–79.
  18. Garcia, N. (2011). DSGE macroeconomic models: A critique. Economie Appliquée, 64(1), 149–171.
  19. Grandmont, J.-M. (1992). Transformations of the commodity space, behavioral heterogeneity, and the aggregation problem. Journal of Economic Theory, 57, 1–35.
  20. Green, S. (2013). When one model is not enough: Combining epistemic tools in systems biology. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 44(2), 170–180.
  21. Hamilton, R. B. (2009). The road to America’s economic meltdown. Bloomington: Authorhouse.
  22. Hildenbrand, W. (1994). Market demand: Theory and empirical evidence. Princeton: Princeton University Press.
  23. Jhun, J. S. (2018). What’s the point of ceteris paribus? Or, how to understand supply and demand curves. Philosophy of Science, 85(2), 271–292.
  24. Kaplan, D. M., & Craver, C. F. (2011). The explanatory force of dynamical models. Philosophy of Science, 78(4), 601–627.
  25. Keen, S. (2015). The Fed has not learnt from the crisis. Forbes. Available online. Accessed 1 Oct 2015.
  26. Machamer, P. K., Darden, L., & Craver, C. F. [MDC] (2000). Thinking about mechanisms. Philosophy of Science, 67, 1–25.
  27. Pietroski, P., & Rey, G. (1995). When other things aren’t equal: Saving ceteris paribus laws from vacuity. British Journal for the Philosophy of Science, 46, 81–110.
  28. Samuelson, P. (1970). Maximum principles in analytical economics. Nobel Prize Lecture. Available online. Accessed 1 Oct 2015.
  29. Sidel, R., Enrich, D., & Fitzpatrick, D. (2008). WaMu is seized, sold off to J.P. Morgan, in largest failure in U.S. banking history. Wall Street Journal. Accessed 1 Dec 2015.
  30. Schurz, G. (2001). Pietroski and Rey on ceteris paribus laws. British Journal for the Philosophy of Science, 52, 359–370.
  31. Wilson, M. (2018). Physics avoidance: Essays in conceptual strategy. Oxford: Oxford University Press.
  32. Winsberg, E. (2006). Handshaking your way to the top: Simulation at the nanoscale. Philosophy of Science, 73(5), 582–594.
  33. Woodward, J. (2003). Making things happen. Oxford: Oxford University Press.

Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. Department of Philosophy, Durand Art Institute #100, Lake Forest College, Lake Forest, USA
