1 Introduction

Even the most rudimentary training in any introductory course in economics starts with demand curves going down and supply curves going up. They are so ‘natural’ that they sound even more obvious than the Euclidean postulates in mathematics. But are they? What do they actually mean?

Consider demand curves. Are they hypothetical ‘psychological constructs’ about individual preferences? Propositions on aggregation over them? Reduced forms of actual dynamic propositions on the time profiles of prices and demanded quantities? Similar considerations apply to ‘supply curves’.

Alan Kirman is among the very few who have asked this type of subversive question (another has been Werner Hildenbrand).

In shorthand, my argument, fully in line with Alan Kirman and based on Dosi (2023), especially the chapter by Kirman and Dosi therein, is that the forest of demand and supply curves is basically there to populate the analysis with doubly axiomatic notions of equilibria, both ‘in the head’ of individual agents and in the environments in which they operate.

They are one of the three major methodological stumbling blocks in the way of progress in economics—the other related ones being ‘utility functions’ and ‘production functions’. The discussion which follows entails their abandonment, together with ‘demand curves’ and ‘supply curves’. This is a ‘vast program’, as De Gaulle once said in another context, but getting rid of them is a major step toward making economics more similar to all empirically based ‘sciences’—as Herb Simon (1997) advocated—and more distant from theology.

In Sect. 2, I shall discuss the status of demand curves in partial (dis)equilibrium settings. Section 3 will address supply curves. Section 4 will discuss some more macroeconomic implications. Finally, Sect. 5 will recap some proposed ways forward.

2 Demand and supply curves: what are they really?

Let us start with demand.

Here, one must carefully distinguish between the question of what individual agents (or individual firms) actually do, on the one hand, and the question of what the shapes of demand schedules are, at whatever level of aggregation, and what determines them, on the other.

In order to illustrate this point, let me just recall the very basics which most undergraduates learn in Introductory Microeconomics.

When dealing with demand, one starts with the intuition that when the price of any one commodity is higher, demand is lower and, conversely, when the price is lower, demand is higher. Next, one easily draws on the blackboard a standard demand curve relating prices and quantities, with its familiar downward slope, and that remains one of the most profound imprints of the discipline thereafter.

But, on second thoughts, what does that demand curve mean (even in a partial equilibrium setting)?

After all, at any point in time, one only observes one actual combination between a certain price and a certain quantity of a good or a bundle of them. Keeping to the static framework, the curve must necessarily imply some sort of counterfactual experiment, namely what would have happened if prices were higher or lower (holding everything else constant—including initial endowment and preferences).

In turn, that counterfactual exercise either applies at the level of the individual consumer or, alternatively, of collections of them. In the former case, the hypothetical experiment basically concerns the degrees of coherence in microeconomic preference structures. This belongs to the first domain of analysis mentioned above. So, for example, we know—from Samuelson (1938) all the way to Varian (1982)—that ‘revealed preferences,’ under different consistency restrictions, may be, so to speak, ‘mapped back’ to an underlying, and unobservable, utility function of a maximizing consumer (cf. also Sippel, 1997).

Consider first the ‘individual demand’.

The story for the beginner is the one about soup when you are very hungry. For the first spoonful you will pay a lot, for the second somewhat less, etc. However, if one thinks twice about the metaphor, one realizes how childish and misleading it is. Put it in terms of bowls of soup. You will be ready to pay a certain amount for the first bowl (reasonably, bowls, not spoons thereof!), but if you have some money left you are likely to go next for some bread, then for, say, some butter, then perhaps some meat, etc.

Notice that the foregoing proposition is different from saying, e.g., “if I had more money I would go on vacation, but with what I earn I cannot”, and also different from the proposition “if I had more money I would go on vacation twice instead of once”. Both propositions have to do with the budget constraint and not with any “utility function”, whatever that means.

A quite different question is whether, by aggregation, the latter propositions imply some “well-behaved” demand functions. We shall discuss it below.

Let us start, however, with individual consumption processes.

Here, my general proposition is that purchase decisions tend to be lexicographic, that is, hierarchically ordered, and shaped by budget constraints.

In Dosi (2023) we discuss at much greater length the evidence on consumption decisions. In brief, the following properties emerge.

1. The coherence criteria prescribed by decision-theoretic models are systematically violated by empirical agents (i.e., by most of us human beings) even under utterly simple experimental circumstances.

2. Consumption acts (as well as other economic behaviours) are nested into cognitive categories and ‘mental models’ of the actors.

3. The relationships between ‘mental models’, preferences and consumption behaviours are to some extent implicit and, possibly, also partly inconsistent with each other.

4. Habits, routines and explicit deliberative processes coexist to varying degrees as determinants of most consumption acts.

5. Consumption habits and routines, and, dynamically, their formation and acquisition, are embedded in the processes of socialization and identity-building.

6. Habits and routines-formation hold varying and precarious balances with search and innovation.

7. (Imperfect) social adaptation, learning—on both preferences and consumption ‘technologies’—and search, all entail path-dependencies (at the very least at individual level).

8. Micro-consumption patterns are likely to be characterized by: (a) complementarities among multiple goods within lifestyle-shaped consumption-systems; and (b) (roughly) lexicographic patterns of consumers’ selection over hedonic attributes and goods.

Under these conditions, pushing it, for simplicity, to the extreme, assume that a generic individual orders its wants over goods as \(1\succ 2\succ 3\dots\) irrespective of absolute and, more so, relative prices. Roughly speaking, anyone will first try to eat, then dress, then find a shelter, etc.

Thus, its consumption basket will be determined only by the budget constraint, that is, by its disposable income \(y\), relative to the price of each good and its position in the lexicographic ordering. Further assume, for the sake of illustration, that each good is notionally demanded in one unit only.

Thus, for good 1, the ‘demand schedule’ will look like as in Fig. 1.

Fig. 1 Demand schedule

For good 2, the graph will have \({q}_{2}=1\) for \({p}_{2}\le y-{p}_{1}\), and so on for lower-ranking goods.

Note the striking difference between the shape of this demand schedule and the standard textbook curve. Here there is no downward sloping of any kind. And all ‘the action’ is basically due to the budget constraint.
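To fix ideas, the following minimal sketch implements the purchase rule just described; the function name and the unit-demand simplification are illustrative assumptions, following the text.

```python
# Lexicographic demand: goods are ranked 1 > 2 > 3, ...; the consumer buys one
# unit of each affordable good in rank order, constrained only by the budget.

def lexicographic_basket(prices, income):
    """prices: unit prices ordered by the want hierarchy; income: disposable y."""
    basket, residual = [], income
    for p in prices:
        if p <= residual:          # affordable: demand exactly one unit
            basket.append(1)
            residual -= p
        else:                      # not affordable at this rank: no unit purchased
            basket.append(0)
    return basket

# As in Fig. 1: q1 = 1 whenever p1 <= y; for good 2, q2 = 1 when p2 <= y - p1.
print(lexicographic_basket([3.0, 2.0, 4.0], income=6.0))  # -> [1, 1, 0]
```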

This is not to say that the believer in max Utility (…, …, …) may not rationalize whatever observation on the grounds of a theory which is indeed so sloppy that it is not even in principle falsifiable: contrary to a common belief, if the analyst has the freedom to choose both the functional form and the arguments of the function, any ensemble of observations may be rationalized on the grounds of such a theory.

However, even with reference to the revealed individual demand for a single commodity conditional on different prices, well-behaved demand curves hardly appear, as shown by the seminal works of Alan Kirman (see Kirman, 2010; Haerdle & Kirman, 1995, and the chapter by Kirman and Dosi in Dosi, 2023).

One of Kirman’s path-breaking market studies addresses the Marseille fish market. There, one is dealing with a collection of heterogeneous agents who interact regularly, and from this interaction certain aggregate behaviours emerge. However, trying to model the behaviour of the individuals in isolation will give a very poor picture of the overall evolution of the market. For example, Haerdle and Kirman (1995) showed that if one plots the quantities purchased by an individual against the price at which they were transacted, the result seems almost random. This is illustrated in Fig. 2, where the purchases of one individual of one species of fish are shown.

Fig. 2 Transactions of one buyer for one species of fish. Source: Haerdle and Kirman (1995), republished in Dosi (2023, p. 454)

Thus, there is little to suggest a monotone declining relation between the price paid and the quantity purchased. This is, of course, but one illustration of the many thousands of such relationships that they analysed, but there was no consistent evidence of the sort of behaviour that theory might lead us to expect.

However, if we now examine the aggregate data for a single fish species, we see something like an approximate monotonic relation emerging. This is so even though the estimation was non-parametric, which is much more exacting than fitting a pre-determined functional form. This can be seen in Fig. 3.

Fig. 3 Aggregate price-quantity relation for one fish species (the small graph displays the data points used to derive the aggregate price-quantity relation). Source: Haerdle and Kirman (1995), republished in Dosi (2023, p. 455)

One can go one step further and aggregate over all types of fish: the result is even more striking. This can be seen in Fig. 4.

Fig. 4 Price-quantity relation aggregated over all fish species (the small graph displays the data points used to derive the aggregate price-quantity relation). Source: Haerdle and Kirman (1995), republished in Dosi (2023, p. 455)

What we see clearly is that the aggregate relationship is not the sum of many similar individual behaviors but has characteristics resulting from the aggregation itself.

The fundamental question here regards what determines prices as we see them, and the processes leading to them. This Kirmanian question, which I totally share, is completely different from the rationalization of the observed price/quantity relations as equilibrium ones.

In turn, the answer, in which demand curves have no say, basically relates to:

1. the architectures and mechanics of interaction among the market participants;

2. the identities of the actors;

3. their systems of beliefs and their evolution;

4. the objects of exchange and their conditions of production, if any (a little more below);

5. the broader institutions in which markets themselves are embedded.

Phenomena like incomplete and asymmetric information are ubiquitous. And this, indeed, has been an extremely fruitful field of investigation by scholars like Joe Stiglitz and George Akerlof: see, among a vast number of contributions, e.g. Akerlof (1984) and Stiglitz (2000). However, the sole acknowledgement of these phenomena is largely insufficient to characterise how markets work, which seems to bear only relatively loose links with the information agents can access. In this respect, the reader is invited to compare an information-scarce market such as the bazaar economy (Geertz, 1978) with the information-rich security market (Beunza & Stark, 2004).

Not surprisingly, they are quite different in many respects. However, what they have in common is that the arbitrage opportunities crucially depend on the dynamics and distribution of beliefs—not of information as such—and, especially in the case of the bazaar, on the very identities of the agents.

This goes well beyond the so-called ‘beauty contest’ problem made famous by Keynes (‘whom do you think the other people believe to be the most beautiful woman?’, as different from the standard exercise of ‘estimating the fundamentals’, in economics equivalent to ‘who do you believe the most beautiful woman is’). The ‘beauty contest’ is already a major advancement, concerning especially financial markets, but it is not enough.

Further, the challenge is about constructing reciprocal identities—in the case of the bazaar—and innovating in cognitive frames—in the case of securities.

In all cases it is the institutional set-up of the market itself which shapes how it works and its dynamics. In that, the tools of the trade, including the ‘theories’ agents use, profoundly impact market outcomes.

They might be the rituals of Geertz’s ‘peddlers and princes’ (Geertz, 1963) or Black and Scholes models of ‘optimal’ portfolio management. As the ‘sociology of finance’ emphasises, at least in this domain models do not describe, but construct the very reality of markets (MacKenzie et al., 2007; MacKenzie, 2008; Cetina & Bruegger, 2002).

While theorists struggle with the details of solving a formal model, those who participate in, regulate, or study actual market mechanisms have a very different view of the problem. For example, in his well-known study of financial markets, Aboulafia argues that markets are essentially social institutions. Indeed, he says,

‘Markets are socially constructed institutions in which the behavior of traders is suspended in a web of customs, norms, and structures of control...Traders negotiate the perpetual tension between short-term self-interest and long-term self-restraint that marks their respective communities.’ (Aboulafia, 1997).

Indeed, in order to account for the determinants of prices, one must start with reference to two fundamental institutional and technological conditions under which prices are set. They regard, first, the nature of the networks of interaction among sellers and buyers; and second, the conditions under which the object of pricing is produced, if at all.

1. Type of network structure (we discuss this at much greater length in the chapter by Dosi and Kirman, in Dosi, 2023), concerning:

   (a) seller-buyer relationships; and

   (b) seller-seller relationships (basically, the types of competitive interaction, if any);

2. Degrees of reproducibility at the time scale at which purchases occur:

   (a) no reproducibility at any time scale (e.g., Picasso paintings; a cabin, alone, on the Galapagos islands; that is, more generally, ‘positional goods’ à la Hirsch, 1976);

   (b) reproducible, under roughly constant returns, at a time scale slower than the one at which purchases occur (from fish to vegetables to corn to oil to copper… to used cars), often with lags in supply adjustments;

   (c) reproducible under non-decreasing returns at a time scale faster than purchases (from cars to TVs…);

   (d) ‘immaterial goods’ with zero or almost zero marginal costs, infinitely reproducible (or infinitely expansible, cf. Quah, 2003).

Basically, modern capitalism has developed around markets of types 2(b) and 2(c)—modern manufacturing mostly under 2(c)—and possibly contemporary capitalism is heading toward 2(d).

Above, we considered the case of the fish market, falling under 2(b). There, of course, there is neither a ‘demand curve’ nor a ‘supply curve’—as supply on the time scale of market transactions is fixed. Under the same taxon, to repeat, fall all markets where producers are price-takers and the commodity is reproducible under conditions of non-decreasing returns, but on a time scale different from that at which prices are set. Here prices may sometimes affect quantities, but with a lag.

This is the case of many agricultural products and breeds, typically following recurrent ‘cycles’. Figures 5 and 6, from the classic Ezekiel (1938), and Fig. 7, from the more recent Rosen et al. (1994) illustrate the point.

Fig. 5 Hog-corn price ratios and hog marketings. Source: Ezekiel (1938, p. 271)

Fig. 6 Purchasing power per head of milk cows and cattle other than milk cows, 1875 to date. Source: Ezekiel (1938, p. 270)

Fig. 7 Stocks of beef cattle, 1875–1990. Source: Rosen et al. (1994, p. 469)

This quite widespread phenomenon has led to so-called cobweb models.

The basic story is simple. At a certain time \(\tau\), the available quantity \({q}_{\tau }\) is fixed; given whatever demand schedule (if any at all!), this determines the price \({p}_{\tau }\). In turn, the latter influences the quantities that will be offered in the following period, \({q}_{\tau +1}\), and so on. Complicating it a bit, in line with Rosen et al. (1994), suppose that at \(\tau\) we have a stock of beef cattle which can be either put to breeding or slaughtered to be consumed as meat. Given the latter quantity, meat prices are determined. This model differs from the previous one in that producers must form expectations on future prices, which will determine the part of the stock that will be ‘invested’ in breeding.

The simplest model has been typically rationalized in terms of movements across invariant supply and demand curves of the type depicted in Fig. 8A, B. This is in fact a pseudo-dynamics postulated across two unobservable entities.

Fig. 8 The cobweb pseudo-dynamics

To repeat the point above, paraphrasing Joe Stiglitz on the ‘invisible hand’, the good reason why these curves are unobservable is because they do not exist!

In fact, such a rationalization unfortunately began, with all caveats forgotten in subsequent treatments, with Kaldor (1934) and Ezekiel (1938). It is a misleading rationalization, which tries to squeeze a genuine dynamical system into a study of, and comparison between, equilibria. With that would come, later on, the baroque econometric industry concerning identification techniques, instrumental variables, etc., so common nowadays in this and many other domains.

Indeed, the issue is much simpler, as one has to track a relatively simple dynamical system of the form:

$$\begin{aligned} q\left(t\right) &= g\left[q\left(t-1\right),\, p\left(t-1\right), \ldots, p\left(t-\Omega\right),\, \varepsilon_{t}\right] \\ p\left(t\right) &= f\left[p\left(t-1\right),\, q\left(t\right), \ldots, q\left(t-\tau\right),\, \varepsilon_{t}\right] \end{aligned}$$
(1)

where \(p\) are the prices, \(q\) the quantities, \(\tau\) and \(\Omega\) are time lags that define the order of the dynamical system, and \({\varepsilon }_{(\bullet )}\) are exogenous shocks.

Here there is no need to invoke demand and supply curves, and even less so restrictions on the modes of expectation formation. In fact, Rosen et al. (1994) show that persistent cobweb-type fluctuations emerge even in the presence of rational expectations.

Contrary to the conventional wisdom, the whole ‘action’ is not driven by the nature of the expectations, whether ‘adaptive’ or ‘forward looking’ (whatever that means in human set-ups where expectations about the future must necessarily derive from past experiences, ruling out direct communication from God).

Rather, the typical ‘cobweb’ persistence of fluctuations is basically the outcome of the very lag structure, physical or biological, in the adjustment of supply to whatever endogenous or exogenous shock. It takes one harvest period to get from seeds to edible corn. And it takes roughly three years to get from the insemination of a calf to the birth and maturation of a slaughterable adult.
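To see that the lag structure alone does the job, here is a minimal simulation in the spirit of Eqs. (1); the linear functional forms, parameter values and noise are illustrative assumptions, not estimates of anything.

```python
import random

# Minimal cobweb-type dynamics in the spirit of Eqs. (1): supply reacts to last
# period's price with a one-period (biological or physical) lag, and the price
# clears whatever quantity arrives on the market.

def simulate_cobweb(T=40, a=10.0, b=0.9, c=12.0, d=1.0, noise=0.3, seed=1):
    random.seed(seed)
    p = c + 2.0                                          # arbitrary initial price
    path = []
    for _ in range(T):
        q = a + b * (p - c) + random.gauss(0.0, noise)   # q(t) = g[p(t-1), eps]
        p = c - d * (q - a) + random.gauss(0.0, noise)   # p(t) = f[q(t), eps]
        path.append((q, p))
    return path

# Oscillations persist because of the lag itself: no expectations story needed.
for q, p in simulate_cobweb()[:6]:
    print(f"q = {q:6.2f}   p = {p:6.2f}")
```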

This intuition was already clear to Kaldor (1934) and Ezekiel (1938) but got subsequently blurred by the obsession with identification of the ‘equilibrium’ intersection of the mythical supply and demand curves.

In fact, Rosen et al. (1994) show how, in the beef case, estimated ARMA dynamics track remarkably well the empirical dynamics of total stocks, breeding stocks and beef consumption in the USA, under assumptions of constant returns to scale and elastic supplies to the industry (p. 476).

It is our conjecture that simulations with a multitude of agents adjusting their breeding stocks and meat supplies according to simple heuristics (more on the latter in Dosi, 2023, ch. 4) would reveal plausible and robust generating processes for the observed time series.

From a statistical point of view, the task is basically the estimation of Eqs. (1). Conversely, starting from the theoretical construct of supply and demand just makes life unnecessarily complicated for the analyst, as it basically means starting from the notion that there is a continuum of conjectural equilibrium price/quantity combinations, so to speak, ‘in the head of the supplier’ and ‘in the head of the customer’ (even leaving aside all problems of aggregation, on which more below). Indeed, even the cobweb itself is not particularly in tune with a ‘purist’ analysis in terms of supply and demand curves: rather, each observation ought to be interpreted in principle as the equilibrium combination of the optimal price/quantity combinations in ‘the heads’ of both supplier and customer. Whence come also all the problems of ‘identification’: if one observes over time a change in the price/quantity combinations, is it due to movements along the curves, or to movements of the curves?

Indeed, one may well attempt skilful virtuoso exercises in this perspective on market data on fish markets very similar to those analysed by Kirman, and ultimately end up with the Nobel Prize: see Angrist et al. (2000). Conversely, with a much lower probability of getting the Prize, but also a lower distance from reality, this is a problem that one simply does not face if one just econometrically estimates dynamical systems such as (1), or, for that matter, any dynamical system in which there is no panglossian presumption that each observation is an equilibrium one, and is also an ‘equilibrium’ of some kind ‘in the heads’ of the actors involved, whatever that means.

This of course has implications also for the economists’ obsessive search for ‘causality’. Even simple interactions between supply and demand, when they occur, are dynamically coupled processes, implying an intrinsic ‘bi-directional causation’ which is impossible to get rid of. Go and ask biologists whether it is the gazelles which ‘cause’ the lions, or the lions which ‘cause’ the gazelles. They will simply reply that you are drunk! Of course, one may fruitfully try to parametrize such predator–prey dynamics (e.g., with some form of Lotka–Volterra systems), but no biologist in their right mind would rationalize the issue in terms of ‘supply and demand curves’ of lions and gazelles and their “causal structure”.
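For illustration, such a coupled parametrization might look like the following discrete-time Lotka–Volterra sketch (all parameter values and initial conditions are hypothetical); the point is that neither series ‘causes’ the other, since both are jointly determined.

```python
# Discrete-time (Euler) Lotka-Volterra sketch: gazelles (prey) and lions
# (predators) as a coupled dynamical system with intrinsically bi-directional
# 'causation'.

def lotka_volterra(steps=5000, dt=0.01, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4):
    prey, pred = 10.0, 5.0                                  # arbitrary initial populations
    for _ in range(steps):
        d_prey = (alpha * prey - beta * prey * pred) * dt   # prey growth minus predation
        d_pred = (delta * prey * pred - gamma * pred) * dt  # predation gains minus deaths
        prey, pred = prey + d_prey, pred + d_pred
    return prey, pred

gazelles, lions = lotka_volterra()
print(f"gazelles = {gazelles:.2f}, lions = {lions:.2f}")
```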

Consider now commodities reproducible under non-decreasing (often increasing) returns, i.e., most industrial goods, whose production occurs on time scales similar to or shorter than those on which demand is expressed.

Here, in most cases, producers are price-makers, and the typical pricing heuristic on the side of the sellers is of the kind

$${P}_{i}\left(t\right)=UV{C}_{i}\left(t\right)\cdot \left(1+{\mu }_{i}\left(t\right)\right) $$
(2)

where the unit price of firm \(i\) is a mark-up \({\mu }_{i}\left(t\right)\) over ‘normal’ unit variable costs UVC, often calculated as made of unit intermediate inputs \(INT\left(t\right)\) and unit labour costs, i.e., wages \(w\left(t\right)\) divided by the labour productivity \(\pi\) of that firm \(i\) at \(t\). That is,

$${P}_{i}\left(t\right)=\left({INT}_{i}\left(t\right)+\frac{w\left(t\right)}{{\pi }_{i}\left(t\right)}\right)\left(1+{\mu }_{i}\left(t\right)\right) $$
(3)

Overwhelming empirical evidence, old and new, supports the widespread use of such heuristics: an (incomplete) list of pricing heuristics is in Dosi (2023).
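As a direct transcription of heuristics (2) and (3), with all numbers purely illustrative, one might write:

```python
# Cost-plus pricing as in Eqs. (2)-(3): a mark-up over 'normal' unit variable
# costs, made of unit intermediate inputs plus unit labour costs.

def markup_price(unit_intermediate, wage, productivity, markup):
    """P_i(t) = (INT_i + w / pi_i) * (1 + mu_i)."""
    unit_variable_cost = unit_intermediate + wage / productivity
    return unit_variable_cost * (1.0 + markup)

# e.g., INT = 4.0, w = 30.0, pi = 10 (units per worker), mu = 0.25
print(markup_price(4.0, 30.0, 10.0, 0.25))  # -> 8.75
```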

Note, in this respect, that even heuristics which prima facie might not appear cost-plus may indeed be such, with the levels of the mark-ups themselves influenced by market-penetration strategies. This is the case of the pricing of new products, whereby—as shown in Dosi (1984a) in the case of semiconductors—the costs over which the mark-up is applied may well take into account learning-curve dynamics.

Of course, this is only the skeleton of heuristics, whose actual parametrizations are influenced by the technological and competitive conditions of the industry and of firm \(i\) within it.

The levels of the mark-up \({\mu }_{i}\left(t\right)\) are likely to depend, among other factors, upon

(i) the capital intensity of the industry;

(ii) the barriers to entry into the industry itself;

(iii) the relative competitiveness of firm \(i\) vis-à-vis the other sellers and in particular the leaders of the industry.

An interesting case of (iii)—illustrated empirically with reference to a supermarket—is the path-breaking study by Cyert and March (1963), where the firm has two explicit and possibly conflicting objectives, namely, first, profit margins, and second, sales volumes. To them correspond two, loosely connected, heuristics:

Mark-up pricing—in their case, ‘divide unit costs by 0.6 (= one minus the mark-up) and move the result to the nearest $0.95’; while, if sales fell in the near past, move it down, as a ‘lower-level heuristic’;

Mark-down pricing—roughly, take the outcome of a mark-up heuristic and lower it by a percentage depending on the success of mark-down heuristics on the same or similar products.

The predictive success of this simple heuristic model in terms of the actual behaviour of the supermarket concerned is striking, and more so given the rudimentary computer power of the time.
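A sketch of how the two heuristics could be coded is below: the 0.6 divisor and the ‘nearest $0.95’ ending come from the quote above, while the 20% mark-down rate is a purely hypothetical stand-in for their adaptive feedback rule.

```python
# Cyert & March (1963) supermarket pricing, sketched from the description above.

def to_nearest_95(raw_price):
    """Move a raw price to the nearest price ending in $0.95."""
    return round(raw_price - 0.95) + 0.95

def markup_price(unit_cost):
    """Divide unit costs by 0.6 (= one minus the mark-up), apply the ending rule."""
    return to_nearest_95(unit_cost / 0.6)

def markdown_price(base_price, markdown_rate=0.20):
    """Lower a mark-up price by a (hypothetical) percentage, keeping the ending rule."""
    return to_nearest_95(base_price * (1.0 - markdown_rate))

print(markup_price(3.00))     # 3.00 / 0.6 = 5.00 -> 4.95
print(markdown_price(4.95))   # 4.95 * 0.8 = 3.96 -> 3.95
```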

As for factor (ii), the conjecture shared by an older breed of industrial economics (Bain, 1959) and by the ‘Kaleckian’ approach to income distribution (Kalecki, 1971) is that, at least on average, mark-ups grow with industry concentration. It is a conjecture still awaiting robust empirical corroboration (in fact, in Dosi, 2023, we discuss the unfortunate lack of support for the point, but I am well open to evidence to the contrary).

Moreover, as Sylos Labini (1962) suggested, it may well be that the heuristic above is just a prerogative of the market and technological leader(s) of the industry, while laggards approximately anchor their prices on the latter and calculate their mark-ups residually.

Indeed, Sylos Labini, almost half a century in advance of all Statistical Offices, but well in tune with the few who ever visited industrial plants and firms, was well aware that firms were, and are, widely heterogeneous in their levels of labour productivity (much more in Dosi, 2023), and he was sceptical of any ‘Total Factor Productivity’ measure.

Suppose that the distribution is that depicted in Fig. 9.

Fig. 9 Relationship among production efficiencies, price and market shares

Firm 1 is the technological (i.e., productivity) leader and \(n\) is the marginal firm. Suppose further that the industry produces a homogeneous commodity and that the price \(p\) (the horizontal line in the Figure) is set by the leader (Firm 1) according to the mark-up rule in the equation above. Its total profits shall be the striped area between the p-line and the total variable costs of Firm 1. The latter can be obtained by multiplying the unit variable costs \(UV{C}_{1}\) of Firm 1 by the quantity of the good offered, \({x}_{1}\).

That is, in absence of intermediate goods:

$${R}_{1}=\left(p-UV{C}_{1}\right){x}_{1}=\left[UV{C}_{1}\left(1+{\mu }_{1}\right)-UV{C}_{1}\right]{x}_{1}={\mu }_{1}\cdot UV{C}_{1} \cdot {x}_{1} $$
(4)

In words, the equation above describes the product of the firm’s mark-up \({\mu }_{1}\), its unit variable costs \(UV{C}_{1}\) and the quantity of product offered \({x}_{1}\).
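For instance, with purely hypothetical numbers \(UV{C}_{1}=10\), \({\mu }_{1}=0.25\) and \({x}_{1}=1000\):

$$p=UV{C}_{1}\left(1+{\mu }_{1}\right)=12.5, \qquad {R}_{1}={\mu }_{1}\cdot UV{C}_{1}\cdot {x}_{1}=0.25\times 10\times 1000=2500$$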

In such a setup the followers are price-takers—from the leader—and their mark-up is calculated residually.

Granted that, what determines \({\mu }_{1}\)—i.e., the margin of the ‘leader’?

Sylos Labini proposes that it is a limit price, that is, the price able to keep potential entrants (\(n+1\), etc.) out of the market, as in Fig. 9.

In my view, this is a somewhat too rationalist view of pricing dynamics, implying that the leader knows not only the full distribution of the productivities of incumbents but also that of potential entrants.

My conjecture, largely awaiting empirical testing, is that due to largely path-dependent, adaptive reasons, the levels of the mark-ups of the leader(s), and possibly the averages of an industry, depend also on the width and the skewness of the distribution of labour productivities, as in Fig. 9, and their history.

In these circumstances, again, as we shall detail below, we do not have a ‘supply curve’, as, plausibly, supply, in normal circumstances (that is, except in cases of embargoes, wars, etc.), is perfectly elastic to demand. How this demand is distributed among the firms is a totally different matter, which has to do with the dynamics of the competitive process: again, I have to refer to Dosi (2023, chapter 9). Indeed, the just-mentioned ‘exceptions’—like in 2021–2023—are extremely interesting in that they also reveal the fragility of any ‘competitive process’, and the easiness of implicit collusion and of phenomena of ‘profit inflation’; but there is no “shift in the supply curve” or “movement along it”, rather a more mundane widespread exploitation by producers of temporary shortages. (I have talked above about a price: note that this is just a very rough approximation, as there are generally price distributions even in homogeneous commodities; see also below.)

In turn, demand is determined by the levels of income and its distribution and shaped by the social processes we have recalled above. Again, no ‘demand curve’ to speak of.

Note that this property would apply even if one did not have any evolution over time of either decision patterns or incomes. And even more so it applies when the latter evolve.

The basic point here is that in a first, but robust, approximation concerning at the very least manufacturing, demand levels determine quantities, and supply conditions determine costs and prices. No need to have curves going up and down, on either side.

3 What about supply conditions?

It should already be abundantly clear that standard upward-sloping ‘supply curves’ are not there. As already mentioned, in modern economies we typically find non-decreasing returns.

At the sectoral level, an impressive historical example of quantities skyrocketing and prices falling exponentially (admittedly from a sector characterized by extremely rapid technical change) is semiconductors (see Dosi, 1984, and also the long list of activities characterized by learning curves, and thus dynamic increasing returns, discussed in Dosi, 2023).

Thus, for sure, under conditions of increasing returns, if one imagines ‘supply curves’, they are bound to slope downward (except of course if some powerful idiots put an embargo on some goods, making microprocessors similar to cocaine!). But under roughly non-decreasing returns, from cars to shoes, we have never seen anything like supply curves going upward!

At the microeconomic level, a (non-upward-sloping) ‘conjectural’ schedule might indeed be somewhat more plausible: that is, ‘… if I could sell more I might afford to decrease my prices …’. But of course, this has nothing to do with standard theory, but rather with some reasonable pricing heuristics.

However, there is hardly any limit to the theorist’s imagination in the effort to reconcile decreasing returns at firm level with constant returns at industry level. This is indeed a must if one wants to be sure that purely competitive conditions (and thus an atomless size of firms) coexist with non-decreasing returns at industry level (incidentally, how far-fetched the idea is had already been highlighted in the unjustly neglected critique of Sraffa, 1926). In order to do that, it is enough to add another (invisible) envelope of curves, as in Fig. 10, which is basically what one finds in standard micro textbooks. There is an infinite number of zero-measure notional firms, of which a fraction actually enter at each time, in a number just sufficient to always guarantee industry-level equilibria. Indeed, this is an elegant formal exercise apt to keep together pure competition, industry-level constant returns and the convexity of the production possibility set, which, however, empirically belongs to the genre of pure science fiction.

Fig. 10 Constant return industry supply curve with decreasing return firms

Indeed, empirically, one observes distributions of firms characterized by wide, persistent heterogeneity in their productivities and costs, irrespective of the level of disaggregation, even when facing the same input prices (a detailed discussion is in Dosi, 2023, ch. 5).

More precisely:

1. In general, there is at any point in time one or very few best-practice techniques which dominate the others irrespective of relative prices.

2. Different firms are likely to be characterized by persistently diverse (better and worse) techniques.

3. Over time, the observed aggregate dynamics of technical coefficients in each particular activity is the joint outcome of the processes of imitation/diffusion of existing best-practice techniques, of the search for new ones, of the death of some others, and of the changing shares of the incumbent ones over the total (these processes of course might or might not correspond to a similar dynamics in terms of the firms which are, so to speak, the carriers of these techniques: see below).

4. Changes over time in the best-practice techniques themselves are likely to display rather regular paths (i.e., trajectories) in the space of input coefficients.

Let us further illustrate the previous points with a graphical example.

Suppose that, for the sake of simplicity, we are considering here the production of a homogeneous good under constant returns to scale with two inputs, x1 and x2 (think of them in the usual metaphor as labour and capital).

At each time, in general, in the space of unit inputs, micro-coefficients are distributed somewhat as depicted in Fig. 11. Suppose that at time \(t\) the coefficients are \({c}_{1},\dots ,{c}_{n}\), where \(1,\dots ,n\) are the various techniques labelled in order of decreasing efficiency at time \(t\). It is straightforward, for example, that technique \({c}_{1}\) is unequivocally superior to the other ones no matter what relative prices are: it can produce the same unit output with smaller quantities of both \({x}_{1}\) and \({x}_{2}\). The same applies to the comparison between \({c}_{1}\) and \({c}_{n}\), etc.

Fig. 11 Microheterogeneity and technological trajectories

Suppose now that at some subsequent time \(t^{\prime}\) we observe the changed distribution of micro-coefficients \({c}_{3}^{\prime},\dots ,{c}_{m}^{\prime}\). How do we interpret such a change?

The empirical story is roughly the following. At time \(t\), all below-best-practice firms try, with varying success, to imitate the technological leader(s). Moreover, firms change their market shares, some may die and others may enter: all this obviously changes the weights (i.e., the relative frequencies) with which techniques appear. Finally, at least some of the firms try to discover new techniques, prompted by the perception of innovative opportunities, irrespective of whether relative prices change or not (for the sake of illustration, in Fig. 11, the firm which mastered the technique labelled 3 succeeds in leapfrogging and becomes the technological leader, while \(m\) is now the marginal technique).

Statistically, it is rather easy to represent the foregoing dynamics by what we could call evolutionary accounting.

The fundamental evolutionary idea is that distributions (including, of course, their means, which end up in sectoral and macro statistics) change as a result of (i) learning by incumbent entities, (ii) differential growth (i.e., a form of selection) of the incumbent entities themselves, (iii) death (indeed, a different and more radical form of selection), and (iv) entry of new entities. Favoured by the growing availability of micro longitudinal panel data, an emerging line of research investigates the properties of decompositions of whatever mean sectoral performance variable, typically productivity of some kind, of the following form, or variations thereof:

$$\Delta \Pi_{t} = \sum_{i} s_{i}\left(t-1\right)\Delta \Pi_{i}\left(t\right) + \sum_{i} \Pi_{i}\left(t-1\right)\Delta s_{i}\left(t\right) + \sum_{e} s_{e}\left(t\right)\Pi_{e}\left(t\right) - \sum_{f} s_{f}\left(t-1\right)\Pi_{f}\left(t-1\right) + \text{some interaction terms}$$
(5)

where \(\Pi\) are the productivities (or, for that matter, some other performance variables), \(s\) are the shares of each firm in the industry total, while \(i\) is an index over incumbents, \(e\) over entrants, and \(f\) over exiting entities.

The first term stands for the contribution of firm-specific changes holding shares constant (sometimes called the within component), the second one captures the effects of the changes in the shares themselves, holding initial firm productivity levels constant (also known as the between component), and the last two take up the effects of entry and exit, respectively.
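A minimal sketch of such ‘evolutionary accounting’ on toy firm-level data follows; the data layout (firm mapped to share/productivity pairs) and the lumping of residual cross terms into a single ‘interaction’ entry are assumptions of this illustration, not a canonical implementation.

```python
# Decompose the change in mean (share-weighted) productivity into within,
# between, entry and exit components, in the spirit of Eq. (5).

def decompose(period0, period1):
    incumbents = period0.keys() & period1.keys()
    entrants = period1.keys() - period0.keys()
    exiters = period0.keys() - period1.keys()

    within = sum(period0[i][0] * (period1[i][1] - period0[i][1]) for i in incumbents)
    between = sum(period0[i][1] * (period1[i][0] - period0[i][0]) for i in incumbents)
    entry = sum(s * prod for (s, prod) in (period1[e] for e in entrants))
    exit_ = -sum(s * prod for (s, prod) in (period0[f] for f in exiters))

    mean0 = sum(s * prod for (s, prod) in period0.values())
    mean1 = sum(s * prod for (s, prod) in period1.values())
    interaction = (mean1 - mean0) - (within + between + entry + exit_)
    return {"within": within, "between": between, "entry": entry,
            "exit": exit_, "interaction": interaction, "total": mean1 - mean0}

# firm -> (market share, productivity); toy numbers only
t0 = {"A": (0.50, 10.0), "B": (0.30, 8.0), "C": (0.20, 6.0)}   # C will exit
t1 = {"A": (0.45, 11.0), "B": (0.35, 8.5), "D": (0.20, 9.0)}   # D enters
print(decompose(t0, t1))
```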

However, the standard theoretical story is quite different. One can always take the mean at \(t\) over all the firms in the industry, \(C\), and analogously take the mean at \(t^{\prime}\), \(C^{\prime}\). Then, it is always possible to draw two imaginary “isoquants” \(I\) and \(I^{\prime}\) passing through these means. Further assume that \(C\) and \(C^{\prime}\) are equilibria (what else could they be?!). Then, of course, we may call their shift “technical progress” and try to tackle the equally imaginary question of whether the observed changes in the mean are, again, movements along the imagined curve or movements of the curve itself. Again, the self-inflicted identification problem.

All this argument does not mean that the dynamics of the means does not matter. On the contrary, it is the latter which basically determines the dynamics of average prices. Concerning the underlying patterns of secularly increasing productivities, suffice it to recall the classics, from Kuznets to Maddison to C. Freeman, and a few seminal others.

Indeed, the links between supply conditions and prices are extremely robust. Basically, unit variable costs, in primis unit labour costs, secularly drive prices. See Figs. 12, 13, and 14, showing the dynamics of the indices of unit labour costs—that is, current labour costs divided by a proxy of labour productivity—and the producer price indices in manufacturing and in some illustrative manufacturing sectors, even over much shorter time spans.

Fig. 12 Own elaboration using STAN OECD.Stat

Fig. 13 Own elaboration using STAN OECD.Stat

Fig. 14 Own elaboration using STAN OECD.Stat

In fact, we are fully back to the classics—Smith, Ricardo, Marx, etc.—with a cost-based interpretation of price levels and dynamics, with no reference to any imaginary psychological construct, either on preferences or on notional supply schedules.

It is crucial to emphasize that all the foregoing sector-level observed statistics are averages over quite heterogeneous micro entities.

That is

$${P}_{j}=\sum_{i}{s}_{i}{P}_{i}=\left[\sum_{i}{s}_{i}{INT}_{i}+\sum_{i}{s}_{i}\frac{{w}_{i}}{{\pi }_{i}}\right]\left(1+\sum_{i}{s}_{i}{\mu }_{i}\right) $$
(6)

where the levels of prices, unit costs and margins of sector \(j\) are weighted averages (with weights given by the firms’ shares \({s}_{i}\)) over widely dispersed micro values across the different firms \(i\) (as a first approximation, neglecting the cross terms among the micro values).

On productivities and margins, see the evidence discussed in Dosi (2023), chapters 6 and 9.

Also the distributions of prices, even for relatively homogeneous commodities, are a far cry from any ‘law of one price’: rather, they tend to be distributed as log-normal (the lower bound) or Pareto (the upper bound) distributions: cf. Coad (2009). A striking illustration is the distribution of the price of a bottle of Coca Cola across counties in the US: cf. Fig. 15. More generally, on the micro evidence on product-level price distributions see, among others, also Syverson (2007), Roberts and Supina (1996), and Beaulieau and Mattey (1999). The evidence is extremely robust: the reader is invited, however, to go through the attempts in a few of the contributions to rationalise it as ‘equilibrium departures’ from ‘the law’ (?) of one price, and judge their success in doing so.

Fig. 15 Price of bottles of Coca Cola across US counties, 1993. Source: Elaborations by Luigi Campiglio on data from US Bureau of Census and American Chamber of Commerce Association (ACCRA)

Does anyone find them convincing? Do we need such rationalisations at all? What more do they tell us?

It is very important to notice that the argument so far has nothing to do with the observation that, over time, price variations may correspond to demand variations of the opposite sign. Plausibly, price variations due to, e.g., changes in the conditions of production may yield variations in the demand for the product whose price has changed, essentially due to variations in the budget constraints across populations of heterogeneous agents/consumers. However, the interpretation of such dynamics does not involve demand and supply curves of any kind. (“May” here is the crucial verb. Indeed, one may also observe prices and quantities moving in the same direction: just think of oil prices and quantities since the 1970s. The standard interpretation is: “easy!, it depends on elasticities of substitution…”, but the trouble is that the latter are nowhere to be seen as behavioural or technological processes.)

4 Aggregation and multi-commodity economies

So far, we have basically dealt with partial disequilibrium dynamics (at least in the sense given by standard theory to the notion of ‘equilibrium’).

But what about the properties of multi-sector multi-product economies?

It is not possible to discuss here macroeconomics at any length. Let me just make a few general remarks (see however appendix A for some technical points).

If anything, at macro level, aggregate demand and supply curves are even more farfetched.

The ‘benchmark model’ in economic theory is the so-called ‘perfectly competitive economy’, formalized in the General Equilibrium model (see the classic Arrow & Hahn, 1971). In this framework, a large number of actors interact through a price mechanism by specifying what they are willing to buy or sell at any given price: a single price if we are talking about the market for one good, or a vector of prices if we are talking about the market for several goods. On the consumption side, in this simple setting the participants have well-defined preferences over any imaginable quantity of the good in question or, in a situation where many goods are being produced and sold, over any imaginable ‘bundle’ of goods that could be proposed to them. The prices of the goods that are consumed, and those of the goods that go into producing them, are known to everyone, and what constrains people and firms are the prices of all these goods, including the price of labour. In the simplest version of this model, people, who are referred to as consumers, sell their labour to firms, which produce the goods that individuals then purchase and consume. The firms wish to maximise their profits, and individuals wish to purchase the ‘best’ bundle of goods available according to their preferences. When the prices are just such that the quantities of all the goods demanded by the consumers are equal to the quantities supplied by the firms, this is referred to as a ‘market equilibrium’.

In such a market, people and firms choose their actions only on the basis of the market prices and have no influence on how those prices are determined. The prices are announced, but it is not specified in the simple model who exactly announces them, and how they are calculated so that the market finds itself (?) in equilibrium.

The argument that was used in the past to explain the ‘Invisible Hand’ was that the latter is the synthetic name of the magic that will get a market or an economy, through competition or some other process, from an out-of-equilibrium state to an equilibrium one. However, despite centuries of effort, one cannot show that there is a process that would lead a market from any state to an equilibrium. The search for such a process died out in the 1970s with the famous results of Sonnenschein (1972), Mantel (1974) and Debreu (1974), who showed that no ‘natural’ adjustment process for prices could be shown to be bound to reach an equilibrium.

Basically, underlying the ‘Invisible Hand’ (which, to repeat, is invisible because it is not there), there is an even more invisible ‘excess demand function’, which may cut across any purported fixed point of a purported general equilibrium, downward or upward, from basically anywhere!

Still worse, Saari and Simon (1978) showed that any process which would lead to the general equilibrium would require an infinite amount of information. Even in the most limited and abstract model of a market the Invisible Hand could not do its job.

And all this concerns economies characterized, on the production side, by non-increasing returns with standard convex production possibility sets. Conversely, in any economy wherein information and knowledge play any role, the standard equilibrium notions lose any relevance. Even neglecting the features of technologies which are different from pure ‘information’ (on which more in Dosi, 2023, chapter 3), the latter’s non-rival use, its upfront generation costs with increasing returns in use thereafter, and its indivisibility bear far-reaching implications for any theory of economic coordination and change. As Arrow (1996a) emphasizes:

‘[c]ompetitive equilibrium is viable only if production possibilities are convex sets, that is do not display increasing returns,’ but … ‘with information constant returns are impossible’ (p. 647). ‘The same information [can be] used regardless of the scale of production. Hence there is an extreme form of increasing returns.’ (p. 648)

Needless to say, a fundamental consequence of this statement is the tall requirement of providing accounts of economic coordination which do not call upon the properties of competitive equilibria, and even less so on even more mysterious conjectural aggregate curves.

But, given these extremely robust theoretical results, what do the statistics tell us?

For sure, we know that in general aggregation cancels out any isomorphism between micro behaviours and aggregate dynamics: for a simple and quite powerful result on agents all characterized by stationary behavioural rules whose aggregation yields a seemingly autoregressive dynamics, see Lippi (1988).

Granted that, under which conditions does aggregation still yield ‘well-behaved’ notional demand curves?

In a multiple-commodity economy, this is formally explored by Hildenbrand (1994).

One of the basic ideas here is that if the distribution of preferences—irrespective of how they are formed or, for that matter, of how coherent they are—is sufficiently homogeneous across income cohorts, one can establish sufficient conditions to guarantee non-upward-sloping notional demand curves (at each \(t\)), whose fulfilment can be detected from the statistical properties of actual demand conditional on different income classes. And these conditions are quite demanding indeed (more in Appendix A).

The other side of the macro, the supply side, is typically represented via aggregate production functions. They are, in my view, an even more poisonous and misleading construct (even leaving aside any issue concerning the ‘measurement of capital’, central to the so-called ‘capital controversy’ of the ’60s and ’70s: for the younger scholars who have probably never heard of it, see Cohen & Harcourt, 2003).

We have already seen, when discussing micro “supply curves”, the implausibility of anything resembling conventional ‘production functions’. Rather, micro, heterogeneous coefficients are likely to be fixed in the short term. At any given time, a firm is bound by its capabilities in its input combinations and is, so to speak, “stuck with them”. Over time, technical progress in each industry proceeds along quite ordered trajectories, driven by the opportunities and technical constraints associated with each technological paradigm. These trajectories, in turn, are largely invariant to levels of, and changes in, relative prices. Anyone who has visited some factories can appreciate this. To trivialize: in order to produce steel, you need a lot of capital and large amounts of energy, irrespective of relative prices.

Of course, one may always re-write the imaginary curves \(I\) and \(I^{\prime}\) from Fig. 11 as equally imaginary concave ‘production functions’ or, worse still, put them together as the ‘aggregate production function’. Needless to say, all this just further confuses the interpretation of what is actually going on, while reassuring the well-trained economist that whatever observation one sees happens to be an equilibrium after all.

Consider the “dual” implications of all that, in terms of input prices. In the real world, one cannot obtain them by taking the partial derivative of output with respect to inputs, either at the micro or at the industry level. And even less so at the aggregate level.

Putting it another way, there are no curves either linking quantities and costs or relative quantities and input prices.

The determinants of the prices of inputs—different types of labour, energy, intermediate inputs, machinery, etc.—have to be found outside their combinations in the production of individual firms, and also of collections of them. It is easy to find the determinants of all material inputs: their cost of production. It is more challenging to identify the cost of (different types of) labour and the determinants of wages in general (nested in macroeconomic and social factors). However, the reader should be certain of one thing: in no way should input prices be used, so to speak, “backward”, as proxies of elasticities of inter-input substitution in any imaginatively constructed “production functions”.

Do not be misled by the goodness of fit and the significance of “production function estimates”. In fact, given the way they are built, if distributive shares were perfectly constant and learning rates identical across firms, the correlation coefficients should be exactly one! (Recall Shaikh, 1974; Felipe & Adams, 2005; Felipe & McCombie, 2006, 2015.)

All this has far-reaching implications also in terms of purported “biases” of technical change. When one undertakes the foregoing exercise of separating movements along production functions from movements of production functions, if the resulting relative input intensities undergo a disproportionate change, that is taken for a bias in the shift of the production function in favour of or against the input under consideration. Nowadays, it is very fashionable to discuss, for example, “skill-biased” or “routine-biased” technical change.

Just notice that, if our argument is correct, there should be no close link between the dynamics of input prices—including, of course, those of different types of labour—and the relative input intensities, in terms of either price levels or price changes. And this is indeed a proposition which is easily testable whenever one abandons the production-function straitjacket.

There are some general lessons here, as already emphasized: production functions cannot faithfully represent either a firm’s production plans or industry dynamics. Therefore, it makes no sense to derive from them—on the grounds of the firm’s purported optimizing behaviour—either the input demands or their changes with respect to changes in the inputs’ prices. And they do not have any direct implication for income distribution either.

But then what are we left with?

Well, of course, one may continue business as usual, like in the old joke of the drunk man searching for his car key under the streetlamp, knowing that he had lost it somewhere, but that was the only place where there was some light …

Even leaving aside any consideration on the scientific soundness of such an attitude, my claim is that there is a lot of light elsewhere, in places where car keys are more likely to be found.

5 Some ways forward, by way of a conclusion

At the level of firm, industry and market dynamics, I have argued above, there are quite simple ways to account for prices and quantities without invoking any curve going up or down.

More generally, from both the empirical and the modelling points of view, it is imperative to be disciplined by the empirical evidence on the actual working of the markets under consideration. Here and elsewhere, beginning with the assumption that there ‘are’ (?) demand and supply curves might well be highly misleading. Rather, it may be more fruitful to start from the stylization of the actual behaviours of agents (people and, especially, organizations) and of their rules of interaction. Next, we ought to study which properties emerge out of the interactions themselves. In that, we should finally meet H. Simon’s (1997) plea for an empirically grounded economic discipline.

Microeconomic learning and collective processes of selection are the fundamental drivers of industry evolution, which one ought both to characterize statistically and to explore via evolutionary Agent-Based models (as in the genre from Nelson & Winter, 1982, to Dosi et al., 2017a).

Together, I suggest pursuing the investigation of coordination with evolution on the grounds of even higher-dimensional, phenomenologically much richer agent-based models, at the level of both industries and whole economies. Concerning the latter, one possible template entails refining and developing upon the family of ‘Schumpeter-meeting-Keynes’ models (Dosi et al., 2010, 2013, 2015, 2017b, 2018, 2021a, 2021b). Here, suffice it to flag their main features.

Such a family of models clearly meets Solow’s (2008) plea for microheterogeneity: a multiplicity of agents interact without any ex ante commitment to the reciprocal consistency of their actions. These models bridge Keynesian theories of demand generation and Schumpeterian theories of technology-fuelled economic growth. Agents always face opportunities for innovation and imitation, which they try to tap with expensive search efforts under conditions of genuine uncertainty (so they are unable to form any accurate expectations on the relationship between search investment and the probability of successful outcomes). Hence, (endogenous) technological shocks (the innovations themselves) are unpredictable and idiosyncratic.

This family of models builds on evolutionary roots and is also in tune with several insights from the ‘economics of information’ (see Stiglitz & Greenwald, 2014) and from ‘good New Keynesianism’ (cf. e.g. Stiglitz, 1994). It tries to explore the feedback between the factors influencing aggregate demand and those driving technological change. By doing that, it begins to offer a unified framework jointly accounting for long-term dynamics and higher-frequency fluctuations.

The models are ‘structural’ in the sense that they explicitly build on a representation of what agents do, how they adjust, etc. In that, our commitment is to describe microbehaviours ‘phenomenologically’, as close as one can get to the available micro-evidence. Akerlof’s (2002) advocacy of a ‘behavioural microeconomics’, we believe, builds on that notion. In fact, this is one of our fundamental disciplining devices.

In such models, prices and quantities are emergent properties stemming from a multiplicity of out-of-equilibrium interactions. Again, nothing to do with demand and supply curves.

Indeed, a synthetic representation of prices in such a multi-sector economy is some extension of Eq. (3):

$${\varvec{P}}_{j}\left(t\right)=\left[{\varvec{P}}_{j}\left(t-1\right) \cdot {\varvec{A}}\left(t-1\right)+{\varvec{l}}_{j}\left(t-1\right) \cdot {\varvec{w}}_{j}\left(t\right)\right]\left(1+{\varvec{\mu}}_{j}\right) $$
(7)

where P, l, w, \({\varvec{\mu}}\) are the vectors of all prices, unit labour coefficients, wages and margins of sector \(j\), which, to repeat, are averages over heterogeneous micro entities; A is the matrix of intermediate input coefficients (the explicit multi-sector version of INT in Eq. (3)). Input coefficients are lagged for the simple reason that techniques at \(t\) are what they are, inherited from the past, even if, of course, they change through the processes of idiosyncratic learning mentioned above, which we extensively discuss in chapters 5, 6 and 9 of Dosi (2023).
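A toy numerical sketch of the price system (7) follows, with a two-sector input matrix and illustrative parameters (all numbers hypothetical); iterating the map shows prices settling at cost-determined levels, with no demand or supply curve anywhere.

```python
import numpy as np

# One step of the multi-sector cost-plus price system of Eq. (7):
# P(t) = [P(t-1) A(t-1) + l(t-1) * w(t)] (1 + mu), elementwise in the mark-ups.

A = np.array([[0.20, 0.10],      # intermediate input coefficients (lagged)
              [0.15, 0.25]])
l = np.array([0.50, 0.40])       # unit labour coefficients
w = np.array([2.00, 2.00])       # sectoral wages
mu = np.array([0.20, 0.30])      # sectoral average mark-ups

P = np.array([1.0, 1.0])         # arbitrary initial prices
for _ in range(50):
    P = (P @ A + l * w) * (1.0 + mu)

print(np.round(P, 3))            # prices converge to cost-plus-determined levels
```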

Conversely, the realised output stemming from such techniques is divided between wages and profits by processes which also bear fundamental macro dimensions—concerning, e.g., the institutions governing labour and product markets, the degrees and modes of social conflict over income distribution, and foreign exchange policies affecting the international competitiveness of domestic firms.

Again, also in this general disequilibrium setting, prices and quantities are not linked by anything which looks like supply and demand curves. The price levels (better, price distributions) are approximately determined by production conditions, while quantities are driven, on the consumer side, by the socio-economic factors briefly discussed at the beginning of this essay and, on the producer side, by the technical conditions of production. And both are shaped by macroeconomic conditions, including the levels of activity of the system (i.e., the “Keynesian” aggregate demand) and the determinants of income distribution.

Note that the foregoing statistical and modelling exercises do not replace but complement the analyses of how markets work, their architecture and the actual rules of behaviour of the actors therein. In fact, the foregoing broad statistical regularities ought to be understood precisely as emergent properties of the latter structures of interaction, which are indeed their microfoundations.

In all that, of course, there is no fatwa against ‘curves’, but rather an imperative to use them, when appropriate, to describe actual patterns of whatever phenomenon is under observation, instead of as sheer rationalizations of equilibria stemming from the fervid imagination of the theorist.

Admittedly, it is a grand, old and noble ‘evolutionary’ research programme, which links the classics (Smith/Ricardo/Marx) with contemporary microfoundations (Simon + Nelson/Winter + Kirman…) and with macro dynamics (à la Keynes/Kaldor/Kalecki…).

Indeed, a grand program!