1 Introduction

It’s 1953 and economists in the Central Bank of Guatemala are topping up the water tank in their recently purchased Phillips-Newlyn machine, a system of pipes and reservoirs with water flowing through it. The land reform act passed in Guatemala the previous year had redistributed unused land to local farmers. US corporation Wrigley’s, one of the largest buyers of Guatemalan chicle gum, had announced that it would stop imports from Guatemala in protest at the land reform. The economists in the Central Bank wanted to know what effect a decrease in these foreign purchases would have on the national economy. They adjusted the machine to account for the macroeconomic conditions that were taking hold in Guatemala and let the machine reach equilibrium. They then turned the valve marked ‘exports’ to the ‘closed’ position and watched what happened. The flow marked ‘income’ started falling, and the water level in a tank marked ‘surplus balances’ rose, which in turn caused a fall in the curve on a graph marked ‘interest rates’.

Wait. The economists turned a valve in a hydraulic machine that pumped water from reservoir to reservoir. How could this tell them anything about the Guatemalan economy? The practitioner’s reply is that the Phillips-Newlyn machine is a model that represents the Guatemalan economy, and this is why information about the economy can be extracted from the machine. This is correct as far as it goes, but it does not go far enough. In itself, the Phillips-Newlyn machine is just a collection of reservoirs connected by pipes. How does such a system become a model and in virtue of what does it represent the Guatemalan economy?

The Phillips-Newlyn machine is not an isolated case. Plasticine sausages are used as models of proteins, oval-shaped blocks of wood serve as models of ships, mice are used as models of humans, balls connected by sticks function as models of molecules, electrical circuits are studied as models of brain function, autonomous robots are used as models of insect cognition, the camera obscura is proffered as a model of the human eye, metal cylinders filled with hardened magma are investigated as models of volcanoes, a basin with pumps and hoses serves as a model of the San Francisco Bay’s water system, and fluid ‘dumb holes’ model gravitational black holes. In all these cases (and many others that we cannot list here) a material object is used as a model that represents a certain target system. Our question about the Phillips-Newlyn machine is therefore an instance of a general problem: when is a material object a model and in virtue of what does it represent something beyond itself?

Our investigation will be aided by a precise formulation of the problem. The challenge is to fill the blank in ‘M is a scientific representation of T iff ___’, where M is a material model and T a target system. For reasons that will become clear soon we call this the Epistemic Representation Problem. The aim of this paper is to present an answer to this problem and to give a general definition of a model. We focus on material models – physical objects that are used symbolically. Material models are an important class of models, but not all models are of this kind. We briefly discuss other kinds of models in Section 8 and suggest that, mutatis mutandis, the account of representation we develop can be carried over to these.

An acceptable answer to this problem will have to satisfy the following two conditions. The first is the Surrogative Reasoning Condition: models represent their targets in a way that allows scientists to generate hypotheses about them. Many investigations are carried out on models rather than on reality itself, and this is done with the aim of discovering features of the things models stand for. An acceptable theory of scientific representation has to account for how reasoning conducted on models can yield claims about their target systems. This condition motivates the choice of the term ‘epistemic representation problem’. The second is the Possibility of Misrepresentation Condition: an account of scientific representation has to make misrepresentation possible. If M does not accurately represent T, then it is a misrepresentation and not a non-representation. Accuracy is not a part of the concept of representation.

A variety of different positions are available, with Griceanism, similarity accounts, isomorphism accounts, inferentialism, and fictionalism being the most prominent proposals. We refer the reader to (Frigg and Nguyen 2017) for a detailed critical review of the sizeable literature on scientific representation. We take away from this review the message that none of the current accounts provides a satisfactory answer to our problem. Our task here is to formulate a novel account of representation that offers a satisfactory answer to the epistemic representation problem. To put our endeavour into perspective we comment on how our account compares to related accounts, in particular Contessa’s (2007), Weisberg’s (2012, 2013) and Giere’s (1988, 1999, 2010). But for want of space we cannot embark on a critique of the full spectrum of currently available approaches here.

Our constructive endeavour takes as its point of departure Nelson Goodman and Catherine Elgin’s notion of representation-as. The notion was originally introduced in aesthetics (Goodman 1976); Elgin (2010), Hughes (1997) and van Fraassen (2008) have suggested that it is also the way in which models function in science. We agree. However, current formulations of the claim are only signposts indicating a direction of travel, and a fully systematic account of how to use Goodman and Elgin’s tools to answer the epistemic representation problem has not yet been formulated. The aim of this paper is to provide a nuts-and-bolts account of how representation-as works in the case of scientific models, thereby providing the resources to analyse how models function representationally in scientific practice. This requires extending existing accounts in a number of ways. In particular, we provide a general definition of a model based on the notion of an interpretation, generalise the notions of a Z-representation and of exemplification to meet the needs of scientific modelling, and introduce the notion of a key, which connects exemplified properties to ones that are imputed to the target.

The structure of the paper is as follows. In Section 2 we introduce Goodman and Elgin’s notion of ‘representation-as’, which serves as the point of departure for our discussion. In Section 3 we adapt the notion of a Z-representation to the context of scientific modelling and offer a general definition of a model. In Section 4 the notion of exemplification is reconsidered in the context of scientific modelling, and in Section 5 the notion of a key is introduced. In Section 6 we qualify the role of denotation in our account. In Section 7 we draw the loose ends together and formulate a new account of how models represent. We call this the DEKI account, indicating its key components: denotation, exemplification, keying up, and imputation. In Section 8 we offer a few programmatic remarks about how this account could be carried over to non-material models.

2 Representation-as

Many works of art represent their subjects as thus or so. A famous caricature of Winston Churchill represents him as a bulldog. But this type of representation is not limited to caricature: Rembrandt’s Self-Portrait with Two Circles represents the artist as wearing a white hat, and the bronze statue of Margot Fonteyn represents the ballerina as standing on her tiptoes. Goodman and Elgin term this sort of representational relationship representation-as (Elgin 2010; Goodman 1976, 27). In its general form, a representational vehicle X (e.g. a picture or statue) represents a target or subject Y (e.g. a politician or a ballerina) as Z (e.g. a bulldog or a ballerina standing on her tiptoes). Goodman and Elgin develop this notion in a string of publications (both joint and single-authored). When referring to views shared by both authors, we use the acronym ‘GE’.

Talk about ‘representation’ is ambiguous between two kinds of representation. The caricature represents Churchill as a bulldog; at the same time it is a representation of Churchill. An account of representation has to individuate these notions and avoid equivocating on ‘representation’. To this end observe that Churchill’s passport photograph, the name ‘Winston’, and his nickname ‘The British Bulldog’ are also all representations of him. We mark the relation with a hyphen and call it representation-of. GE submit that representation-of is analysed in terms of denotation: X is a representation-of Y iff X denotes Y. A name is a representation-of its bearer because the name denotes the bearer, a picture is a representation-of its subject because it denotes its subject, and so on (we return to denotation in Section 6).

Representation-of is distinct from representation-as. In fact representation-of is a necessary but insufficient condition on X representing Y as Z (Elgin 2010, 2; Goodman 1976, 28). It is necessary because it establishes that representational vehicles are about their subjects. Denotation picks out the subject and ensures that the vehicle points to it. The caricature represents Churchill because it denotes him. But denotation is insufficient for representation-as. The caricature does not just denote Churchill; it represents him as a bulldog, and nothing in the concept of denotation would help explain how the caricature does so.

If denotation is a necessary condition, what are we to say about pictures that fail to denote? Böcklin’s Isle of the Dead represents an islet dominated by cypress trees with a boatman rowing a white figure into the cove. But there is no such islet, and thus there is no such islet to be denoted. Are we to deny that such pictures are representations at all? GE respond by distinguishing between being a picture of a soandso and being a soandso-picture. A picture showing a unicorn is a unicorn-picture but not a picture of a unicorn, and Böcklin’s painting is an islet-picture despite not being a representation-of any islet. So we have to distinguish between being a Z-representation and being a representation-of a Z. The former is an unbreakable one-place predicate; the latter is a two-place relation that holds between a representational vehicle and its subject.

There is a complete disconnect between what kind of representation X is and what X is a representation of: the kind of X does not determine what X denotes, and the denotation of X does not determine its kind. Not every islet-representation denotes an islet, and islets can be denoted by representations that aren’t islet-representations. Such representational practices are common in different contexts. In Dutch still life a snail-picture denotes humility and in Bollywood movies a two-intertwined-roses-representation denotes the couple being intimate.

What does it take to be a Z-representation? In the case of pictorial representation this is a much-discussed issue. So-called perceptual accounts hold that a picture X portrays a Z if, under normal conditions, an observer would see a Z in X (Lopes 1996). GE take a different route and explain Z-representation in terms of what they call genres (cf. Elgin 2010, 2–3; Goodman 1976, 23). But how pictures represent need not occupy us here. Our problem is how scientific models work, and theories of pictorial representation do not carry over, at least in any straightforward manner, to the scientific case, irrespective of what these views are. In the next section we develop an account of scientific Z-representations that is independent of anything one would (or wouldn’t) want to say about pictures.

Let us now introduce the concept of exemplification, which is crucial to understanding representation-as. An item exemplifies a property P if it at once instantiates P and refers to it. To instantiate P without referring to it is merely to possess P, and to refer to P without instantiating P is to represent P in a way other than by exemplifying it. An item that exemplifies a property is an exemplar (Elgin 1996, 171; Goodman 1976, 53). Straightforward examples of exemplification are the sample cards supplied by commercial paint companies. These cards instantiate various colours, and refer to the colours instantiated (Elgin 2007, 39).

Instantiation is a necessary condition for exemplification. But the converse does not hold: not every property that is instantiated is also exemplified. Exemplification is selective (Elgin 1983, 71; 2010, 6). The sample card exemplifies redness, but not rectangularity or being an inch long, even though it instantiates these properties. Only selected properties are exemplified. But there is nothing in the nature of an object that effects that selection; no properties are intrinsically more important than others.

In the case of the Phillips-Newlyn machine, the model exemplifies how the water flows through the machine, and the relative height of the liquid in its various tanks through time. It doesn’t exemplify being made of Perspex or being 2.5 m tall. And it’s not just properties that are irrelevant to the workings of the machine that are not exemplified. In order for the water to move around the model at all, the machine requires a motor that pumps water from the floor-level tank up to the top of the machine, and it relies on the force of gravity to draw the water downwards through the various pipes and reservoirs. Although these aspects of the machine are essential to its workings, they do not correspond to any economic feature (Morgan and Boumans 2004, 386). They are not selected as relevant features of the machine in the context of using it as an economic model, and so are not exemplified.

Turning an instantiated property into an exemplified one requires an act of selection, which usually depends on the relevant context. The same sample card can exemplify rectangularity if used in a geometry class. The Phillips-Newlyn machine could exemplify how its pump and gravity combine to generate a circular flow of liquid if, for example, it were used in a plumber’s showroom to illustrate a new type of pump. The specifics vary from case to case, but at the level of a general theory nothing depends on these details. One aspect, however, is crucial: exemplars provide epistemic access to the properties they exemplify (Elgin 1983, 93). So to be exemplified a property not only has to be selected; it also has to be epistemically accessible. We say that a property that satisfies these criteria is highlighted. These considerations can be summarised in the following definition:

Exemplification: X exemplifies P in a context C iff
(i) X instantiates P, and
(ii) P is highlighted in C.
P is highlighted in C iff
(α) C selects P as a relevant property, and
(β) P is epistemically accessible in C.

A sample card exemplifies, say, a certain shade of red because it instantiates it and, in the context of a paint shop, that shade of red is selected as relevant and is epistemically accessible (a sample card too small to see with the naked eye would not exemplify redness).
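Since exemplification does real work in what follows, it may help to see the definition rendered operationally. The following toy sketch (in Python; the function names and property sets are ours and purely illustrative) does nothing more than restate conditions (i)–(ii) and (α)–(β):

    # A toy rendering of the definition of exemplification (illustrative only).
    # A context C is represented by two sets: the properties C selects as
    # relevant and the properties that are epistemically accessible in C.

    def highlighted(prop: str, selected: set, accessible: set) -> bool:
        return prop in selected and prop in accessible            # (α) and (β)

    def exemplifies(instantiated: set, prop: str,
                    selected: set, accessible: set) -> bool:
        return prop in instantiated and highlighted(prop, selected, accessible)  # (i) and (ii)

    # The paint-shop sample card:
    card = {"red", "rectangular", "one inch long"}
    shop_selects = {"red"}
    visible = {"red", "rectangular", "one inch long"}
    print(exemplifies(card, "red", shop_selects, visible))          # -> True
    print(exemplifies(card, "rectangular", shop_selects, visible))  # -> False: instantiated, not selected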

Exemplification requires reference to a context C. A rigorous definition of a context is beyond the scope of this paper (indeed we doubt there is such a definition). For our purposes it is sufficient to think of a context as a certain set of problems and questions that are addressed by a group of research scientists using certain methodologies while being committed to certain norms (and, possibly, values). These factors determine which of X’s epistemically accessible properties are representationally relevant.

A key insight on the way to a definition of representation-as is that Z-representations can, and often do, exemplify properties associated with Zs. The Churchill caricature is a bulldog-picture and it exemplifies bulldog-properties like aggressiveness and relentlessness. The Fonteyn statue is a dancer-representation and it exemplifies the dancer-properties grace and being on tiptoes.

Some objects do not literally instantiate the properties they exemplify. A caricature does not literally instantiate relentlessness (it’s a piece of paper) and a statue cannot stand on tiptoes (it’s a piece of metal). GE acknowledge this and say that these are examples of metaphorical exemplification (Elgin 1983, 81). A painting can literally instantiate greyness; it can metaphorically instantiate sadness (Goodman 1976, 50–52). Metaphorically instantiated properties can be exemplified in the same way in which literally instantiated properties are: by being highlighted. In the next section we replace metaphorical instantiation by the notion of instantiation under an interpretation, which is better suited to analyse scientific models.

For X to represent Y as Z it is not enough for X to denote Y and also be a Z-representation exemplifying certain Z-properties. To represent Churchill as a bulldog it is not sufficient that the caricature denotes Churchill and is a bulldog-representation exemplifying certain bulldog properties. On top of that the caricature has to impute these properties to Churchill (Elgin 2010, 10). Thus we arrive at the following definition of representation-as:

Representation-As (RA): X represents its subject Y as Z iff:
(i) X denotes Y.
(ii) X is a Z-representation and exemplifies Z-properties \( P_1, \dots, P_n \).
(iii) \( P_1, \dots, P_n \) are imputed to Y.

Scientific models and pictures both represent their targets as being thus or so. Indeed pictures and statues meet the conditions of adequacy on scientific representation provided earlier: they can be used to reason about their subjects and they can misrepresent them. This observation suggests that RA would double as an account of scientific representation if we take X to be a model, Y a target system and Z a specification of what kind of model X is. Analysing the Phillips-Newlyn machine in these terms yields: (i) the machine denotes the Guatemalan economy; (ii) it is an economy-representation exemplifying properties like a decrease in exports leading to a decrease in income (\( P_1 \)) and in interest rates (\( P_2 \)); and (iii) \( P_1 \) and \( P_2 \) are imputed to the Guatemalan economy.

This is a good start. But each of the three conditions stands in need of either articulation or revision (or both) in order to operate successfully in the context of scientific modelling. We now discuss how the conditions have to be overhauled to meet the needs of scientific modelling. In Section 7 we pull the threads together and formulate what we call the DEKI account of representation.

3 Z-representations and scientific models

It is a crucial element of RA that the Phillips-Newlyn machine is an economy-representation. But in the scientific context it is by no means clear how we can categorise objects as Z-representations. Unlike in the case of photographs or paintings, an appeal to what objects look like under normal conditions is a non-starter, and reference to genres at the very least requires unpacking. Reservoirs and pipes, plasticine sausages, blocks of wood, mice, balls and sticks, electric circuits, robots, the camera obscura, fluids travelling faster than the local speed of sound, and metal cylinders are not in any obvious way classified as belonging to a particular genre of representations, and most objects of this kind don’t function symbolically at all. In the context of material models an alternative approach to understanding Z-representation is needed.

We call the material substratum of a model the ‘base’ O. The base of the Phillips-Newlyn model is the system of pipes Phillips and Newlyn built. We now use the letter ‘O’ rather than ‘X’ to emphasise that we are dealing with a material object, and we refer to properties of O as O-properties. The question then is: what turns O into a Z-representation? An appeal to O’s intrinsic features does not help. There is nothing in water pipes or electric circuits that makes them economy-representations or brain-representations, and the mouse running through the kitchen isn’t a representation at all. In fact O’s intrinsic characteristics do not regulate how the object functions symbolically. One might say that someone using O as such is what turns it into a Z-representation. There is a grain of truth in this, but it merely pushes the question one step back: what does it take to use an O as a Z-representation?

To answer this question it is illustrative to see how Phillips describes (a precursor to) his machine when introducing it to the wider economics community:

‘the production flow of a commodity is represented by the flow of water into a tank. This flow is controlled by a valve […] The production flow goes into the tank containing stocks, from which is drawn the consumption flow, controlled and measured by a second valve similar to the first […] Price is assumed to be determined at any instant by the quantity of stocks, represented by the quantity of liquid in the tank, and the demand schedule for them, represented by the capacity of the tank at different levels’. (1950, 284)

And then later in the paper, when describing how, given the dimensions of the tanks and valves, scales for relevant quantities are constructed, he describes the relationship between O-properties and Z-properties as follows:

‘Assume that the price scale is so chosen that the required relation between stocks and price of a commodity is reproduced on the model when one cubic inch of water is made equivalent to one hundred tons of the commodity’. (1950, 285, emphasis added)

So Phillips turns a pipe system into an economy-representation by taking properties of the machine to ‘represent’, or be ‘equivalent to’, economic properties. In other words, he turns the pipe system into an economy-representation by interpreting certain selected O-properties as Z-properties. Morgan and Boumans (2004, 383) specify the physical properties of the machine that Phillips interpreted as economic elements. For example, the flow of water is interpreted as the production flow of a commodity; the capacity of tanks is interpreted as the quantity of stocks; and so on.

‘Interpretation’ is a flexible term that can mean different things to different people. It is therefore important to give an exact definition of what we mean by interpretation in the current context. Let \( \mathcal{O}=\{O_1,\dots,O_n\} \) and \( \mathcal{Z}=\{Z_1,\dots,Z_n\} \) be sets of relevant O-properties and Z-properties respectively. One could then define an interpretation as a bijective function \( I:\mathcal{O}\to \mathcal{Z} \). While correct in principle, this definition is not easy to handle in practice because it does not explicitly distinguish between quantitative and qualitative properties. Properties like ‘being a reservoir’ are qualitative properties: they are all-or-nothing properties in that they either are or aren’t instantiated. By contrast, properties like ‘the flow of water is x litres per minute’ are quantitative properties. In that case we need to distinguish carefully between the property and its values. To make this distinction explicit we refer to the property as the variable and to a specific quantity as the value. We denote the former by upper-case letters and the latter by lower-case letters. We furthermore adopt the convention that the members of \( \mathcal{O} \) and \( \mathcal{Z} \) are either qualitative properties or variables. So \( O_1 \) could be ‘the flow of water through the second valve’, and \( o_1 \) could be 2.1 litres per minute. This suggests the following definition of an interpretation:

O-Z-Interpretation: Let \( \mathcal{O}=\{O_1,\dots,O_n\} \) and \( \mathcal{Z}=\{Z_1,\dots,Z_n\} \) be sets of O-properties and Z-properties respectively, whereby all members of either set are qualitative properties or variables. An O-Z-interpretation is a bijection \( I:\mathcal{O}\to \mathcal{Z} \), \( Z_i = I(O_i) \) for i = 1, …, n, such that:
(i) Properties are mapped onto properties of the same kind (that is, qualitative properties are mapped onto qualitative properties and variables onto variables).
(ii) For every variable \( O_i \in \mathcal{O} \) with \( I(O_i) = Z_i \), there is a function \( f_i : o_i \mapsto z_i \) associating a value of \( Z_i \) with each value of \( O_i \).
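To make the definition concrete, here is a minimal Python sketch of an O-Z-interpretation as a one-to-one mapping with per-variable value functions. The class and property names are ours and purely illustrative; the one substantive ingredient, the linear value function, is Phillips’ own equivalence of one cubic inch of water to one hundred tons of the commodity, quoted above.

    # A minimal sketch of an O-Z-interpretation (illustrative names, not a fixed API).
    from dataclasses import dataclass
    from typing import Callable, Dict, Optional

    @dataclass(frozen=True)
    class PropertyPair:
        o_property: str                  # an O-property (e.g. a water flow)
        z_property: str                  # the Z-property it is interpreted as
        # value function f_i for variables; None for qualitative properties
        value_function: Optional[Callable[[float], float]] = None

    class Interpretation:
        """A bijection I from O-properties to Z-properties (condition (i)),
        with a value function for every variable (condition (ii))."""
        def __init__(self, pairs):
            self._by_o: Dict[str, PropertyPair] = {p.o_property: p for p in pairs}
            z_props = [p.z_property for p in pairs]
            assert len(set(z_props)) == len(z_props), "I must be one-to-one"

        def pair_for(self, o_property: str) -> Optional[PropertyPair]:
            return self._by_o.get(o_property)

        def z_value(self, o_property: str, o_value: float) -> float:
            """Translate a value of an O-variable into a value of its Z-variable."""
            pair = self._by_o[o_property]
            assert pair.value_function is not None, "qualitative properties take no values"
            return pair.value_function(o_value)

    # Phillips (1950, 285): one cubic inch of water ~ one hundred tons of commodity.
    phillips = Interpretation([
        PropertyPair("water flow (cubic inches/min)",
                     "production flow of the commodity (tons/min)",
                     value_function=lambda o: 100.0 * o),
        PropertyPair("being a tank", "being a stock of the commodity"),  # qualitative
    ])
    print(phillips.z_value("water flow (cubic inches/min)", 2.5))  # -> 250.0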

In specific cases one may want to impose further restrictions on allowable functions. In the case of the Phillips-Newlyn machine the function associating water flow with commodity flow is assumed to be linear. However, such restrictions are idiosyncratic to the context and should not be built into a general definition. We are now in a position to define a Z-representation.

Z-representation: A Z-representation is a pair 〈O, I〉 where O is an object and I is an O-Z-interpretation.

Colloquially one can call an O a Z-representation, and as long as it is understood that there is an interpretation in the background no harm is done. It is important, however, that in the final analysis a Z-representation is a pair 〈O, I〉. What kind of representation O is crucially depends on I, and different interpretations produce different representations. One could, for instance, interpret the reservoirs as schools and universities and the flow of water as the movement of students through the system. Under that interpretation the same machine would be an education-system-representation.

A model, then, is simply a Z-representation such that O is chosen, in a certain context, to be used as a base.

Model: A model M is a Z-representation: M = 〈O, I〉, where O is an object that is used as a base in a certain context.

An immediate consequence of this definition is that models need not have a target. Far from being an unwelcome eccentricity, this is an advantage of our account. It provides a natural answer to how models without targets represent: they are Z-representations that are not also representations-of a target. The Phillips-Newlyn machine would be an economy-representation even if it had never been used as a representation of an actual economy (Guatemalan or otherwise), just as an architectural model of Gaudi’s Hotel Attraction is a Hotel-Attraction-representation even though the hotel has never been built. In fact, Phillips and Newlyn’s own motivations for building the machine were not necessarily to represent any particular economy per se. Rather charmingly, Phillips (1950, 283) describes building the machine in order to help ‘students of economics who, like [himself], are not expert mathematicians’ understand the mathematical equations that were increasingly being used in macroeconomics at the time. But of course nothing prevented the machine from then being used to represent a particular economy (in the same way in which the mathematical model could have been used).

Sometimes the base is a ready-made. Worms, mice, and electric circuits predate their use as models. Other times the base is tailor-made for the situation, as in the case of the Phillips-Newlyn machine. The choice of a base is a creative act. It may be informed by the interpretation that one would like to impose on an object, but it is in no way determined by it. In principle any O can serve as the base of any model.

Bases can be chosen freely and our notion of interpretation imposes no restrictions on the choice of either O-properties or Z-properties. Did we open the floodgates to arbitrariness? No. Modellers will choose a base that exhibits interesting behaviour. Hughes rightly observes that (what we call) the base is a ‘secondary object that has, so to speak, a life of its own’ and which has ‘an internal dynamic whose effects we can examine’ (1997, 331). The Phillips-Newlyn machine is a case in point. It has a highly complex behaviour that economists study. Even though they could have built the machine differently, or they could have chosen to study another object altogether, once the choice is made, what they see is far from arbitrary. Likewise for interpretations. While one is initially free to choose O-properties and Z-properties, once a choice is made, representational content is constrained. If there are three litres of water in the tank and the interpretation says that the tank holds foreign-owned balances and one litre of water corresponds to a trillion pounds, then the model says that there are three trillion pounds held outside of the UK. Free choices, once made, are highly constraining. This is why models are epistemically useful. Scientists study this constrained behaviour and thereby gain insight into their subject matter.
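In terms of the sketch above, the constraint is mechanical once the interpretation is adopted. Using the foreign-owned-balances reading just mentioned (the labels are again ours):

    # Once 'one litre of water = one trillion pounds' is adopted, the model's
    # content is no longer up to us: three litres of water says three trillion pounds.
    balances = Interpretation([
        PropertyPair("water in tank (litres)", "foreign-owned balances (pounds)",
                     value_function=lambda o: o * 1e12),
    ])
    print(balances.z_value("water in tank (litres)", 3.0))  # -> 3000000000000.0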

A few observations about Z-representations are in order. First, Z can be a concept, a notion, an idea, or a fantasy – anything that can belong to a certain domain of discourse. The important point is that Z need not be a rendering of a target system. Indeed Z need not be realistic at all. A drawing can be a minotaur-representation, a movie can be a Darth-Vader-representation, and a model can be a perpetual-motion-machine-representation or a superluminal-beam-travel-representation, yet none of these exist. There are no limits to the choice of Z; anything that makes sense in a certain context is in principle acceptable. This frees Z-representations from the dictate of targets. In this our account differs from Contessa’s (2007, 58), who uses the term ‘interpretation’ to describe a one-to-one association of the relevant objects, relations, and functions found in models and their targets. It also differs from Weisberg’s (2013, 39–40), who introduces the notion of a ‘construal’, which includes an ‘assignment’ (denotation relations between a model and its target) and an ‘intended scope’ (specifying which aspects of models are intended to be taken seriously in terms of their targets), to capture an idea similar to Contessa’s.

Second, our definition does not require that all of O’s properties are collected in \( \mathcal{O} \); neither does it require that \( \mathcal{Z} \) contain a complete list of Z-properties. All that is required is that there is at least one property in each set.

Third, interpretations aren’t set in stone. In different contexts the properties that feature in \( \mathcal{O} \) and \( \mathcal{Z} \) (and the interpretation function itself) may change, and existing interpretations can be extended. The Phillips-Newlyn machine often leaked water onto the floor when it was run. Originally this was seen as a technical problem with the machine. However, at some point economists realised that this was actually an interesting feature and interpreted it as the flow of money from the regular economy into the black economy (Morgan and Boumans 2004, 397 fn. 14).

4 Exemplification revisited

In Section 2 we saw that an item exemplifies a property P iff it at once instantiates P and P is highlighted in the context under consideration. What we are steering towards is an account in which a model M = 〈O, I〉 exemplifies certain properties. But M does not seem to accommodate exemplification: the instantiation condition is mostly unattainable, while highlighting seems mostly trivial. This situation needs to be rectified.

Let us begin with instantiation. The problem is that if O is not a Z, then the model base O does not, at least in general, instantiate properties associated with Z, and thus cannot exemplify them. The Phillips-Newlyn machine instantiates water flows but not commodity flows, and so it can never exemplify the latter. This unfortunate conclusion can be avoided by noting that nothing important about exemplification requires that an item literally instantiate P.

An interpretation establishes a one-to-one correspondence between O-properties and Z-properties and so we can introduce the concept of instantiation-under-interpretation-I (I-instantiation for short):

I-instantiation: Let O be an object and I an O-Z-interpretation. A model M = 〈O, I〉 I-instantiates a Z-property P iff O instantiates an O-property P’ which satisfies the following condition: P’ is mapped onto P under I, and, if P and P’ are variables, then I contains a function f such that p = f(p’) for all values p’ of P’.
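Continuing the Python sketch from Section 3, I-instantiation amounts to a look-up-and-translate check (again under our illustrative names):

    # M = (O, I) I-instantiates Z-property P iff O instantiates an O-property P'
    # with I(P') = P and, for variables, p = f(p') for the instantiated value p'.
    from typing import Optional

    def i_instantiates(o_state: dict, interpretation: Interpretation,
                       z_property: str, z_value: Optional[float] = None) -> bool:
        """o_state maps the O-properties O instantiates to their values (None if qualitative)."""
        for o_prop, o_value in o_state.items():
            pair = interpretation.pair_for(o_prop)
            if pair is None or pair.z_property != z_property:
                continue
            if pair.value_function is None:   # qualitative: instantiation suffices
                return True
            return interpretation.z_value(o_prop, o_value) == z_value
        return False

    state = {"water flow (cubic inches/min)": 2.5}
    print(i_instantiates(state, phillips,
                         "production flow of the commodity (tons/min)", 250.0))  # -> True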

The idea that models can exemplify properties that they I-instantiate raises an interesting question about the truth conditions of claims like ‘the Phillips-Newlyn machine exemplifies a falling interest rate after a decline in exports’. Since the model, as a hydraulic machine, only I-instantiates this economic property, it appears that it cannot be strictly speaking true that it exemplifies it. One way to deal with this would be to appeal to something like Sainsbury’s idea of truth relative to a presupposition (2010, 143–151; 2011). Even though it is, again strictly speaking, false that the model exemplifies this property, it is true relative to the interpretation in the same way that it is true relative to a presupposition that Holmes lives at 221B Baker Street. Alternatively, one might appeal to Walton’s (1990) notion of pretense. In a game of make-believe with the interpretation as a principle of generation we are prescribed to imagine that the model has the economic property, in the same way in which we are prescribed to imagine that a detective lives at such an address. Both of these options provide us with a notion of truth relative to an interpretation that captures conditions under which sentences like ‘the Phillips-Newlyn machine exemplifies a falling interest rate after a decline in exports’ are correct.

For our purposes it doesn’t matter how this issue is resolved, so we don’t commit to a particular approach. It is sufficient to liberalise the definition of exemplification to allow objects that I-instantiate P to exemplify P (simply by replacing instantiation by I-instantiation). We can now say that the Phillips-Newlyn machine I-instantiates commodity flows, and it exemplifies particular flows if the particular flow is highlighted.

The worry with instantiation was that it was too hard to come by; the opposite problem seems to beset highlighting. The concern is that all of the properties in \( \mathcal{Z} \) have been selected as relevant, and thus all are exemplified. But this would trivialise the notion of exemplification. Fortunately this objection is based on a misapprehension of the workings of an interpretation. An object can exemplify only properties that are covered by an interpretation, but this does not imply that every property covered by an interpretation is ipso facto exemplified. This is because interpretations can, and often do, cover O-properties we are unaware of or uninterested in. There is a small white plastic pipe in the lower right corner of the machine. The flow of water through it is invisible and we haven’t paid any attention to it. This flow is covered by the interpretation, but it is not highlighted and therefore not exemplified. Or consider again the case of the Guatemalan economists. They may have been particularly interested in the change in the equilibrium values once the appropriate change has been made to the valve marked ‘foreign exports’. This means that the machine would exemplify this property. But in other contexts, this property might not be exemplified at all. For example, when explaining the workings of the machine, Phillips himself ignores the impact of foreign imports and exports until the end of his paper (1950, Section 3). This means that, although I-instantiated, the relevant Z-properties would not be highlighted, and thereby would not be exemplified even though they have been covered by the interpretation all along.

Whether or not a Z-property covered by the interpretation is exemplified depends on whether we have epistemic access to the corresponding O-property and on whether the context selects that O-property as a focal point of the investigation. The adoption of an interpretation in no way determines that this has to be the case. O, together with the interpretation, provides a ‘menu’ of Z-properties that the model I-instantiates. Whether or not any of these properties is exemplified depends on the epistemic purposes of those using the Z-representation.

5 Imputation and keys

So far we have focused on what turns an object into a Z-representation. GE’s observation that Z-representations do not have to represent any Z remains true in the context of scientific representation. Yet at least some models do represent particular targets. The initial definition of representation-as states three conditions for this to happen: the model has to denote a target T and it has to exemplify properties that are then imputed to T (we write ‘T’ rather than ‘Y’ from now on to make notation more mnemonic).

Imputation can be analysed in terms of property ascription. The model user may simply ascribe to the target system the properties exemplified by the model, and this is what establishes that the model represents the target as having those properties. In this way models allow for surrogative reasoning: imputing a property to a target generates the hypothesis that T has that property. And notice that nothing in our discussion requires that these hypotheses are true; the result of the imputation can be right or wrong, thereby allowing models to misrepresent their targets.

However, in many cases of representation-as the properties exemplified by a Z-representation aren’t transferred to a target unaltered. In her discussion of imputation Elgin posits that a representation imputes the exemplified properties ‘or related ones’ to its target (2010, 10). This observation is particularly pertinent in scientific contexts. The properties of a model are rarely, if ever, taken to hold directly in their target systems and so the properties imputed to targets may diverge significantly from the properties exemplified in the model.

The problem with invoking ‘related’ properties is not its correctness, but its lack of specificity. Any property can be related to any other property in some way or another, and as long as nothing is said about what this way is, it remains unclear what properties are ascribed to T. So what connects the properties exemplified by a Z-representation with those that are imputed to the target system? There is no universal answer. In some cases the connection could be described as ‘de-idealisation’. In the Phillips-Newlyn machine – which was known to have a margin of error – the connection was to move from exact properties (like the interest rate being x) to intervals around those properties (like the interest rate being x ± 4% of x); or the property imputed could be even less specific, like an imputed positive correlation between foreign investment and interest rates, without any specific value.

One could put faith in context as a determinant of what properties are imputed to the target. We’d rather not. It remains unclear what a model says about its target as long as the relation between the properties exemplified by the model and the properties imputed to the target is unspecified. We therefore prefer to write this explicitly into the definition of representation-as. Let \( P_1, \dots, P_n \) be the Z-properties exemplified by the model, and let \( Q_1, \dots, Q_m \) be the ‘related’ properties that the model imputes to T (n and m can but need not be equal). Then the representation must come with a key K specifying how \( P_1, \dots, P_n \) are converted into \( Q_1, \dots, Q_m \):

Key K: Let M = 〈O, I〉 be a model and let \( P_1, \dots, P_n \) be Z-properties exemplified by M. A key K associates with the set \( \{P_1, \dots, P_n\} \) a set \( \{Q_1, \dots, Q_m\} \) of Z-properties that are candidates for imputation to the target system. We then write \( K(\{P_1, \dots, P_n\}) = \{Q_1, \dots, Q_m\} \).

The third clause in the definition of representation-as then becomes: X exemplifies \( P_1, \dots, P_n \) and imputes some of the properties \( Q_1, \dots, Q_m \) to T, where the two sets of properties are connected to each other by a key K.
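In the Python sketch, a key is simply a function from the set of exemplified properties to the set of candidates for imputation. The interval-valued key mentioned above for the Phillips-Newlyn machine (exact values go to values ± 4%) might look like this:

    # An interval key: each exact exemplified value x becomes the interval x ± 4% of x.
    def interval_key(exemplified: dict, tolerance: float = 0.04) -> dict:
        return {prop: (x * (1 - tolerance), x * (1 + tolerance))
                for prop, x in exemplified.items()}

    print(interval_key({"interest rate (%)": 5.0}))  # -> {'interest rate (%)': (4.8, 5.2)}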

The idea of a key comes from maps, which are paradigmatic for understanding scientific representation (Frigg 2010a, b). Consider a map of the world. It exemplifies a distance of 29 cm between the two points labelled ‘Paris’ and ‘New York’. The map comes with a key, which includes a scale, 1:20,000,000 say, and this allows us to translate a property exemplified by the map into a property of the world, namely that New York and Paris are 5800 km apart. Or consider the case of a scale model of a ship being used to represent the forces an actual ship faces when at sea. The exemplified property in this instance is the resistance the model ship faces when dragged through a water tank. But this doesn’t translate into the resistance faced by the actual ship in a straightforward manner. The resistance of the model ship and the resistance of the real ship stand in a complicated non-linear relationship because smaller objects encounter disproportionate effects due to the viscosity of the fluid. The exact form of the key is often highly non-trivial and emerges as the result of a thoroughgoing study of the situation. Determining how to move from properties exemplified by models to properties of their target systems can be a significant task, and it should not go unrecognized in an account of scientific representation.
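For the map, the key is exhausted by the scale and keying up is a one-line calculation:

\[ 29\,\mathrm{cm} \times 20{,}000{,}000 = 5.8 \times 10^{8}\,\mathrm{cm} = 5800\,\mathrm{km}. \]

For the ship no single multiplication will do. One standard way of building such a key – our gloss, not named in the text above – is Froude similarity: the model is towed at the speed \( v_m = v_s/\sqrt{\lambda} \) that keeps the Froude number \( \mathrm{Fr} = v/\sqrt{gL} \) equal for a model at scale \( 1:\lambda \); the wave-making component of the measured resistance then scales up by roughly \( \lambda^3 \), while the frictional component, which depends on viscosity, must be estimated and corrected separately. The key is a piecewise, empirically calibrated recipe rather than a simple proportion.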

K is a blank to be filled. The key associated with a model depends on a myriad of factors: the scientific discipline, the context, the aims and purposes for which the model is used, the theoretical backdrop against which the model operates, etc. Building K into the definition of representation-as does not prejudge the nature of K, much less single out a particular key as the correct one. In some instances the key might be identity: the properties exemplified by the model are imputed unchanged to the target. In other cases the relation between the properties of the model and those imputed to the target may be similarity; see, for instance, (Frigg 2010a, 131–132; Giere 2004). In others, the key might take the form of an ‘ideal limit key’ (cf. Laymon 1991). But keys might also associate exemplified properties with entirely different properties to be imputed to the target (for example, colours with tube lines, as is the case in the London Underground map). The requirement is merely that there must be some key for something to qualify as a representation-as.

The above examples also show that introducing keys does not amount to smuggling in a mimetic conception of representation via the back door. On the contrary, keys can be highly conventional. This sharply distinguishes our account from accounts like Giere’s (2004, 2010) and Weisberg’s (2012, 2013), who take similarity (or at least purported similarity), in the relevant respects to the appropriate degrees, to be the relation between models and their targets. As discussed above, although keys can be the identity map, or a mapping between similar properties, this is not built into the account, and in this sense our account is more general than those based on the notion of similarity.

6 A remark concerning denotation

The first condition on models representing their targets is that they denote them. Denotation is a difficult and much discussed concept. What establishes that a model denotes a target is a question we cannot resolve in this paper; it remains an interesting problem for further research. Our aim here is merely to put in place a few signposts, indicating promising avenues and issuing warnings about blind alleys.

Sometimes denotation is restricted to language, but this restriction is neither necessary nor useful because there is nothing intrinsically language-centric about denotation. Pictures denote their subjects and models denote their targets. Likewise, denotation is sometimes restricted to symbols denoting singular objects. Again, there is no need to hamstring ourselves in this way. Just as a proper name denotes its bearer, a predicate can denote either all elements in its extension or a universal (depending on one’s metaphysics of properties). As a consequence no category mistake is committed if we say that a model denotes a class of objects rather than one singular object (a CH4-model, for instance, denotes all methane molecules).

That a model as a whole denotes a target as a whole does not preclude there being additional denotation relationships between parts of the model and parts of the target. The Phillips-Newlyn machine as a whole denotes the Guatemalan economy, and parts of it – for instance the reservoir labelled ‘foreign-owned balances’ and the flow labelled ‘income’ – denote parts of the economy.

What establishes denotation is a vexing question. In the philosophy of language there are two broad families of approaches. According to the descriptivist approach (which goes back to Frege and Russell) names function as disguised definite descriptions, and as such denote whatever satisfies them. According to the so-called direct reference approach (which goes back to Mill, Marcus, and Kripke), names directly pick out their bearers without going via any descriptive content. Both of these are in principle compatible with our view of scientific models and for now we want to remain agnostic about this choice. We also have pluralist leanings and want to make room for there being different ways to establish denotation in different cases and in different contexts. In many cases denotation is borrowed from language. In a map of New York we see a black dot with ‘Grand Central Terminal’ written next to it, and so the dot borrows denotation from language. Many models seem to work in the same way. The Phillips-Newlyn machine denotes whatever the expression ‘Guatemalan economy’ denotes. At least in cases where this happens, this is to hand over the problem of uncovering the roots of denotation to the philosophy of language.

7 The DEKI account of representation

We can now tie the loose ends together and fill the blank in the epistemic representation problem: ‘M is a scientific representation of T iff M is a Z-representation that represents T as Z’. In a bit more detail this amounts to the following:

DEKI: Let M = 〈O, I〉 be a model, where O is used by a scientist as the base of the model and I is an O-Z-interpretation. Let T be the target system. M represents T as Z iff all of the following conditions are satisfied:
(i) M denotes T (and in some cases parts of M denote parts of T).
(ii) M exemplifies Z-properties \( P_1, \dots, P_n \).
(iii) M comes with a key K associating the set \( \{P_1, \dots, P_n\} \) with a set of properties \( \{Q_1, \dots, Q_m\} \): \( K(\{P_1, \dots, P_n\}) = \{Q_1, \dots, Q_m\} \).
(iv) M imputes at least one of the properties \( Q_1, \dots, Q_m \) to T.
M is a scientific representation of T iff M represents T as Z as defined in (i)–(iv).

We call this the DEKI account of representation to highlight its key features: denotation, exemplification, keying-up and imputation. Figure 1 provides a schematic representation of the account.

Fig. 1: The DEKI account of representation

We can now present a complete analysis of how the model that has guided us through this paper works. The Phillips-Newlyn machine (O) is used as the base of a model by Guatemalan economists. Z is an economy. The machine is endowed with Phillips’ O-Z-interpretation (I), mapping O-properties onto Z-properties. The machine so interpreted is an economy-representation, and as such it is a model M (an economy-model). The Guatemalan economists used M as a model-of the Guatemalan economy by making it denote the Guatemalan economy (i). They did so by borrowing the reference of the linguistic expression ‘Guatemalan economy’ and the model denotes whatever the term denotes. The machine instantiates a number of water-pipe-properties and, via I, it I-instantiates a number of economy properties. Some of them – the effect that a decrease in foreign exports had on income and the interest rate for instance – are exemplified because they were highlighted (ii). We can presume that the economists used an interval-valued key, which moved from specific changes in value for the interest rate before and after the change in foreign exports to values ± 4% around them (iii) and imputed the result to the Guatemalan economy (iv).

This, we claim, is the right analysis not only of how the Phillips-Newlyn machine works symbolically; it is also the right analysis for all other material models. The use of plasticine sausages as models of myoglobin, the use of mice as models of humans, and so on, can all be analysed in terms of DEKI.
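Purely as a summary device, the four conditions can be strung together in the running Python sketch. Denotation and highlighting enter as contextual inputs passed in by hand, since nothing in DEKI reduces them to a computation; the labels are ours and the 5% interest rate is an arbitrary illustration.

    # DEKI, schematically: Denote, Exemplify, Key up, Impute.
    def deki(denotes: bool, exemplified: dict, key, target: str) -> dict:
        if not denotes:                  # (i)  no denotation, no representation-of
            return {}
        candidates = key(exemplified)    # (iii) key up the exemplified Z-properties (ii)
        return {target: candidates}      # (iv)  impute (at least one of) them to T

    hypotheses = deki(
        denotes=True,                             # borrowed from 'Guatemalan economy'
        exemplified={"interest rate (%)": 5.0},   # highlighted and I-instantiated
        key=interval_key,                         # the interval-valued key from Section 5
        target="Guatemalan economy",
    )
    print(hypotheses)  # -> {'Guatemalan economy': {'interest rate (%)': (4.8, 5.2)}}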

A number of qualifications are in order. First, \( \{P_1, \dots, P_n\} \) need not be a list of independent, or even monadic, properties. In fact the set can be highly structured, with some Ps expressing relationships between other Ps.

Second, DEKI is the general form of an account of representation and as such it needs to be concretised in every particular instance of representation. In every concrete case of a model representing a target one has to specify what O is, how it is interpreted, what sort of Z-representation it is and what properties it exemplifies, how denotation is established, what translation key is used, and how the imputation is taking place. Depending on what kind of representation we are dealing with, these ‘blanks’ will be filled differently. But far from being a defect, this degree of abstractness is an advantage. ‘Scientific modelling’ is an umbrella term covering a vast array of different activities in different fields, and a view that sees representations in fields as diverse as macroeconomics, biochemistry, and fluid dynamics in exactly the same way is either mistaken or too coarse. Our definition occupies the right middle ground: it is general enough to cover a large array of cases and yet it allows us to say what is specific about them.

Third, DEKI meets our conditions of adequacy from Section 1. DEKI allows for misrepresentation in at least two places. A representation is accurate if T indeed possesses the properties that M imputes to it. This need not be the case; in fact that T possesses any of the imputed properties is not built into the notion of representation-as. M can represent T as possessing properties \( Q_1, \dots, Q_m \) and T might not instantiate a single one of them. If M represents T as having properties that it doesn’t have it misrepresents it. The other place where misrepresentation can enter is denotation. Denotation can fail in various ways – a representation can purportedly denote a target that does not exist or it can denote the wrong target. The surrogative reasoning condition requires that models represent their targets in a way that allows scientists to generate hypotheses about them. This requirement is satisfied by condition (iv), which requires that at least one property be imputed to T. This imputation generates a hypothesis about T that can then be tested.

Finally, it is worth pointing out that the ordering of the conditions is not supposed to introduce a temporal element into either scientific representation or the process of constructing the model; nor is it meant to indicate logical priorities. None of the conditions has to be established prior to the others, and the model could exemplify the properties even before being used by the model user to represent a target. The user could equally well start off with the target system and a set of properties of interest. She could then construct an inverse key associating those properties with ones that we have a firmer grasp on in the context of model building. She could then construct a model that exemplifies those properties, in the appropriate manner under the appropriate interpretation, before taking the model and establishing the denotation relation between it and the target. Such a process is not ruled out by our conditions. DEKI does not function as a diachronic account of scientific representation: as long as the conditions are met, in whatever order, a model represents its target system as Z.

8 Envoi

Material objects can be turned into scientific models by means of an interpretation. This, combined with DEKI, provides an account of how they represent their targets as thus or so. Although some scientific models are material objects, others aren’t. This raises the question of whether, and if so how, our account of representation can be generalised to cover models like the Newtonian model of the solar system, the logistic model of a population, and the Solow model of economic growth. These are, to use Hacking’s phrase, ‘things that one holds in one’s head rather than one’s hands’ (1983, 216). We submit that the difficulty with these models is ontological, not semantic. The problem for any account of representation which requires that models instantiate properties is that it remains unclear what that means for non-physical objects. Thomson-Jones (2010) points out that such objects, in virtue of their abstractness, cannot instantiate the kind of properties one would like to impute to real systems. Abstract objects neither have mass nor oscillate.

While this may be a serious problem for a similarity account of representation such as Giere’s (1988), DEKI can reply to this in two ways. Firstly, according to our account, models don’t strictly speaking need to instantiate physical properties. Rather, they can be taken to exemplify properties that apply to abstract objects, which are then keyed up with physical ones to be imputed to their target systems. Or alternatively, the model can exemplify physical properties, but through I-instantiation rather than instantiation proper. Secondly, we are hopeful that ‘fictional’ accounts of the ontology of models will allow us to reconcile the claim that such models are non-physical objects with the idea that they can nevertheless be said to instantiate, at least in some sense, physical properties in the way that Sherlock Holmes instantiates the property of being a pipe smoker, despite the fact that he is not a physical being. Frigg (2010a, b) and Godfrey-Smith (2006) provide outlines of what such an account could look like. Developing the details of such an account and integrating it with the DEKI account of representation is a project for future research.