Philosophy & Technology, Volume 26, Issue 1, pp 75–80

How Modeling Can Go Wrong

Some Cautions and Caveats on the Use of Models
COMMENTARY

Abstract

Modeling and simulation clearly have an upside. My discussion here will deal with the inevitable downside of modeling — the sort of things that can go wrong. It will set out a taxonomy for the pathology of models — a catalogue of the various ways in which model contrivance can go awry. In the course of that discussion, I also call on some of my past experience with models and their vulnerabilities.

Keywords

Modeling · Simulation · Failure · Oversimplification

1 Models

My discussion will deal with the down-side of modeling — the sort of things that can go wrong. It will set out a taxonomy for the pathology of models — a catalogue of the various ways in which model contrivance can go awry.

The aim of the enterprise of modeling is to construct an artificial manifold M, the model, whose salient features or operations replicate those of a corresponding reality R. We resort to models primarily because of incapacity: how something works is too complex for us to manage, and we resort to a simplified simulacrum to stand in its stead.

A model seeks to replicate the salient features of its object in a simpler, more manageable, more perspicuous way. The aim is to provide a larger, functionally more complex whole with a simulacrum whose mode of operation mirrors that of this object in those respects, at least, that are relevant and informative in the setting of an investigation. The name of the game is to make tractable a complex reality which — in its real-life elaborateness — is not effectively manageable. Accordingly, a model is a cognitive tool that is devised in such a way that there is a good reason (albeit never conclusive reason) to think that a question about reality that we resolve on its basis is correctly answered.

And the adequacy of a model is a functional issue — viz. to enable us to answer questions about a reality that is not accessible to understanding with equal ease and convenience. And successful modeling is not a matter of how closely that model resembles its correlative reality overall. Rather, it is a matter of the extent to which it engenders success in answering the questions of the particular problem range that is at issue. Models are instrumental devices by nature. And as with any instrument, adequacy must be judged by experience — the salient question being whether the tool effectively accomplishes its work.

Now, the ways in which such a procedure can go awry are just exactly those represented by the respects in which a model M can fail to be faithful to the reality R that M is supposed to reflect.

There are various potential deficiencies in our models of reality.

Obviously, the most drastic failure mode to which modeling is subject is distortion through misdescribing the modus operandi of the item being modeled. But this is simply a generic term for modeling failure and does not get down to the productive factors that lie at its causal source. These will prominently include the following four:
  • Oversimplification by omitting significant features or factors that are there.

  • Overcomplexification by introducing operationally distorting factors that are not there.

  • Overestimation by representing some factor as present to a greater extent than it actually is.

  • Underestimation by representing some factor as present to a lesser extent than it actually is.

Among all of these failure-causes, it is oversimplification that is by far the most common and threatening.

2 Oversimplification as a Gateway to Error

Oversimplification becomes a serious cognitive impediment by failing to take note of factors that are germane to the matters at hand, thereby doing damage to our grasp of the reality of things. Whenever we unwittingly oversimplify matters we have a blind spot where some facet of reality is concealed from our view.

For oversimplification consists in the omission of detail in a way that creates or invites a wrong impression in some significant — i.e., issue-relevant — regard. In practice, the line between beneficial simplification and harmful oversimplification is not easy to draw. As often as not, it can only be discerned with the wisdom of retrospective hindsight, because whether that loss of detail has negative consequences and repercussions is generally not clear until after a good many returns are in.

For the most part, oversimplification involves loss. The student who never progresses from Lamb’s Tales from Shakespeare to the works of the Bard of Avon himself pays a price not just in detail of information but in the comprehension of significance. And the student who substitutes the Cliff’s Notes version for the work itself suffers a comparable impoverishment. To oversimplify a work of literature is to miss much of its very point. Whenever we oversimplify matters by neglecting potentially relevant detail, we succumb to the flaw of superficiality. Our understanding of matters then lacks depth, and its cogency is thereby compromised. But this is not the worst of it.

One of the salient aspects of oversimplification lies in the fundamental epistemological fact that errors of omission often carry errors of commission in their wake: that ignorance plunges us into actual mistakes.

Oversimplification is, at bottom, nothing but a neglect (or ignorance) of detail. Its roots lie in a lack of detail — in errors of omission. When we fill in gaps and omissions — as we all too generally do — we are likely to slide along the slippery slope of allowing simplification to lead us into error. For where reality is concerned, incompleteness in information invites incorrectness.

Consider a domain consisting of a 3×3 tic-tac-toe square. And consider having X’s everywhere except for a blank in the middle. You will then be tempted to fill that middle square with an X rather than an O. After all, this maximizes the number of available universal generalizations: X’s in all the columns, X’s in all the rows, X’s along every diagonal, etc. Still, reality might not be all that cooperative, and may unravel your neat uniformity by having an O at the middle. And you are then misled about something very fundamental — namely, the kinds of laws and regularities that obtain.
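To make the arithmetic of this illustration explicit, the following minimal Python sketch (a hypothetical aid, not part of the original discussion; the function all_x_lines and the board layout are assumptions of the sketch) counts the all-X rows, columns, and diagonals that result from each way of filling the blank centre square.

    # Hypothetical sketch of the tic-tac-toe illustration: count the "universal
    # generalizations" (all-X rows, columns, and diagonals) under each filling.
    def all_x_lines(grid):
        """Count the rows, columns, and diagonals consisting entirely of X's."""
        lines = [[grid[r][c] for c in range(3)] for r in range(3)]        # rows
        lines += [[grid[r][c] for r in range(3)] for c in range(3)]       # columns
        lines += [[grid[i][i] for i in range(3)],
                  [grid[i][2 - i] for i in range(3)]]                     # diagonals
        return sum(all(cell == "X" for cell in line) for line in lines)

    # Eight X's with the centre left blank, then filled each way in turn.
    board = [["X", "X", "X"],
             ["X", None, "X"],
             ["X", "X", "X"]]

    for fill in ("X", "O"):
        board[1][1] = fill
        print(f"centre = {fill}: {all_x_lines(board)} all-X lines")
    # centre = X: 8 all-X lines (every row, column, and diagonal generalizes)
    # centre = O: 4 all-X lines (only the lines avoiding the centre remain)

Filling the blank with an X thus yields eight exceptionless generalizations, whereas an O leaves only four; the temptation described in the passage is just the pull toward that maximally uniform completion.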

Whenever there is a blank in our knowledge, the natural and indeed the sensible thing to do is to fill it in in the most direct, standard, plausible way. We assume that the person we bump into in the street speaks English and say “oops, sorry” — even though this may well prove to be altogether unavailing. We regard the waiter in the restaurant as ours even when it is in fact his brother, who bears a family resemblance. We follow the most straightforward and familiar routes up to the point where a DETOUR sign appears. We willingly and deliberately adopt the policy of allowing oversimplification to lead us into error time and again because we realize that it does so less frequently than the available alternatives would.

Oversimplification, then, is the bane of modeling. A model is, in effect, a theory to the effect that reality comports itself as the model does. And we test models in the same way in which we test theories, namely, via their application. We use these models as a basis for making predictions. And when these fail to work out, we know that something is amiss with our models.

There is, however, one very important difference between theories and models. Our scientific theories serve to explain not only how nature works but also why it works in a certain sort of way. By contrast, our models at best serve to explain how things work. Explaining why it is that things work in this way is something that requires more powerful instruments than mere modeling. And only after we have achieved this deeper level of understanding can we explain why it is that our models succeed to the extent that they do. Modeling thus comes near the start of the development of scientific understanding. It is not its final terminus.

Modeling runs into problems because it is generally a venture trying to give a simpler and more readily manageable picture of a complex reality. And the prime source of error here is obviously oversimplification.

The root cause of oversimplification is ignorance: we oversimplify when there are features of the processes at issue about which we are ignorant. And it is somewhere between hard and impossible to come to terms with this. Ignorance is the result of missing information, and our grasp of just what is missing is inevitably tenuous.

What is at work here is one of the fundamental principles of epistemology: We are bound to be ignorant regarding the details of our ignorance. I know that there are facts about which I am ignorant, but I cannot possibly know what they are. For to know that such-and-such is a fact about which I am ignorant, I would have to know that it is a fact — which by hypothesis is something that I do not know. And the same situation prevails on a larger scale. We can know that in various respects the science of the present moment is incomplete — that there are facts about the workings of nature that it does not know. But, of course, I cannot tell you in detail what those deficiencies are.

Oversimplification plays a critical role throughout all contexts of information processing — be it in inquiry (information development) or inference (information exploitation) or communication (information transmission). The entire range of information management sees oversimplification entering upon the scene — often with decidedly unhappy results.

And our own ignorance is something that it is very hard to get a cognitive grip on. I can tell that I am ignorant of something-or-other. But I cannot ever tell just what this is. To know just what the fact is of which I am ignorant, I would need to know this fact itself — which, by hypothesis, I do not.

It is simply impracticable to get an adequate grip on our ignorance.

We can plausibly estimate the amount of gold or oil yet to be discovered, because we know the earth's extent and can thus establish a proportion between what we have explored and what we have not. But we cannot comparably estimate the amount of knowledge yet to be discovered, because we have and can have no way of relating what we know to what we do not. At best, we can consider the proportion of currently envisioned questions we can in fact resolve; and this is an unsatisfactory procedure. For the very idea of cognitive limits has a paradoxical air. It suggests that we claim knowledge about something outside knowledge. But (to hark back to Hegel), with respect to the realm of knowledge, we are not in a position to draw a line between what lies inside and what lies outside — seeing that, ex hypothesi, we have no cognitive access to that latter. One cannot make a survey of the relative extent of our knowledge or ignorance about nature except by basing it on some overall picture or model of nature that is already in hand via prevailing science. But this is clearly an inadequate procedure.

This process of judging the adequacy of our knowledge on its own telling may be the best we can do, but it remains an essentially circular and consequently inconclusive way of proceeding. The long and short of it is that there is no cognitively satisfactory basis for maintaining the adequacy of our oversimplified models short of subjecting them to the risky trial of bitter experience. For better or for worse, with models the proof of the pudding lies in the using.

3 Some Retrospects

In this regard, I would like to mention one personal venture in empirical modeling to illustrate the preceding perspective. Let me narrate a bit of ancient history about that ancient form of modeling — war gaming. During the years 1954–1956, I worked at the RAND Corporation in Santa Monica, which continues in existence as a major think tank on public policy issues but in those days was devoted almost exclusively to USAF concerns. Now, the military is generally fighting the last war, and at that stage the air force was thinking back anxiously to the disaster that befell its naval sister service at Pearl Harbor a little over a decade before. So our modeling took a war-gaming slant on the issue of what sort of attack the USSR could — with their then-available resources — inflict on US retaliatory capabilities. As best I recall, we came to three conclusions: (1) that the operation would have to be an immensely complex affair carried on at vast scale; (2) that it would almost certainly be a process that could not be carried on with absolute secrecy in the face of our then-available observational capabilities; but (3) that the task would become vastly more difficult if our then-operative policy of the forward basing of the Strategic Air Command were changed to one resorting to more extensive use of bases in North America. In the course of working out the detail of so complicated a conjectural exercise, it became clear that there is simply no way of proceeding without making a great many oversimplifying assumptions, but that for the specific inquiry at hand this did not matter, because the purpose of the investigation allowed for accepting a certain unrealism for the sake of a worst-case scenario. (After all, in making one’s defensive preparations, a certain unrealism in over-crediting the enemy is a pardonable sin.)

But in most modeling situations, such an acceptance of palpable unrealism is not justifiable.

Thus, consider a very different contrast case. In the late 1960s, the “Club of Rome” sponsored a study of economic–industrial growth on a worldwide basis by the MIT System Dynamics Group. Its findings looked to the neo-Malthusian “limits to growth” (to use the phrasing of its final report). The upshot was the idea that unless various politically and socially unrealizable changes were made — and made rapidly — the world’s social and economic system would collapse by the year 2040. Now, the Club’s dire prediction may well come to be realized eventually (after all, eventually is a long time). But it was grossly off-target with regard to anything like its contemplated time span. The fact of it is that the oversimplifications on which the analysis was based made for an unrealistic acceleration of the trends and tendencies at work.

And this sort of thing can create big problems. For when a model is used as a basis for large-scale policy decisions, its misfiring can have consequences that are not just local to the particular issue at hand but can call the entire process into question.

The use of models for deciding matters of policy in ways that are sensitive to detail must always be sign-posted PROCEED WITH CARE. For when the aim of the modeling enterprise is to institute public policy changes, any intrusion of unrealism can all too readily set in motion a counter-reaction that can defeat our faith in rational inquiry itself.

But one important point comes to light here. In putting our models to work, we do well to confront the prospect of oversimplification and its implications. And in doing so we have to realize that a significant structural imbalance is at work here as between two sorts of issues: the defensive, which seeks the maintenance of a status quo, and the offensive, which seeks to change it. If I take defensive measures on the basis of oversimplifying the difficulties posed by the offensive, my position is strengthened. But if I take offensive measures on the basis of oversimplifying the requirements for a successful defense, I risk disaster. The purposive nature of the enterprise at hand has important implications for the lessons that we can responsibly draw from our models.

4 Conclusion

And so, in concluding, let me stress the key lesson of these deliberations. It is that the nature of our models can and should reflect the purposes of their use. If what is at issue is predictive accuracy in faithfully depicting the actual phenomena, then great demands will be made on our models. Oversimplification is then a fatal flaw, and by and large every practicable step must be taken to avoid it.

But, on the other hand, if it is only general guidance that we require, then the requirements of detailed and precise faithfulness can be relaxed. We need not know the impending rainfall to within two decimal places if all we have to decide is whether or not to take an umbrella. In modeling, as elsewhere, practice can and should appropriately be coordinated with purpose.

Copyright information

© Springer-Verlag 2012

Authors and Affiliations

  1. Stony Brook University, Stony Brook, USA
  2. Department of Philosophy, Center for Philosophy of Science, University of Pittsburgh, Pittsburgh, USA