Minds and Machines

Volume 21, Issue 2, pp 337–359

Program Verification and Functioning of Operative Computing Revisited: How about Mathematics Engineering?



DOI: 10.1007/s11023-011-9237-z

Cite this article as:
Pincas, U. Minds & Machines (2011) 21: 337. doi:10.1007/s11023-011-9237-z


The issue of the proper functioning of operative computing, and the utility of program verification both in general and of specific methods, has been widely discussed. In many of those discussions, attempts have been made to take mathematics as a model for achieving knowledge and certitude, and accordingly to draw conclusions about suitable ways of handling computing. I briefly review three approaches to the subject, and then take a stance by considering social factors which affect the epistemic status of both mathematics and computing. I use the analogy between mathematics and computing in reverse; that is to say, I consider operative computing as a form of doing mathematics, and so attempt to draw lessons from computing for mathematics in general. I conclude that “mathematics engineering” is a field that should both be developed, for the practical improvement of doing mathematics, and be taken into consideration when philosophizing about mathematics.


Keywords: Computing engineering · Operative computing · Philosophy of mathematics · Program verification · Software engineering

Operational Computing: Practice and Theory1

Since electronic computers were first developed and operated, in the 1940s, the computing field has grown rapidly and intensively. The use of computers spread through military and governmental organizations, universities and research institutes, a variety of industries, and private and domestic environments. Nowadays computational systems take part in very many areas of our lives, and so the proper functioning of those systems is critical.

The issue of such proper functioning turned out to be intricate and elusive, by no means a trivial matter. As technology developed and advanced, the number of computer users increased, and the applications of computers diversified greatly. That made computer programming, or rather the adjusting of computer systems to the many different needs of users, more and more complicated and dependent on a complex environmental context, and hence harder to achieve; and so malfunctions of computing systems, resulting from programming faults and operating errors, increased. Thus, professionals in the computing field, in both academia and industry, turned their attention to the issue (see, e.g., (MacKenzie 2001, Chap. 2; Schach 2007, Chap. 1)).

The recognition of the multiplying and accumulating difficulties of computing systems’ functioning as a central problem (the “software crisis”) that needed to be treated seriously and thoroughly led to the establishment of a new field: software engineering, or computing engineering.2 The central goal, or raison d’être, of computing engineering is to enable the design, creation, use and maintenance of reliable computing systems; it is usually defined as the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of computing, i.e. the application of methodical engineering to computing. However, the question of which methods and means should be applied to achieve this goal was, and still is, neither clear nor unanimously agreed upon, and the matter has now been debated for about 40 years (see, e.g., (Simons et al. 2003; Cockburn 2004)).

The early development of electronic computing, as well as the usage of computers, was, as one could expect, established, executed and promoted mainly by people whose professional work included a great deal of calculation, and who had significant mathematical knowledge, skills, and interests. Many of them were professional mathematicians, who retrained for computing by writing computer programs, developing theoretical and applied environments and systems for writing such programs (programming languages), and analyzing the properties of such frameworks. So when the issues of appropriate operational computing activity and the establishment of the field on suitable foundations were considered and discussed, it was likely and natural for many practitioners and theoreticians of computing to refer to the frameworks and methods of representation, processing and analysis of mathematics; and that reference was part (or a consequence) of viewing the nature of these frameworks and methods as justifying and reinforcing the status of computing activity. One term commonly used in this context is ‘program verification’: the act of proving or disproving the correctness of computer programs, or of intended algorithms underlying a system, with respect to a certain formal specification or properties, using the formal methods of mathematics. The attitude of establishing computing activity (or at least some specific parts of it) on a mathematical foundation, both operatively and epistemologically, is known as the mathematical paradigm (see, e.g., (Colburn et al. 1993, prologue part)).3

A typical supporter and prominent proponent of the mathematical paradigm is the computer scientist Tony Hoare. He states the main principle of the paradigm simply and clearly:

Computer programming is an exact science in that all the properties of a program and all the consequences of executing it… can, in principle, be found out from the text of the program itself by means of purely deductive reasoning. (Hoare 1969, p. 576)

And in a later work:

Computers are mathematical machines. Every aspect of their behavior can be defined with mathematical precision, and every detail can be deduced from this definition with mathematical certainty by the laws of pure logic. … Programming is a mathematical activity. Like other branches of applied mathematics… its successful practice requires the determined and meticulous application of traditional methods of mathematical understanding, calculation, and proof. (Hoare 1986, p. 135)

In the passages cited above the word ‘formality’ is not mentioned, but as Hoare speaks of “means of purely deductive reasoning”, “defining with mathematical precision”, “deduced from this definition with mathematical certainty by the laws of pure logic”, and “meticulous application of traditional methods of mathematical understanding, calculation, and proof” as the obligatory guidelines by which computing activity is to be managed, it is clearly implied between the lines. Others, such as Barbara Liskov and Stephen Zilles, considering data abstraction for the specification of programs from a similar point of view, are explicit about it:

Formality. A specification method should be formal, that is, specifications should be written in a notation which is mathematically sound… formal specification techniques can be studied mathematically… the syntax and semantics of the language in which the specifications are written must be fully defined. (Liskov and Zilles 1977, p. 9)

So according to the formalist mathematical paradigm, mathematics, as a realm in which objects are represented and constructed formally, with completely defined semantics and syntax, and in which statements are deduced by purely logical means, is the appropriate model for designing, preparing and building computer programs, in order to achieve the desired faultless, reliable computing systems.
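To make the formalist picture concrete, here is a standard Hoare-logic sketch. The triple notation and the two rules are from Hoare’s 1969 paper; the worked instance is our own illustrative example, not drawn from the text. A triple {P} C {Q} asserts that if precondition P holds before command C executes, postcondition Q holds afterwards:

```latex
% Assignment axiom: the precondition is the postcondition Q with
% the assigned expression E substituted for the variable x.
\[
\{Q[E/x]\}\; x := E \;\{Q\}
\qquad \text{e.g.} \qquad
\{x + 1 = n + 1\}\; x := x + 1 \;\{x = n + 1\}
\]
% Rule of composition: verified fragments compose deductively, so the
% correctness of a whole program is, in principle, derivable from its
% text alone -- exactly Hoare's "purely deductive reasoning".
\[
\frac{\{P\}\, C_1 \,\{R\} \qquad \{R\}\, C_2 \,\{Q\}}
     {\{P\}\, C_1 ;\, C_2 \,\{Q\}}
\]
```

In the example instance the computed precondition {x + 1 = n + 1} simplifies to {x = n}, showing how a program property is found out from the program text by deduction alone.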

Not every computing practitioner looked for ultimate salvation in formality. Alongside the believers and adherents of that view, there were also doubters and objectors. The most vehement, as well as most renowned, attack on the formalist mathematical paradigm is probably the work of De Millo, Lipton and Perlis (abbreviated here as DLP) (De Millo et al. 1979). DLP propose an approach opposed to the formalist approach described above, focusing their attack on the view of mathematics “as a cold, formal, logical, mechanical, monolithic process of sheer intellection”, arguing that “insofar as it is successful, mathematics is a social, informal, intuitive, organic, human process, a community project” (De Millo et al. 1979, p. 271). DLP describe the processes which statements go through before they are accepted as “proven theorems”: beginning as some idea in the mathematician’s mind, which seems interesting enough to be examined; continuing with the formation of that idea, using “intuition aids” such as sketches, drawings, worked examples, and visual demonstration instruments; then sharing the idea with other mathematicians, discussing, correcting and improving it as seems needed; presenting the results in meetings, seminars, conferences and professional journals, where, as more mathematicians are exposed to it, the discussions are broadened and deepened; examining the relations and connections between the results and statements which are already considered proven, as well as with open conjectures; and, if the public of mathematicians is convinced that the statement (or some modified version of it) is true, it is then considered a proven theorem. During the course of that long process, the formal frame of presentation and discussion can serve as an instrument for examining and checking the results, but this is done only partly, while it is claimed to be “possible in principle” to carry it out in the full, classical, logical-deductive sense. This “possibility in principle”, say DLP, is never fully fulfilled, and so it remains, as they call it, “an imaginary formal demonstration”.

A basic mistake of the formalists of computing, according to DLP, is ignoring these social factors. Formal texts of “program verifications”, they say, are neither suitable nor used for reading, intelligible transformation, discussion, internalization, assimilation, or generalization; and so these texts do not help us achieve the goal of proofs: strengthening our confidence in the correctness of statements. Being convinced of a program’s correctness requires substantial understanding of its functioning, and that understanding depends on support and anchoring in a web of social and environmental elements. In order to achieve that in computing, in DLP’s opinion, the field should be regarded as a field of useful knowledge, such as, for example, engineering:

How then do engineers manage to create reliable structures? First, they use social processes very like the social processes of mathematics to achieve successive approximations at understanding. Second, they have a mature and realistic view of what ‘reliable’ means: in particular, … it never means ‘perfect’. … The analogy in programming is any functioning, useful, real-world system. (De Millo et al. 1979, p. 279)

DLP’s work aroused many reactions, of both sympathy and resistance, and the polemic on the subject continued more intensely in the late eighties, with the philosopher James Fetzer’s work (Fetzer 1988). Fetzer proposes a third approach to the subject, distinguishing between statements which can (and should) be proved deductively, by logical rules, and statements which are learned observationally and empirically, by sense perception, inductive principles and causal explanations.4 Mathematical statements are, for Fetzer, clearly of the first kind, and so they are proved true by their objective validity; accordingly, the “social factors” which DLP considered are neither necessary nor sufficient for their proofs to be valid (Fetzer 1988, p. 1050). Fetzer then turns to consider “computer programs”, telling apart two of their aspects, or rather two meanings of the term:

Since computer programs, like mathematical proofs, are syntactical entities consisting of sequences of lines…, they both appear to be completely formalized entities for which completely formal procedures appear to be appropriate. … Yet programs differ from theorems, at least to the extent to which programs are supposed to possess a semantic significance that theorems seem to lack. For the sequences of lines that compose a program are intended to stand for operations and procedures that can be performed by a machine, whereas the sequences of lines that constitute a proof do not. (Fetzer 1988, p. 1053)

That is to say, as Fetzer elaborates later, that as long as we consider computer programs as abstract algorithms, they are formal mathematical objects whose properties are derived from definitions and basic logical rules, and so can be proved formally and indubitably; but the operational functioning of “real-life” computer programs, the results of their running on actual computers, depends on many contingent factors of the context (physical, technical, circumstantial), and so cannot be proved certain (in the same meaning of “certainty” as above). Statements about operational computing are, by their nature, of the second kind, and therefore can be learned and sustained only up to inherent limitations of certitude, but cannot be “formally verified”. As “computer programs” can mean either (i) algorithms, (ii) encodings of algorithms, (iii) encodings of algorithms which can be compiled, or (iv) encodings of algorithms which can be compiled and executed on computers (Fetzer 1988, p. 1058), the usage of the term “program verification”, says Fetzer, can be misleading:

The very idea of program verification trades upon an equivocation. Interpreted in senses (i) and (ii), there is no special difficulty that arises in ‘verifying’ that output O follows from input I as a logical consequence of axioms… Under such an interpretation, however, nothing follows from the verification of a ‘program’ concerning the performance of any physical machine. … Interpreted in senses (iii) and (iv), however, that … cannot be subject to absolute verification, precisely because the truth of these axioms depends upon the causal properties of physical systems, whose presence or absence is only ascertainable by means of inductive procedures. (Fetzer 1988, p. 1059)

So the very idea of the third approach is to distinguish between computing in the mathematical sense of the word, which can be verified and proved to be absolutely correct, and computing in the contingent, daily-life sense (programs operatively written and executed on real computers), which cannot.

Debates and discussions of the subject, it might be added, have continued afterwards, and the matter is still discussed nowadays.5

What About Mathematics?

The three approaches to computing described above share an attempt to compare mathematics and computing, and in due course to draw conclusions about which standards of procedure of the first field should be adopted in the second. Let us first take a look at mathematics and some aspects of its practice, before considering the mathematics/computing relations and trying to learn a lesson from them.

Mathematics has been discussed for many years from a philosophical point of view, in attempts to characterize its epistemic nature.6 Many descriptions of mathematical activity given in those discussions draw a picture of a (“representative”) single mathematician, working independently on his or her own, as if it were of no relevance to consider the mathematician’s cognitive conditions and abilities to transfer and communicate mathematical texts, or the mathematical community the mathematician is a part of and in relation with. However, we know that a considerable, routine part of the mathematician’s work consists of communicating texts with other mathematicians, verbally and in writing, via emails and chats, and by publishing and reading professional papers. Mathematical knowledge and norms have a significant social part, as they are created and shaped dynamically through many social activities. So, inevitably, in the context of such activities, large parts of the mathematical matter are bound up with the treatment and communication of texts, and the arrangement of that treatment and communication, and their proper functioning, are an essential need.

Such a multistage, enduring treatment of texts requires, of course, conventions for the creation, comprehension, processing and representation of the texts and their usage. In the same way, the management of social activities requires conventions of communication among the communicators. Mathematics as we know it would not exist the way it does unless mathematicians could handle mathematical texts and communicate them with each other: exchanging basic information; sharing crude intuitive ideas; investigating, both individually and together, these ideas and their developments; and representing, for continued work in the future and use of the results, the formally refined products of the investigation. When the activity takes place through the interactions of many practitioners of the society, the cooperation is not reduced merely to the transfer and reception of information. In order for these actions to serve learning and the gaining of knowledge, agreed standards and social conventions (whether explicit and declared or not) of the appropriateness of the information, the manners of obtaining, classifying and representing it, and the determination of its importance and legitimacy must be formed and accepted. For the mathematician there are certain meanings for terms like “a true (mathematical) statement”, “a valid (mathematical) proof”, “a convincing (mathematical) explanation”, etc.; and accepting the meanings of such terms enables the establishment of meanings of higher order, such as “a good mathematician”. These meanings are not private but common (though not necessarily fully and uniformly so) to a community of mathematicians, and this conceptual sharing and cooperation enables the social mathematical activities. The social layer of mathematics is entangled with a normative layer, larger than mere “communication rules”.
Such a normative layer always exists, even when we consider a single mathematician working alone; but when many mathematicians are involved, this layer is highly extended and complicated, since the arrangement and regularization of the social activities of the field (the field’s collective forming) is carried out within the frame of that layer.

Does the philosopher of mathematics have to take the aforesaid into consideration? In other words: should the philosopher of mathematics consider as relevant, and be interested in, the methods and means by which mathematical texts are created, shaped, communicated, and made to codify mathematical contents, and the normative standards and conceptions which underlie those methods and means? One who is a “foundational realist” (that is to say, who takes mathematical statements to have absolute truth-values, independent of human thought) would probably answer the question negatively, considering those methods and means, as well as those norms and conventions, as secondary to the truth-realist level. According to such a view, first there is what is “real by itself”, and only afterwards, and in accordance with it, comes what “should be considered as real”; and so the rules for the acceptance, representation, categorization, transfer and reception of (real) information are supposed to be determined by optimal accordance with that “real by itself”. The truth is true, no matter how humans grasp it, represent it, and interweave it into their world-picture.

Nevertheless, even if we accept the view that mathematical statements have contents which determine their truth-values independently of the human mind7, still, if we are interested in investigating the place and status of mathematics in the human episteme, then all that is included in the human epistemic treatment of mathematics (the methods and modes by which humans learn and know mathematics) is potentially relevant to our philosophical investigation. Standards and norms of representation, categorization, transfer and reception of mathematical information are elements of that epistemic array of mathematics; so to exempt us from the philosophical need to consider them, it would have to be shown how mathematical activities can be carried out epistemically separated from these occupations. A forceful answer to those “foundational realist” positions is, in my opinion, the demand that they be backed up by an ultimate epistemology for mathematics: ultimate not only in the sense of not needing the infrastructure of representation and communication means at all, but also in the sense of being justifiable as primary and compelling, separately from the usage of such means.8 As long as no such demand is fulfilled, the means of representation and communication of mathematics deserve to be considered a meaningful component in epistemological investigations of mathematics.

In addition, we should notice that the place of mathematics in our cognitive and perceptual world-picture definitely depends on the properness of mathematical knowledge, which is based on the stability and reliability of the representation of mathematical information. Even without dealing with the issue of the “absolute correctness” of information and its representation, the information needs at least to be durable and compatible with the whole epistemic array; if the representation of information is not well interlaced with cognitive functioning, it is, of course, improper. Such properness requires that the representing text be reliable and stable, and also that the communication among mathematicians be suitably executed; that is to say, that some signifying element of what is meant to be represented be well preserved and comprehended. Assumptions about these kinds of properness, as about representative and communicative properness in general, have to be investigated seriously, since they are not trivial or self-evident.

It is important to clarify and emphasize that we do not mean to take a stance on the philosophical questions of mathematics having to do with mathematical and meta-mathematical norms and concepts, like ‘what is a mathematical truth?’, ‘how is a mathematical truth known?’, ‘what is the epistemological role of mathematics and of mathematical statements?’, etc. Such important questions are definitely of interest to the philosopher of mathematics, but here we aim at a more general, more basic, idea: the essential pertinence and importance, to the philosophy of mathematics, of the inter-human, communal and customary-normative aspects of mathematics in the making.

Let us illustrate the idea by taking a look at an important concept, very much relevant to the discussion: the concept of (mathematical) proof. This concept has been discussed and investigated in many philosophical texts (see, e.g., (Detlefsen 1992) and the sources mentioned, referred to and annotated there), and here we do not pretend, of course, to exhaust, or even touch upon, each and every one of its aspects. One point to mention is that proof is considered to have two distinct functions: one is to verify that some statement is true, and the other is to explain why it is true. As some have suggested, these two can be divorced and treated philosophically separately (see, e.g., (MacKenzie 2005)), since they serve two different epistemic functions. Moreover, in works such as (Giaquinto 2005) the differentiation is extended and elaborated, as Giaquinto distinguishes between explanation, which is directed at understanding; justification, which aims at gaining confidence; and proof, which is meant to achieve certainty; stressing that both explanation and justification “do not collapse into proving” (Giaquinto 2005, p. 76).

Indeed, for a “foundationalist” philosopher of mathematics this separation is characteristic, since on such an approach the two functions mentioned above are inherently different in kind. The (“real-by-itself”) truth-value of a statement is taken to be its fundamental attribute, which, once unambiguously decided, conclusively determines the position of the statement; and so determining whether the statement is true or false and verifying its validity is philosophically crucial. Compared with this feature, the understanding of the mathematical content of a statement is a much vaguer concept, entangled in a very complicated web of mental factors, and so considered problematic for philosophical investigation.

But how convincing is this separation? True, we can have a mathematical statement explained and understood, at least to some extent, without being sure whether it is true or not; while, in some situations, we may consider a statement to be true without actually (or fully) comprehending its content (this may happen, for example, when we are told by some trustworthy authority that the statement is true, and so we take the authority to be reliable enough to consider the statement true).9 However, from a non-foundationalist perspective, taking into account the typical actual work and motives of mathematicians, things look different. The prominent mathematician William Thurston, relating to the famous computer proof of the four-color theorem and the controversy surrounding it, writes:

I interpret the controversy as having little to do with doubt people had as to the veracity of the theorem or the correctness of the proof. Rather, it reflected a continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true. (Thurston 1994, p. 162)

The mathematician and philosopher Gian-Carlo Rota writes:

Mathematicians are on the lookout for … an argument that will uncover the still hidden reason for the truth of the conjecture. …. Not all proofs give satisfying reasons why a conjecture should be true. Verification is proof, but verification may not give the reason. (Rota 1997, pp. 186–187)

And the mathematician Joseph Auslander writes (relating also to mathematical exploration, an issue worth consideration by itself, but applicably to our discussion of proof here):

Proof as exploration: Every mathematician knows that when he/she writes out a proof, new insights, ideas, and questions emerge. Moreover, the proof requires techniques, which may then be applied to the consideration of new problems. What makes this topic interesting, and somewhat complex, is that there is not always a hard line between explanation and exploration. (Auslander 2008, p. 68)

Indeed, situations such as those described earlier, in which the two epistemic functions of proof are separated, are rather artificial and not indicative. Mostly and essentially, understanding the content of a statement is highly relevant to our judging it to be true; these two features are tightly connected, since the second is based on, and results from, the first. Analogously, instead of treating ‘explanation’, ‘justification’ and ‘proof’ as three separate epistemic concepts, we may rather take ‘proof’ and ‘justification’ to indicate different levels of “epistemic strength”, or “proximity to certainty”; ‘explanation’ can be considered a more general and broader concept, including ‘proof’ and ‘justification’ as its subtypes, in the same manner as taking ‘convincing’ and ‘certain knowledge’ as specific modes of ‘understanding’. A proof is an explanatory process particularly meant to convince us of some statement, while general explanation can have less distinct goals, harder to characterize (but by no means less important).

Mathematical explanation as a philosophical concept has been discussed and treated in some works (see, e.g., (Steiner 1978; Resnik and Kushner 1987; Sandborg 1998; Mancosu 2000)), but in relatively few, much less than the concept of proof. Apparently this has to do with the fact mentioned above, that ‘explanation’, being a vague and context-dependent concept, is considered more problematic for discussion in the analytic philosophical tradition. But if we are interested in a philosophical investigation of mathematical explanations, the textual means of communication and representation of the mathematics being explained, and the normative standards and evaluations of such means, are definitely relevant, since they not only enable but constitute such explanations.

To conclude this section: as mathematical practice is necessarily and prevalently tied up with text communication, the question “What is a good mathematical text?” (in a variety of meanings of “good”) becomes a significant part of the question “Which ways of doing mathematics are good?”. Considering texts, quality is estimated by a diversity of textual attributes and functions: clarity, intelligibility, elegance, incisiveness, profundity, learnability, modifiability, and more. Such properties are very hard to characterize unambiguously by reduction to simpler features; yet the normative status of the texts, and therefore of (part of) the mathematical activity, surely depends heavily on them. Consequently, the philosopher of mathematics should take them into account when characterizing mathematics’ place in the frame of the overall human epistemic complex.


Let us now turn again to the issue of conducting computing activities in light of its relations with mathematics. The three approaches to “program verification”, or to the functioning of operative computing, described in “Operational Computing: Practice and Theory” above, share an attempt to compare mathematics and computing, and in due course to draw conclusions about the right standards of computing management. According to both the first and the third approach, mathematics is satisfyingly modeled within the frame of formal axiomatic systems of deduction, and working within such systems guarantees the validity of mathematical arguments and the correctness of mathematical statements. For the spokesmen of these approaches, it seems this agreement needs almost no justification beyond mentioning the proven durability of mathematics against mistakes and contradictions through the ages. However, as claimed and explained in “What About Mathematics?” above, there are strong arguments against the purist formalist view of mathematics, and so against this agreement. The impossibility of fully modeling, within a formal axiomatic system, the text-representational, communicative, social, and normative aspects of mathematical activity raises, for those who recognize the significance of such aspects in the investigation of the epistemic status of mathematics, very hard, probably insurmountable, difficulties for taking formality as the comprehensive essential element of mathematics which assures our belief in its absolute correctness.

In this context, it is worth focusing on the third approach, represented here by Fetzer’s words. On this view, statements of “pure” mathematics are distinctly separated from statements of “practical” (computational) experience; accordingly, the former suit the formal deductive modeling which guarantees their correctness, while the latter do not, and so their proper functioning cannot be guaranteed. It seems that this view classifies the fields, and so separates their epistemic status, in too simplified a manner, by dichotomously separating the pure and abstract, formally represented, objectively and cleanly settled as true or false (“analytic”), from the concrete, learned via limited sensual experience, dependent on contingent practical factors, and so unavoidably uncertain (“synthetic”).10 Such attempts at separation face such huge philosophical difficulties that it is hard to see how a dichotomy like that can be defended.11

We can see how unconvincing this separation is, in the context it is discussed, by looking at examples where the examined object is not a computer program, necessarily intended to be implemented and run on some computer, but an abstract algorithm, represented in a formal or semi-formal language (like some programming language, pseudo-code, flowchart, etc’). Such an algorithm can be a well-defined object of a formal axiomatic system. Fetzer himself considers such a distinction, between computer programs which are intended to be run on computers and abstract algorithms (as “two kinds of computer programs”), and for him it seems obvious that the correctness of the seconds is an objective matter, suitable for formal check. However, as computing theoreticians and algorithm designers and practitioners know very well, fully comprehending such an algorithm, represented by hundreds or thousands (or even many more) lines or chart parts, can be very difficult; accordingly, the possibility to certainly and indisputably know (“to prove”) theoretically that this algorithm is absolutely correct may be realistically out of our reach. The theory of algorithms includes tools and techniques developed for proving the correctness of algorithms, but using them is limited to relatively small and simple algorithms (see, e.g., (Harel 2004, Chap. 5)). The problem of proving correctness is not only with objects which cannot be formally represented within an axiomatic system (abstract algorithms surely can be, and actually are, represented like that), but their complexity makes the proof of their correctness, in the frame of such systems, very hard, and sometimes, as stated before, practically infeasible. The correctness of algorithms cannot always be settled as a deductive process of drawing conclusions in a logical formal system. 
Treatment of computer programs, even at early stages of their lives, when they are still just abstract algorithms and have not yet been implemented as executable software, is often carried out by means other than formal methods of deduction, owing to the level of complexity of those algorithms. Bearing in mind the requirement to give a stable epistemological basis for how mathematics and (theoretical, algorithmic) computing are done, it is hard to accept Fetzer’s attempt to dismiss these difficulties as “a practical matter of the state of things” which is “meaningless in principle”.

The idea can be clarified and illustrated by examining the concept of abstraction, which is central and fundamental in software engineering. Generally and plainly, abstraction in this context refers to a representation which captures only the essential aspects of a computing system, reducing the complexity of the system apparent to the abstraction’s user. The more complicated and multi-component the system is, the harder it is to grasp its multi-level functioning, and so the harder the system is to design, create, operate, and maintain. Abstraction is a very useful and necessary methodological tool for handling these difficulties: it divides the system (from the early phases of its conceptual design onward) into parts according to its different functionalities and characterizes the interconnections of those parts; by eliminating from each subsystem the components which are irrelevant for some limited level of its functioning, it simplifies and eases the work with the whole system. Of course, deciding on the level of abstraction, the partition of the system into subsystems, and their mutual relations, components, and functionalities can be a very challenging task in itself. This issue has been studied and discussed extensively in software engineering (see, e.g., (Schach 2007, Chap. 7; Bjørner 2006, Chap. 12)).
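A toy sketch may illustrate the point (all names here are ours, invented for illustration): an abstract interface exposes only the functionalities relevant to the abstraction’s user, while the components implementing them remain hidden and replaceable.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Abstract interface: clients see only these two operations,
    not the components that implement them."""
    @abstractmethod
    def put(self, key, value): ...
    @abstractmethod
    def get(self, key): ...

class InMemoryStorage(Storage):
    """One concrete subsystem; it could be replaced by a file- or
    network-backed implementation without changing client code."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def client(store: Storage):
    # Client code is written against the abstraction only.
    store.put("x", 1)
    return store.get("x")
```

The interface partitions the system: the client part can be designed, understood, and checked with no knowledge of the storage part’s internals.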

Two features of abstraction make it a key element for adherents of formal verification. First, it is distinctly characterized by representing real-world systems, with many components at a variety of levels of concreteness, as much “cleaner” abstract systems of functionalities. Second, it enables and helps us to “divide and conquer” those systems by partitioning them into subsystems, each of which can be taken care of separately, at least at some level of functionality. These features contribute to our ability to verify the proper functioning of computing systems by using formal methods of representation, modeling, and processing (see, e.g., (Bjørner 2006, Chap. 12)). And so, naturally, abstraction and its usages in software engineering are considered to support the formal approach to program and system verification.

In our discussion here we do not intend to deny or doubt the utility and benefit of abstraction; it is definitely a very useful and valuable tool. But considering its two features mentioned above, we should notice their different relevance to our discussion. Considering the first one—representation of real-world systems as abstract systems—we state and emphasize again that even though abstract systems are more amenable to formal treatment than real-world systems, formality does not exhaust all the aspects of abstract systems (such as algorithms), and so it is not likely to be the ultimate, certainly not the only, method for achieving proper functioning of computing systems and programs. As for the second feature—assisting the use of the “divide and conquer” paradigm by simplifying the treatment of (relatively) separated subsystems—this holds for every level of concreteness or abstraction of the system, whether the system serves some daily-life practical activities or is an abstract algorithm (large enough to be interesting and challenging to understand).

Let us add that there are different levels of abstraction in mathematics too. There are, of course, numerous applications of mathematical theories in many fields; moreover, mathematics is a reflexive field—that is to say, mathematics includes bodies of theories about making mathematics, e.g., proof theory—so analyzing a mathematical system (or some parts of it) may be carried out within the frame of a more abstract mathematical subfield. This raises an interesting issue, which will not be dwelled on here, of the relation between mathematics and the practice of mathematics. For our purposes, we just remark that though the question of what “the real world” is where mathematics is concerned still calls for cogitation and investigation (by the philosopher of mathematics), the potential of abstraction as a tool for treating systems, and the inability to completely exhaust such treatment by formal methods, hold for mathematics just as they hold for computing, as considered above.

Turning again to the second approach in the debate mentioned in “Operational Computing: Practice and Theory”, represented by DLP, it seems that it can be recruited for an anti-foundational position in the philosophy of mathematics, but by reversing the analogy in question. DLP take for granted the assumption that formalistic modeling of mathematical systems is not what enables true understanding of mathematics, and so, by analogy, it will not enable such understanding in the computing field either. Even for someone who accepts this assumption, it seems, philosophically and strategically, preferable to see the difficulties of achieving certainty in the computing fields, specifically by program verification, as strengthening and supporting such an assumption about mathematics, rather than relying on this assumption about mathematics to explain the problematic character of achieving certainty in computing. I find it felicitous to characterize the difficulties of achieving certainty in computing as difficulties which stem, inter alia, from social and communication dynamics and from the need to connect and preserve perceptual elements of textual representation—factors which cannot be fully and formally axiomatized and modeled—and then to take (the relevant part of) computing as a fundamental “practical” aspect of (“applied”) mathematics, and hence deduce the pertinence of these factors to the status of mathematics. This is more convincing than beginning with the assumption of such factors in mathematics and deducing from it the characterization of the difficulties in computing.

To sharpen the discussion, let us examine the meaning of ‘correctness’. Correctness can be interpreted as being in accordance with the state of things “out there” in the relevant context, as in the classical logical sense; it can also mean consistency with some requirements, in a strict formal sense. What sort of correctness do we require from a computing system (including computer programs)? The requirements of the system are specified in some language, and we want it to behave according to these requirements, which is correctness in the second sense. Indeed, formal verification methods concentrate on representing the specifications in some appropriate formal language and taking care of the consistency (as the possibility of being fulfilled) of those specification conditions in some instances of implementation of the system. However, we have to keep in mind that the ultimate goal of the system is to function in some situations of a non-formal context, i.e. to be correct in the first sense of correctness. It is interesting to notice how both DLP and Fetzer use this distinction as an argument for their positions, and both of them, in my opinion, miss it, at least to some extent.

The gap (or relation) between those two senses is an issue which has been dwelled on in many discussions of the matter. DLP mention it as a “fundamental logical objection” to verification, since they take the transition from the (informal) requirements to the (formal) program as necessarily informal. Ideas of the same kind appear in other works (see, e.g., (Zemanek 1979; Smith 1985)). Such critiques of formal verification methods have been answered by the argument that the two senses of correctness of a computing system are simply two phases of a correctness proof of that system, and neither of them is valid without the other, so formal verification is not only legitimate but necessary (see, e.g., (Maurer 1979)). This answer is convincing, and so it is rather hard to accept DLP’s vigorous, comprehensive rejection of formal methods. The importance and usefulness of formal methods of verification is not to be denied. But we should notice that the “gap argument” above speaks for the importance and inevitability of informal treatment of computing systems. Formal methods, useful and fertile as they may be, are not the ultimate and final element of verification.
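The gap between the two senses can be illustrated with a deliberately simple, hypothetical example (the requirement, the specification, and the function are all ours, invented for illustration):

```python
# Informal requirement: "report the average grade of a class".
# One possible formal specification (an assumption, for illustration):
#   for any non-empty list g, average(g) == sum(g) / len(g)

def average(grades):
    return sum(grades) / len(grades)

# The function is correct in the second (formal) sense: it provably
# satisfies the specification above.  Whether it is correct in the
# first sense depends on the informal transition from requirement to
# specification: should an empty class yield 0, or an error?  The
# specification silently leaves that case undefined, and the function
# raises ZeroDivisionError; no proof against the specification can
# settle which behavior the informal requirement "really" demands.
```

Formal verification operates entirely on the right-hand side of that transition; the transition itself, as DLP stress, remains informal.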

As for the meaning of correctness, we can take a more moderate version of the first sense by appealing to satisfying functioning. We need the computing system to function in some accordance with the “outer world”, not necessarily ultimately—i.e., guaranteed to be completely free of any mistakes and “absolutely true”—but in a manner which enables a (broader) effective functioning of the relevant contextual environment of that outer world. If a computing system possibly includes some mistakes, but its functioning, as far as we can check and ascertain, fulfills our requirements, we may not call it absolutely correct, but we can take its proper functioning as satisfying enough.

What about correctness in mathematics? As briefly noted in “What About Mathematics?”, in the philosophy of mathematics the question of the meaning (or existence) of an “outer world” to which mathematical statements refer is in dispute. Some philosophers (“formalists”) consider mathematical statements as referring to no subject matter, and so mathematical correctness is interpreted in the second sense mentioned above. There are serious philosophical difficulties with this position which we are not about to discuss here, but taking an opposing position and arguing for “outer” referents of mathematical statements (and so interpreting mathematical correctness in the first sense) is not easy either (see, e.g., (Hersh 1997; Shapiro 2000)). One option we can adopt is to consider some milder sense of satisfying functioning for mathematical correctness, taking the relevant “outer world” context to be simply our epistemic world-picture. The correctness of a mathematical statement may be checked by some formal calculus rules, but this correctness is determined by the accordance (non-conflict) of that statement with our whole epistemic web of belief (and this check is possible since those rules are also in accordance with that web).

Fetzer distinguishes “pure” mathematical statements from “applied” computing statements, taking formal methods as suitable for the verification of the former but not of the latter. This dichotomy parallels the distinction between the two senses of correctness discussed here, and Fetzer obviously takes mathematical statements to be correct in the second sense and computing statements in the first. But as claimed and explained above, this distinction does not have to be interpreted as a strict dichotomy, but rather as two complementary aspects. The dichotomy is quite problematic and unconvincing, while the complementary approach is fertile, both philosophically and practically. Proper functioning is a suitable term for what is required in both mathematics and computing, and it is to be achieved by formal methods and by other means as well.

(One may remark that the general point of our discussion—the great importance of informal, communicational, social, normative factors in mathematics and its making—can be discussed and advocated by relating directly to mathematical systems alone (including theoretical algorithms), without considering real-world computing systems and computer programs run on actual electronic machines. Such a treatment of the issue does not have to focus on Fetzer’s work. This is true, but an essential element of the argument here is the objection to the dichotomous separation between theoretical mathematics in its “purity” and applicative (including computational) “real-world” mathematics, both in general philosophical characterization and in the practice of enhancing performance and improving functioning. From this perspective, Fetzer’s ideas are highly relevant.)

Our attitude towards the correctness (or functioning) of a system definitely depends on the importance of that system, or on the effect of faults in its functioning. Computer systems are involved in almost every aspect of our life, and while some of those aspects are critical, others are relatively minor. A fault in the computing control system of a nuclear plant may be disastrous, while a fault-caused temporary shutdown of Facebook is no more than frustrating and annoying. The measures we take and the means and methods we use in order to decrease the chances of such faults and increase our certainty that the system is correct are proportional to the importance of the system, and that includes the usage of formal and informal methods alike. The more important and critical the system is, the more numerous and varied the means we use for assuring its correctness, and the less we are satisfied and feel safe with only a limited kind of such means. Software engineering is a very rich and diverse realm because many kinds of verification means, formal and informal, are needed and have proved to be productive (see, e.g., (Schach 2007, introduction part)).

Correctness of mathematical systems is usually considered differently, since mathematics is not usually thought of as critical in a similar manner. A mistake in some (theoretical) mathematical argument may be intellectually disturbing, but what effect does it have outside mathematics? If mathematical correctness is so different in kind from computing correctness, does this not draw a separation line between the two, a separation which counts against our analogy?

In response to that, we should first recall not only the greatly diverse applicability of mathematics—which means that the correctness of mathematical systems does have a significant effect on the “outer” environment—but also the dynamism of that applicability. It is hard, perhaps impossible, to declare a subfield of mathematics completely inapplicable, since we cannot know what possibilities of application may arise in the future. The history of mathematics tells of “purely theoretical” mathematical subfields which have turned out to be very applicable (like, e.g., number theory). This has to do with the aforementioned objection in principle to the dichotomous separation between “theoretical” and “practical”, “logically necessary” and “empirically contingent”. By the same line of argumentation, we should notice that separating the intellectual and epistemic context of mathematics from the general epistemic and functional context (including sub-contexts of computing systems) is unacceptable in principle. The differences in the problematic character or severity between the dysfunction of (some) computing systems and the dysfunction of (some) mathematical systems may be a matter of degree or of a specific kind, but not a matter of a conceptual kind in general.

In the same context, comparing the aspect of “correctness” or “functioning” in mathematics and in computing calls for an obvious response: we come across, as mentioned here in “Operational Computing: Practice and Theory”, many problems with the functioning of computer systems; “bugs” are often discovered in computer programs, and computer faults occur quite frequently. This does not happen in mathematics; one of the most conspicuous features of mathematical systems is their solidity and indubitable veracity. So how can the two be compared?

Responding to that, we should first notice that the common myth of mathematics being ultimately immune to mistakes, contradictions, and refutations does not fit so smoothly with the history of mathematics. During the development of mathematics through the ages, conceptual changes have occurred, some of which oppose and contradict former conceptions; moreover, many errors have been detected in proofs of statements, and some of those statements have been refuted.12 We can add that, argumentatively, the pretension to separate abstract fields from the need to lean on “practical experience”, on the one hand, while relying on empirical evidence for the soundness of that separation, on the other hand, seems like running with the hare and hunting with the hounds, and is not very convincing. But having said that, it is still a fact that mathematics, as a field of study and research, appears to be highly reliable compared to other fields (including the natural sciences, and also computing). Mistakes and contradictions which have occurred and been discovered in mathematics have never led to the rejection of a large part of the mathematical body of knowledge or of any of its basic principles, as has happened in other fields. So there is still a need to explain the functional difference between mathematics and computing, and what can be learnt from it.

If we accept the basic idea of the second approach, described and represented by DLP’s claims in part 2, of the significance of informal factors, it may help to explain, at least partially, the differences between the two fields in factors of representation and social communication and their treatment. Mathematics has been done by humans for thousands of years, and during that long period the modes of representation of objects, processes, and statements in mathematics have gone through many metamorphoses. Social mechanisms and processes of communication, text representation and handling, and normative selection and estimation have been developed for mathematical activities and their products; in that sense, “mathematics engineering” is an old, mature field. As opposed to that, modern computing is only a few decades old, and attempts to arrange and consolidate similar mechanisms and processes for it are, naturally, much younger. It should not surprise us, therefore, that computing engineering supplies, at this stage, less efficient and advanced tools for treating those aspects.

Yet there is one more inherent and interesting difference between mathematical activities and operative computing activities. The mathematician who tries to prove some statement usually carries out two processes, in parallel or one after the other. If we describe the statement as a set of conditions which implies some conclusion, the mathematician tries, on the one hand, to intuitively characterize the conditions, looking for conceptual elements which exhaustively pinpoint the relevant gist of the conclusion (“the big picture”); and tries, on the other hand, to progress in local deductive steps from the conditions (or some of them), in order to reach the conclusion (“a detailed implementation”). The process of proving is often described as starting from the characterization of the whole picture and continuing to a multi-stage detailed implementation, but in practice it is not necessarily so—the proof might be obtained by starting from “small” deductive steps, and only after figuring out and forming what stems from these steps, continuing with some general intuitive characterization, and so on. In any case, these two processes are mostly combined and intertwined, so the prover’s intuitive relation to the situation is kept, and the transition steps of the deduction are controlled and inspected in particular as well.
On the other hand, in algorithmic-programming thinking, the connection between these two processes is often more problematic, because the technology of algorithmic-programming operation is stronger than in mathematics: an algorithmic-programming characterization of basic deductive operations (following, or combined with, an intuitive characterization of the whole conceptual picture) can embody a far-reaching deductive step, a step which might be very difficult to follow without breaking it into locally controlled operation stages.13 Accordingly, even a long chain of statements of a mathematical deduction can, in most cases, be controlled and inspected step by step (though this takes its toll, and is not necessarily easy); that is opposed to the situation with even a relatively short algorithm, whose result (and its correspondence with the requirements) might be very hard to determine, there being no general method for doing so.14 Let us emphasize again that the problem is not any impossibility of the algorithmic process being formally and axiomatically defined, but the “disproportion” between the elementarity of the basic operative steps of the algorithm and their effect on the conditions and processes of the system; and, as a result, the loosening of the connection between the basic intuition of the algorithm, with its infrastructure of certainty about the primary state of things, and the computational steps occurring in the algorithm with their results. So there is no need at all to take that difference as strengthening the dichotomous distinction between the epistemic status of mathematics and that of computing. Instead, we might take it as a manifestation of the development processes of mathematics within computing, processes which should be taken into consideration while investigating the status of mathematics.
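A standard illustration of this “disproportion”, not drawn from the works discussed here but well known, is the 3n + 1 (Collatz) procedure: each operative step is utterly elementary, yet no general method is known for predicting, for an arbitrary starting number, whether or how fast the process terminates.

```python
def collatz_steps(n):
    """Count the steps of the 3n+1 procedure until reaching 1.
    Each step is elementary (halve, or triple and add one), yet no
    general method is known for determining the behavior of the
    whole process for arbitrary n; termination for all n is an
    open conjecture."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

Nearby starting points behave very differently (e.g., 26 terminates quickly while 27 takes over a hundred steps), exactly the loosening of the connection between local steps and global result described above.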

Let us stress again that accepting the basic idea of the second approach mentioned—recognizing the importance of representational, communicational, and social factors in the functioning, and so in estimating the epistemic status, of both mathematics and computing—does not compel us at all to fully reject the attempts to use formal modeling of computing systems in order to increase the certainty of our knowledge about computing and mathematical systems. When such modeling is combined within a whole system of inferring and learning (without necessarily pretending to “pure” deductive logic), it might very well be useful as a strengthening element of our (proximity to) certainty, founded on the acceptance of a basic conceptual infrastructure which has already proved to be stable.15 Analogously, the mathematician sometimes accepts as true statements which are attained by operative deductive manipulations in a formal axiomatic system (“formal proofs”), even without directly and intuitively comprehending their meaning. This is done by taking formal proving mechanisms as tools which have proven themselves (combined with other tools) to be successful. Of course, this is meaningfully different from characterizing the formal deductive aspect as the ultimate essence of mathematical activity, assuring, as it were, its absolute certainty.

To strengthen this point, we may take a time-perspective look at the issue. More than 30 years have passed since DLP published their work, and during that time the subject of program verification, and formal treatment of computing systems in general, has come a good part of the way. In a recently published paper, (Asperti et al. 2009), the authors survey many of the advances and works on the subject, concluding that some methods of formal verification have actually been found to be very fertile and effective. Moreover, a stronger argument of (Asperti et al. 2009) is that some of these methods, and the realm of formal verification in general, have been found useful and promising for mathematical practice, specifically for theorem proving. So Asperti, Geuvers, and Natarajan reverse the analogy between mathematics and computing, but from the opposite direction to the one presented here: they take mathematical practice to rely on the usage of computing means of formal verification. Accordingly, Asperti, Geuvers, and Natarajan’s approach to DLP’s work is mainly critical, as they do not accept DLP’s strict rejection of formal methods.

We may very well agree with Asperti, Geuvers, and Natarajan’s positive attitude towards the potential and value of computing formal systems in mathematics, which means we do not have to accept DLP’s position wholesale. The argument here is for the importance of social, communicational, and normative factors in mathematics as well as in computing, and for the implausibility of the pretension of dealing with mathematics and computing completely formally; this is not to say that formal tools and methods are not useful and successful at all, in either mathematics or computing.

This can bring us to identify, in the third approach described in “Operational Computing: Practice and Theory”, a true basic idea and description: indeed, the multiplicity and diversity of experimental and contingent aspects of working in and with many computing systems make their control pretty difficult, and this surely serves as a partial explanation for the problems with assuring their proper functioning; that is in comparison with other systems (like some mathematical systems, including computing systems, and even systems of many other fields), whose parallel aspects are simpler and more limited, and have been investigated, studied, and organized more comprehensively—inter alia (but not only) by formal analysis tools—and so function more properly. Computing systems are also systems of mathematical activities, and those of their elements which are problematic (or impossible) to formalize manifest different features of mathematical work, and accordingly different levels of possibility of formalizing that work. It is important to notice that these levels do not have to be fully parallel to the levels of certainty we acquire about different ways of making mathematics, though it is reasonable to assume that they partly would be: the tools for studying and organizing a field of knowledge and work include the formal modeling and analysis methods; the suitability of other tools for studying and organizing the field, such as those concerning the social and communicational aspects, need not lead to avoiding the use of the formal tools. Actually, using such other tools can help us to gain a better understanding of the use of the formal tools, resulting in a more successful and beneficial usage of them.

How have the difficulties in operative computing been coped with? In the large field of software engineering (or computing engineering), many methods and heuristics have been developed and tried in order to identify mistakes in computer systems, reduce their number, and minimize the damage they cause: characterizing and stating clearly (though not necessarily formally) the requirements of the program, both at a general level and in functional detail, before writing; attempting to estimate and measure the prevalence and severity of mistakes in the program; dividing complicated programs according to their functionalities and checking the different parts—each one simpler than the whole program—separately, before combining them into one program or one system (as mentioned before, considering abstraction); carefully and systematically reviewing and inspecting the code of the programs and checking it semantically; testing and examining trial versions of (parts of) computer programs before the programs are fully operated and run “for real”; and taking into consideration psychological and sociological factors of the working conditions of computer programmers and workers, in order to create good working environments which enable more efficient programming work. All these and more combine different aspects of “engineering” at different levels of formality, and they share the backbone idea that computer programming is a human activity, and as such is bound to include mistakes; and since absolutely eliminating the mistakes by formalizing the process of programming is not realistically possible, we should not cling to an ideal of “absolute correctness”; rather, the efforts should be focused on “proper (or satisfying) functioning”—a milder term, but a very fertile and productive one.
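One of the practices listed above, testing trial versions of parts before combining them, can be sketched as follows (a hypothetical fragment; the function and its requirements are ours, invented purely for illustration):

```python
import unittest

def parse_price(text):
    """Convert a user-entered price like '12.50' to whole cents
    (an illustrative, hypothetical requirement)."""
    value = float(text.strip())
    if value < 0:
        raise ValueError("price cannot be negative")
    return round(value * 100)

class TestParsePrice(unittest.TestCase):
    # The part is checked separately, before being combined into a
    # larger system; the tests are informal probes of the informal
    # requirement, not a formal proof of correctness.
    def test_typical(self):
        self.assertEqual(parse_price("12.50"), 1250)
    def test_whitespace_tolerated(self):
        self.assertEqual(parse_price(" 3 "), 300)
    def test_negative_rejected(self):
        with self.assertRaises(ValueError):
            parse_price("-1")
```

Such tests cannot establish “absolute correctness”, but they raise our confidence in the part’s proper functioning, which is exactly the milder aim described here.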

In this context, taking a look at the computing field after a few decades, we can say that it actually functions pretty well. Some people may be puzzled by this claim or object to it, saying that, as mentioned above, many computing faults and “bugs” do occur; computing practitioners still need to spend a lot of effort, time, and money to take care of those mistakes, correct them, and control and minimize the damage they cause. But those errors which reduce our chances of achieving the desired certainty and cause malfunction of computing systems happen locally—in specific algorithms, specific programs, specific computer systems. Not only are the general theories of programming and algorithmics no less stable and credible than “classic” mathematical theories, but large-scale malfunctioning of computer systems is very rare; we hardly hear about “computing disasters”.16 As described in a nutshell in the previous paragraph, many efforts have been made in response to the general problem of the functioning of computing systems, and, as a result, a large percentage of local faults in computing systems are detected and corrected before the systems are operated for real; accordingly, the increase in the number of system failures caused by such faults has been curbed. Actually, some computing professionals are surprised by the robustness of operative computing. It is an interesting challenge to explain this robustness, and a rigorous-formal foundational establishment does not seem like a suitable, convincing explanation. What can serve as such is the natural development of these mechanisms of arrangement and regularization of the representational, communicational, and social activities of computing.17


What is the lesson to be learnt from this discussion with regard to mathematics? Theoretically (that is to say—philosophically) speaking, comparing mathematics and computing and examining the dynamics of the latter realm can equip us with a criterion for epistemic functioning: a system can be considered as correctly (properly, satisfyingly) functioning if some relevant audience can be convinced that this system manifests some process which can be methodically repeated and yields some desired results. This view of functionality can apply to mathematics, especially if we take computing systems as including, inter alia and inseparably, some kinds of mathematical activities. Mathematics can be considered an integral part of our “web of belief”, and not necessarily a singular case of it.

Practically (that is to say—operatively) speaking, we may consider the improvement of our “mathematics engineering” by investigating its mechanisms for arranging and regulating representational, communicational, and social elements, and probably developing some more. Are the representation methods for the requirements of mathematical proofs (i.e., the theorems and their statements to be proved), in complicated and complex cases, well suited? Do we handle very long mathematical (computational) processes efficiently, functionally dividing and conquering them by parts? Do we have profitable methods for defining simple test cases of computational statements, methods which can help us to determine the region of correctness of the statements, the objects for which the statements are true? Can we systematically estimate the psychological and sociological factors which affect the mathematician’s study and work, and arrange the working conditions of mathematicians accordingly? It is true that mathematics has proved reliable and trustworthy for quite a long time, but this does not mean that we cannot, and should not try to, improve the productivity and efficiency of mathematical work. Such attempts will hopefully result in the betterment of mathematical functionality—more and better mathematical accomplishments.


The word "operational" ("operative", "operating", etc.) is somewhat overloaded in computer science (operating systems, operations research, and more), but it is still used in this work, mostly with "computing", referring to "practical computing" in the sense of actual programs run on electronic machines. Sometimes it is used for activities taken in some other, probably more theoretical and general, context.


The term 'software engineering' was coined, apparently, around the middle of 1968, before the first conference on software engineering was organized in October 1968 (see Pelaez 1988). As we are interested here in "engineering" of different aspects, not only of software, the term 'computing engineering' might be preferable, though mostly we use 'software engineering', for reasons of tradition.


Some crucial developments in (what is considered nowadays as) the theory of computer science, which partly evolved prior to operational electronic computing, were done as mathematical works in the full sense of the word. Here we consider later developments of operational computing, particularly actual programming, the design and creation of programming tools, running computer programs, and operating computer systems.


Of course, this distinction is somewhat similar to the well-known distinction in philosophy between "a priori" and "a posteriori" knowledge. (See also footnote 10 here.) In a later interview Fetzer said that such a distinction had indeed been in his mind, but that he had chosen not to use these terms, considering them too technical a philosophical terminology (see (MacKenzie 2001, p. 215)). Fetzer also distinguishes, more delicately, between statements which are learnt directly by observation and those which are deduced by induction and causality, but the important distinction here is the one described before.


For an updated list of publications, see, e.g., (Rapaport 2007) (as of 12 April 2007; occasionally updated).


Very many texts in the philosophical literature deal with a variety of aspects of this rich subject. See, e.g., the comprehensive and systematic (Hersh 1997; Shapiro 2000).


There are different kinds of "realistic" views in the philosophy of mathematics (as well as in philosophy in general). One view takes mathematical objects to exist independently of the human mind, and so to have their properties independently of our being in touch with them ("ontological realism"). Here we make do with a more moderate version of realism, assuming that the truth value of mathematical statements is determined independently of the human mind ("truth-value realism") (see, e.g., (Shapiro 2000)). Of course, the first kind implies the second.


Arguments of this kind can be found in texts of philosophy of mathematics, such as, e.g., (Davis 1972; Ernest 1991; Bloor 1994; Hersh 1997).


An interesting discussion of such situations, from a philosophical point of view, appears in (Tymoczko 1979).


Here we use the well-known terms "analytic"/"synthetic", as our presentation fits that pair, at least in many of its common uses in philosophy, better than the pair mentioned in footnote 4, "a priori"/"a posteriori". The relations between these two pairs and the differences in meaning between them are a complicated and much-discussed philosophical issue, which we need not dwell on here. The terms are used mainly as "keyword connectors" to the relevant philosophical discussions.


The philosophical literature considering this issue is very rich. A few works to mention, of the very many, are the momentous and much-celebrated (Quine 1951), its important "philosophical descendants" (Putnam 1962) and (Pigden 1987), and the later work (Putnam 2002).


That issue is also treated in many works. See, e.g., (Kitcher 1984; Crowe 1988; Sasaki 2005).


Every programmer knows the process of debugged interpreting: controlled step-by-step execution of a computer program, inspecting and checking the result of every step and comparing it with the desired and expected result of that step.
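A minimal sketch of this stepwise checking (not from the original text; the computation and the invariant are illustrative assumptions) is a loop that compares each intermediate result with the expected one:

```python
def checked_sum_of_squares(n: int) -> int:
    """Sum 1^2 + ... + n^2, checking every step against the closed form."""
    total = 0
    for k in range(1, n + 1):
        total += k * k
        # As in debugged interpreting: compare this step's result with
        # the expected result, here the closed form k(k+1)(2k+1)/6.
        assert total == k * (k + 1) * (2 * k + 1) // 6, f"fault at step {k}"
    return total
```

An interactive debugger automates the same discipline, pausing after each step so the programmer can perform the comparison by inspection.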


Scherlis and Scott, considering the issue of program verification, probably had some similar idea in mind when they wrote, in their interesting work (Scherlis and Scott 1983): "But, even relatively elementary programs tend to be more complicated than elementary theorems. Why? Because in a certain sense more of their structure has to be made more explicit. In mathematical literature, it is often sufficient to do one precise calculation, and for the other cases say 'similarly'. A proof is often more a genteel tourist guide than instructions for a treasure hunt. Programs, on the other hand, not only operate on very highly structured data, but they must do so in unobvious ways in order to be efficient."


See the work mentioned in the previous footnote, (Scherlis and Scott 1983).


Computing errors do happen, as said, and so faults of computing systems are not rare at all. But here we consider "disasters": faults which result in harsh damage, such as death, severe injury, or grave financial loss. Such cases are very uncommon. See, e.g., (MacKenzie 2001, pp. 299–301; TRD 2009).


For interesting suggestions and discussions of such explanations, see (Collins 1990, pp. 62–65; MacKenzie 2001, Chap. 9).



This paper is based on work which has been carried out while I was a doctoral student at The Cohn Institute for The History and Philosophy of Sciences and Ideas, Tel-Aviv University. I wish to thank Leo Corry for his help and comments.

Copyright information

© Springer Science+Business Media B.V. 2011