On malfunctioning software

Abstract

Artefacts do not always do what they are supposed to, for a variety of reasons, including manufacturing problems, poor maintenance, and normal wear and tear. Since software is an artefact, it should be subject to malfunctioning in the same sense in which other artefacts can malfunction. Yet whether software is on a par with other artefacts when it comes to malfunctioning crucially depends on the abstraction used in the analysis. We distinguish between “negative” and “positive” notions of malfunction. A negative malfunction, or dysfunction, occurs when an artefact token either does not (sometimes) or cannot (ever) do what it is supposed to. A positive malfunction, or misfunction, occurs when an artefact token may do what it is supposed to but, at least occasionally, also yields some unintended and undesirable effects. We argue that software, understood as a type, may misfunction in some limited sense, but cannot dysfunction. Accordingly, one should distinguish software from other technical artefacts, since the design of the former makes dysfunction impossible, while that of the latter leaves it possible.

Notes

  1. Admittedly, some artefacts are not obviously function-bearing. For instance, one may be reluctant to attribute a function to some artworks or decorations. But such exceptions may be set aside as outside the scope of our analysis.

  2. See e.g., Hansson (2006) and Houkes and Vermaas (2010).

  3. Karen Neander writes, “Most (if not all) physiological categories are functional categories [...] (This should seem a familiar idea, because categories of artefacts are similar: a brake is a brake in virtue of what it is supposed to do—was intended or designed to do—not in virtue of having some specific structure or disposition....)” (1995, p. 117).

  4. Neander (2004) raises this criticism regarding Wright (1973), and Ruth Millikan (1989) does the same for Robert Cummins’ causal-role functions. Paul Davies (2000a, b) turns the criticism around, claiming that historical functions provide no better account of malfunction than causal-role functions. Indeed, Millikan has argued that this point applies to functional categories in general, and not just to artefactual and biological types: “it is of the essence of purposes and intentions [and hence, of functions] that they are not always fulfilled” (1989, p. 294).

  5. See, for example, Houkes and Vermaas (2010), and Jespersen and Carrara (2011).

  6. See Fresco and Primiero (2013) for more details. For an analysis of errors related to ethical, policy and legal approaches to software development and maintenance, see Gotterbarn (1998).

  7. The literature in both philosophy of technology and philosophy of computer science often links the role of specification and design in determining artefacts’ correctness to a normative aspect. This link relies on different possible understandings of normativity. On one understanding, the description of what an artefact does is translated into the specification of what an artefact should do. On another understanding, an artefact’s design indirectly establishes the criteria for the ethical and legal evaluation of the consequences of the artefact’s use. On yet another understanding, the stable and successful realisation of technological artefacts requires agents to behave so as to enable their intended functioning. For more on this see, for example, Vincenti (1990), Radder (2009) and Turner (2011).

  8. It is worth noting that in computer science the notion of a side-effect is not a negative one. It refers to the ability of a program to modify the state of a system or to produce an observable interaction with the environment, in addition to returning a value from the function called. In the ensuing discussion, we use the term side-effect in its everyday sense of an unexpected result of an action.
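  For readers less familiar with this technical usage, the following minimal Python sketch (our own illustration; the names and the shopping-cart scenario are purely hypothetical) shows a side-effect in the neutral, computer-science sense: the function returns a value, but along the way it also modifies state outside its own scope and produces observable interaction with the environment.

```python
call_log = []  # module-level state, modified as a side-effect


def add_item(cart, item, price):
    """Return the updated total; everything else the call does is a side-effect."""
    cart.append((item, price))             # side-effect: mutates the caller's list
    call_log.append(f"added {item}")       # side-effect: mutates module-level state
    print(f"{item} added at {price:.2f}")  # side-effect: observable output
    return sum(p for _, p in cart)         # the value returned by the call


if __name__ == "__main__":
    basket = []
    total = add_item(basket, "book", 12.50)
    # total is 12.5, but basket and call_log have also changed, and a line was printed
    print(total, basket, call_log)
```

  None of these effects is, by itself, “negative” in the sense discussed in the text; they are simply part of what the program does besides returning a value.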

  9. Neander refines this rough definition later by specifying that “what it was selected for” should be interpreted as the “lowest level of description” applicable.

  10. We use the terms “well-functioning” and “properly functioning” interchangeably throughout.

  11. We assume that its target is one that the missile is designed and reasonably expected to hit. If the missile is designed for slow-moving aircraft, then the fact that it cannot strike a modern jet fighter is irrelevant to whether it is functioning properly.

  12. This example is drawn from the recall notice for certain models of Olympus film cameras (http://www.cpsc.gov/cpscpub/prerel/prhtml06/06250.html). These cameras were prone to overheat due to defects in the flash circuit. The recall notice reports no other symptoms.

  13. See http://www.cpsc.gov/cpscpub/prerel/prhtml06/06181.html.

  14. It may be that pollution controls are primarily motivated by regulation rather than consumer interest, but this is beside our main point.

  15. Of course, one could still claim that standard cars do misfunction by polluting, when compared to other means of locomotion that satisfy the same functions without polluting. The point is whether this evaluation is clear (it is), not whether it is justified (it may not be).

  16. As a disclaimer, it should be noted that there are many subtleties in the distinctions made here that exceed the scope of the article.

  17. x is a mathematical entity and as such it can be identified with a mathematical function. We do not offer a taxonomy of programs and software based on this property.

  18. A further subtle distinction could be drawn here between compiled and interpreted programming languages. We do not include it, because it is not significant for the remaining discussion.

  19. The type/token distinction can be approached differently if another LoA is considered. For example, if one takes into account only the machine code and not the source code (which is essential for our analysis), one could consider the executable as a type and its copies as tokens. The LoA is crucial to our formulation of Thesis 3. For an accessible discussion of some curious features of programs and software see, for example, Berry (2011, Chaps. 2 and 4).

  20. Since we are no longer dealing with mathematical functions, but teleological functions, we might say that MS-Word for Windows and OpenOffice Writer have a similar, but not the same, basic function, namely, word processing. But they implement different sets of features, only some of which overlap.

  21. One could add here a further classification, referring to the different instances of the same distribution by talking of software versions: for example, the ordered, numbered—or sometimes alphabetically named—instances of the same software or program issued over a given period of time. The problem is that software versions are not necessarily instances of the same underlying algorithm design. Version 1.1 of a program might contain modifications to the design of version 1.0, such that the two versions can no longer be considered instances of the same algorithm.
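  To make the point concrete, here is a hypothetical Python sketch (our own; it is not drawn from the article’s examples): the two versions below satisfy the same functional specification—return an index of target in a sorted list, or -1 if absent—yet they implement different algorithms, so they are not tokens of one and the same algorithm design.

```python
def find_index_v1_0(sorted_xs, target):
    """Version 1.0: linear scan over the list."""
    for i, x in enumerate(sorted_xs):
        if x == target:
            return i
    return -1


def find_index_v1_1(sorted_xs, target):
    """Version 1.1: binary search; same specified behaviour, different algorithm."""
    lo, hi = 0, len(sorted_xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_xs[mid] == target:
            return mid
        if sorted_xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

  On the classification discussed in the text, version 1.1 is then better regarded as implementing a different algorithm than as a further instance of the algorithm implemented by version 1.0.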

  22. See also Fresco and Primiero (2013).

  23. For more details, see, for example, Hodges (1993, 1995), Kirchner and Mosses (2001), and Turner (2005).

  24. We thank an anonymous referee for this important objection.

  25. In the example above, c may be initially overlooked when the functional requirements for W are documented. It will thus escape the normal testing process that is common in software engineering practice. If, at some point, this is discovered, W will be deemed to malfunction (as indicated by the eventual fixing of W, the addition of c to the requirements specification, and the addition of an appropriate test case). This clearly shows that software cannot be “blamed” for malfunctioning, as the malfunction always results from some design error.
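  As an illustration of how such an overlooked condition surfaces in practice, the following minimal Python sketch (our own hypothetical example; the word-count program, the whitespace condition and the test names merely stand in for W and c) shows the test case that would be added once c is discovered and written into the requirements specification.

```python
import unittest


def word_count(text):
    """A stand-in for W: count the words in a piece of text."""
    # Naive design: split on single spaces. It miscounts on condition c
    # (input consisting only of whitespace), a case that was absent from the
    # documented requirements, so no test exercised it.
    return len(text.split(" "))
    # The eventual fix would be: return len(text.split())


class TestWordCount(unittest.TestCase):
    def test_ordinary_text(self):
        self.assertEqual(word_count("to be or not to be"), 6)

    def test_condition_c_added_later(self):
        # Test case added only once c is discovered and specified;
        # the naive design above fails it (it returns 4, not 0).
        self.assertEqual(word_count("   "), 0)


if __name__ == "__main__":
    unittest.main()
```

  Until the second test exists, the testing process reveals nothing about W’s behaviour on c; once c is specified, W is deemed to malfunction, and the remedy is a revision of the design, the specification and the test suite, rather than anything attributable to the software token itself.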

References

  • Angius, N. (2013). Abstraction and Idealization in the formal verification of software systems. Minds and Machines, 23(2), 211–226.

  • Angius, N. (2014). The problem of justification of empirical hypotheses in software testing. Philosophy and Technology, 27, 423–439. doi:10.1007/s13347-014-0159-6.

  • Berry, D. M. (2011). The philosophy of software: Code and mediation in the digital age. New York: Palgrave Macmillan.

  • Colburn, T. (1998). Information modelling aspects of software development. Minds and Machines, 8(3), 375–393.

  • Colburn, T. (1999). Software, abstraction and ontology. The Monist, 82(1), 3–19.

  • Colburn, T., & Shute, G. (2007). Abstraction in computer science. Minds and Machines, 17(2), 169–184.

  • Davies, P. S. (2000a). Malfunctions. Biology and Philosophy, 15(1), 19–38.

  • Davies, P. S. (2000b). The nature of natural norms: Why selected functions are systemic capacity functions. Noûs, 34(1), 85–107.

  • Fetzer, J. (1999). The role of models in computer science. The Monist, 82, 20–36.

  • Floridi, L. (2011). The philosophy of information. Oxford: Oxford University Press.

  • Franssen, M. (2006). The normativity of artefacts. Studies in History and Philosophy of Science, 37, 42–57.

  • Fresco, N., & Primiero, G. (2013). Miscomputation. Philosophy and Technology, 26, 253–272. doi:10.1007/s13347-013-0112-0.

  • Gotterbarn, D. (1998). The uniqueness of software errors and their impact on global policy. Science and Engineering Ethics, 4(3), 351–356.

  • Gruner, S. (2011). Problems for a philosophy of Software Engineering. Minds and Machines, 21(2), 275–299.

  • Hansson, S. O. (2006). Defining technical function. Studies in History and Philosophy of Science, 37(1), 19–22.

  • Hodges, W. (1993). The meaning of specifications II: Set-theoretic specification. In M. Droste & Y. Gurevich (Eds.), Semantics of programming languages and model theory (pp. 43–68). Yverdon: Gordon and Breach.

  • Hodges, W. (1995). The meaning of specifications I: Initial models. Theoretical Computer Science, 152, 67–89.

  • Houkes, W., & Vermaas, P. E. (2010). Technical functions: On the use and design of artefacts. Dordrecht: Springer.

  • Hughes, J. (2009). An artifact is to use: An introduction to instrumental functions. Synthese, 168(1), 179–199.

  • Irmak, N. (2012). Software is an abstract artifact. Grazer Philosophische Studien, 86(1), 55–72.

  • Jespersen, B., & Carrara, M. (2011). Two conceptions of technical malfunction. Theoria, 77(2), 117–138.

  • Kirchner, H., & Mosses, P. (2001). Algebraic specifications, higher-order types and set-theoretic models. Journal of Logic and Computation, 11, 453–481.

  • Millikan, R. G. (1989). In defense of proper functions. Philosophy of Science, 56(2), 288–302.

  • Neander, K. (1995). Misrepresenting & malfunctioning. Philosophical Studies, 79(2), 109–141.

  • Neander, K. (2004). Teleological theories of mental content. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (2012 Ed.). http://plato.stanford.edu/archives/spr2012/entries/content-teleological.

  • Northover, M., Kourie, D. G., Boake, A., Gruner, S., & Northover, A. (2008). Towards a philosophy of software development: 40 Years after the birth of software engineering. Journal for General Philosophy of Science, 39(1), 85–113.

  • Preston, B. (2000). The functions of things: A philosophical perspective on material culture. In P. M. Graves-Brown (Ed.), Matter, materiality and modern culture (pp. 22–49). London: Routledge.

  • Radder, H. (2009). Why technologies are inherently normative. In A. Meijers (Ed.), Philosophy of technology and engineering sciences. Handbook of the philosophy of science (Vol. 9, pp. 887–921). Amsterdam: North-Holland.

  • Schiaffonati, V., & Verdicchio, M. (2014). Computing and experiments: A methodological view on the debate on the scientific nature of computing. Philosophy and Technology. doi:10.1007/s13347-013-0126-7.

  • Suber, P. (1988). What is software. Journal of Speculative Philosophy, 2(2), 89–119.

  • Symons, J. (2008). Computational models of emergent properties. Minds and Machines, 18(4), 475–491.

  • Symons, J., & Boschetti, F. (2013). How computational models predict the behavior of complex systems. Foundations of Science, 18(4), 809–821.

  • Turner, R. (2005). The foundations of specification. Journal of Logic and Computation, 15, 623–662.

  • Turner, R. (2011). Specification. Minds and Machines, 21(2), 135–152.

  • Vincenti, W. G. (1990). What engineers know and how they know it: Analytical studies from aeronautical history (Johns Hopkins Studies in the History of Technology, New Series No. 11). Baltimore, MD: Johns Hopkins University Press.

  • Winsberg, E. (1999). Sanctioning models: The epistemology of simulation. Science in Context, 12(2), 275–292.

  • Wright, L. (1973). Functions. Philosophical Review, 82(2), 139–168.

Acknowledgments

This article was developed initially as a collaboration between Jesse Hughes (see especially Hughes (2009)) and Luciano Floridi. We are extremely grateful to Jesse for having allowed us to re-use his very valuable work. We would also like to acknowledge the constructive feedback of the anonymous referees, whose comments enabled us to improve the article significantly.

Author information

Correspondence to Luciano Floridi.

About this article

Cite this article

Floridi, L., Fresco, N. & Primiero, G. On malfunctioning software. Synthese 192, 1199–1220 (2015). https://doi.org/10.1007/s11229-014-0610-3
