
A Taxonomy of Errors for Information Systems

Minds and Machines

Abstract

We provide a full characterization of computational error states for information systems. The class of errors considered is general enough to cover human rational processes, logical reasoning, scientific progress, and data processing in some functional programming languages. The aim is to reach a complete taxonomy of error states by analysing the recovery and processing of data. We conclude by presenting machine-readable checking and resolution algorithms.


Notes

  1. The isomorphism originates in observations by Curry (Curry and Feys 1958, Curry 1934) and Howard (Howard 1980). For a systematic formal treatment of the issue, see (Sørensen and Urzyczyn 2006).
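As a minimal illustration of the Curry–Howard reading (ours, not the paper's): a type is a proposition and a program inhabiting that type is a proof of it. For instance, a function of type (A → B) → (B → C) → (A → C) witnesses the transitivity of implication, and the proof term is simply function composition:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")


# Under Curry-Howard, inhabiting the type
# (A -> B) -> (B -> C) -> (A -> C)
# proves that implication is transitive; the inhabitant is
# function composition.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))
```

For example, `compose(str, len)` inhabits (int → str) → (str → int) → (int → int); applied to 1234 it yields 4.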

  2. For a full treatment of the notion of informational semantics and of the operations it defines, see (Allo and Mares 2012).

  3. For the logic of becoming informed that applies at the instructional level of states, see (Primiero 2009, Primiero 2012).

  4. For the logic of being informed that applies at the level of goals, see (Floridi 2006).

  5. The two notions of semantic and functional information are complementary and non-exclusive. See respectively (Floridi 2009, Primiero 2012).

  6. In (Primiero 2012) the upgrade from functional information to knowledge is explained by interpreting information use in terms of a verification function to make data semantically qualified. Knowledge requires the network in which those contents are accessed to be no greater than the set of states where such information cannot be turned into misinformation (a localized consistency requirement). A content of semantic information becomes a knowledge content if it is accessible and usable from every other state of the same network without consistency being lost.

  7. For the relation here explored between error and negative knowledge, see also (Allchin 2000, 2001).

  8. See (Mayo 1996, 2010; Peirce 1878).

  9. The pair \({\langle {\mathcal{P}}, {\mathcal{G}} \rangle}\) by definition corresponds to a triple \({\langle \phi, {\mathcal{S}}_{1\rightarrow i}, \alpha \rangle}\), where \(\phi\) contains the set of required strategies and \({{\mathcal{S}}_{1\rightarrow i}}\) the set of states/processes the system goes through from 1 to \(i\) to reach the goal \(\alpha = (A\ \mathrm{valid})\) at state \(S_i\).

  10. This notion lies at the very basis of the recent philosophical analysis of logical and technical malfunction for engineering and semantic systems. See e.g. (Baker 2009; Franssen 2008; Jespersen 2012; Jespersen and Carrara 2011).

  11. Discussion with B.G. Sundholm has clarified the relation with paradigm changes. Notice, however, that we depart here from the terminology used in (Sundholm 2012), where mistakes are explained as simple acts gone wrong. This is due to our more general use of the term ‘error’, and it also agrees with a similar use of the term ‘mistake’ from literature in psychology, see (Reason 1990).

  12. A counterpart of this case in simple propositional terms is an ‘incorrectly justified, false claim’.

  13. The content A in this case is propositionally treated as a ‘faulty justified true claim’.

  14. A basic taxonomy for human reasoning given in (Reason 1990) categorizes errors primarily according to a threefold structure of conceptual, behavioural, and contextual levels. Certain sorts of behavioural errors will be excluded from the present taxonomy, as they do not fall under the level of abstraction we are considering; nonetheless, we aim to keep our taxonomy as general as possible, and we claim that our notion of procedural error levels for information systems includes various cases of epistemic errors common in the behavioural family for human reasoning. Typical cases of such errors that we will not consider are those induced by attention problems, memory problems, voluntary and involuntary deceptions, and the like. Some further reductions might still be possible, for example of memory problems in terms of the family of storage errors introduced later.

  15. This classification agrees with the one for errors in science from (Allchin 2000). In the following, we shall not consider the observational and discursive kinds that are included in that analysis.

  16. For a treatment of error-handling in software engineering see e.g. (Agarwal et al. 2009).

  17. Notice, however, that mutual or co-recursive definitions do not need to be circular.
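This point can be made concrete with a standard toy example (ours, not the paper's): the predicates below are defined in terms of each other, yet nothing is circular, because every call chain strictly decreases its argument towards a base case.

```python
# Mutually recursive definitions need not be circular: is_even and
# is_odd each call the other, but the argument strictly decreases,
# so every evaluation terminates at the n == 0 base case.

def is_even(n: int) -> bool:
    """n is even iff n == 0, or n - 1 is odd."""
    return True if n == 0 else is_odd(n - 1)


def is_odd(n: int) -> bool:
    """n is odd iff n != 0 and n - 1 is even."""
    return False if n == 0 else is_even(n - 1)
```

For example, `is_even(10)` returns True after ten well-founded mutual calls.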

  18. For the corresponding case of faulty algorithm execution see below the category of slips.

  19. Exceptions are largely used in knowledge representation problems by means of description logics, where default rules are used to state and infer relations that are true only in ‘normal cases’. See e.g. (Baader et al. 2003).
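The flavour of such default rules can be sketched as follows (an illustrative toy, not a description-logic reasoner; the class and property names are our own): a property holds in 'normal cases' unless the individual falls under an exceptional subclass, in which case the exception overrides the default.

```python
# Toy default reasoning: birds normally fly, but penguins are an
# exceptional subclass of birds that does not. Exceptions are
# consulted before defaults, so the more specific rule wins.

DEFAULTS = {"bird": {"can_fly": True}}        # the 'normal case'
EXCEPTIONS = {"penguin": {"can_fly": False}}  # overrides the default


def infer(individual_classes, prop):
    """Return the value of prop, letting exceptions override defaults."""
    for cls in individual_classes:
        if cls in EXCEPTIONS and prop in EXCEPTIONS[cls]:
            return EXCEPTIONS[cls][prop]
    for cls in individual_classes:
        if cls in DEFAULTS and prop in DEFAULTS[cls]:
            return DEFAULTS[cls][prop]
    return None  # nothing is known either way
```

Here `infer(["bird"], "can_fly")` yields True, while `infer(["penguin", "bird"], "can_fly")` yields False: the default is retracted for the abnormal case.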

  20. These conditions present a strict analogy to the self-correcting thesis in Peirce.

  21. coq.inria.fr

  22. This strategy is extracted from the typing of errors in functional programming; see e.g. (Michaelson 1989, ch. 5). Notice, however, that the analysis here is fully expanded in view of the error types generated by the previously given taxonomy.
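The functional-programming treatment of errors as typed values can be sketched in an Either-style discipline (our illustration, not the paper's algorithm): a computation returns a tagged result, so an error state is ordinary data that the caller must inspect rather than an untyped failure.

```python
# Either-style error typing: every computation returns ("ok", value)
# or ("err", error_kind). The tag makes the error state part of the
# result's type, so callers cannot ignore it silently.

def safe_div(x, y):
    """Division that reports its error state as a tagged value."""
    if y == 0:
        return ("err", "division_by_zero")
    return ("ok", x / y)


def then(result, f):
    """Chain a computation through the 'ok' branch only (Either-style bind)."""
    tag, payload = result
    return f(payload) if tag == "ok" else result
```

For example, `then(safe_div(6, 2), lambda v: safe_div(v, 0))` propagates the tagged error ("err", "division_by_zero") instead of raising, so the error state survives as checkable data.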

References

  • Agarwal, B. B., Gupta, M., & Tayal, S. P. (2009). Software engineering and testing: An introduction. Burlington, MA: Jones & Bartlett Learning.

  • Allchin, D. (2000). The epistemology of errors. In Philosophy of science association, Vancouver.

  • Allchin, D. (2001). Error types. Perspectives on Science, 9, 38–59.


  • Allo, P., & Mares, E. (2012). Informational semantics as a third alternative? Erkenntnis, 77(2), 167–185.

  • Baader, F., Calvanese, D., McGuinness, D., Nardi, D., & Patel-Schneider, P. (Eds.). (2003). The description logic handbook. Theory, implementation and applications. Cambridge: Cambridge University Press.


  • Baker, L. R. (2009). The metaphysics of malfunction. Techné: Research in Philosophy and Technology, 13(2), 82–92.

  • Beaver, D. (2001). Presupposition and assertion in dynamic semantics. Stanford: CSLI Publications.


  • Bonnay, D., & Égré, P. (2011). Knowing one’s limits—An analysis in centered dynamic epistemic logic. In P. Girard, O. Roy, & M. Marion (Eds.), Dynamic formal epistemology, Synthese Library (Vol. 351, pp. 103–126).

  • Curry, H. B., & Feys, R. (1958). Combinatory logic (Vol. I). Amsterdam: North-Holland. Second printing 1968.

  • Curry, H. B. (1934). Functionality in combinatory logic. Proceedings of the National Academy of Sciences USA, 20, 584–590.


  • Floridi, L. (2006). The logic of being informed. Logique & Analyse, 196, 433–460.


  • Floridi, L. (2009). Philosophical conceptions of information. In G. Sommaruga (Ed.), Formal theories of information, Lecture Notes in Computer Science (Vol. 5363, pp. 13–53). Berlin: Springer.

  • Franssen, M. (2008). Design, use, and the physical and intentional aspects of technical artifacts. In: A. Light, P. E. Vermaas, P. Kroes & S. A. Moore (Eds.), Philosophy and design: From engineering to architecture (pp. 21–35). Berlin: Springer.


  • Howard, W. (1980). The formulae-as-types notion of construction. In J. Seldin & J. Hindley (Eds.), To H. B. Curry: Essays on combinatory logic, lambda calculus and formalism (pp. 479–490). London: Academic Press.


  • Jespersen, B. (2012). A new logic of technical malfunction. Studia Logica. doi:10.1007/s11225-012-9397-8.

  • Jespersen, B., & Carrara, M. (2011). Two conceptions of technical malfunction. Theoria, 77, 117–138.


  • Mayo, D. G. (1996). Error and the growth of experimental knowledge. Chicago: Chicago University Press.


  • Mayo, D. G. (2010). Learning from error, severe testing, and the growth of theoretical knowledge. In D. Mayo & A. Spanos (Eds.), Error and inference. Cambridge: Cambridge University Press.

  • Michaelson, G. (1989). Functional programming through λ-calculus. New York: Dover.


  • Peirce, C. S. (1878). Illustrations of the logic of science VI: Deduction, induction, and hypothesis. Popular Science Monthly, 13.

  • Popper, K. R. (1963). Conjectures and refutations. London: Routledge & Kegan Paul.


  • Primiero, G. (2009). An epistemic logic for becoming informed. Synthese (KRA), 167(2), 363–389.


  • Primiero, G. (2012). Offline and online data: On upgrading functional information to knowledge. Philosophical Studies. doi:10.1007/s11098-012-9860-4

  • Reason, J. (1990). Human error. Cambridge: Cambridge University Press.


  • Sørensen, M. H., & Urzyczyn, P. (2006). Lectures on the Curry–Howard isomorphism. Studies in Logic and the Foundations of Mathematics (Vol. 149). Amsterdam: Elsevier.


  • Sundholm, B. G. (2012). Error. Topoi, 31(1), 87–92.


  • Turner, R. (2011). Specification. Minds & Machines, 21(2), 135–152.


  • Williamson, T. (1992). Inexact knowledge. Mind, 101(402), 217–241.


  • Williamson, T. (1994). Vagueness. London: Routledge.


  • Williamson, T. (2002). Knowledge and its limits. Oxford: Oxford University Press.


  • Woods, J. (2004). The death of argument: Fallacies in agent-based reasoning. Dordrecht: Kluwer Academic Publishers.



Acknowledgements

Drafts of this paper were discussed at the Fourth Workshop in the Philosophy of Information, University of Hertfordshire, and at the Conference on Judgement and Justification, University of Tampere. I wish to thank the participants for helpful discussions. Two anonymous referees offered criticisms and remarks that have helped to clarify various passages of this work. My personal thanks go to Patrick Allo for his comments and observations.

Author information

Correspondence to Giuseppe Primiero.


About this article

Cite this article

Primiero, G. A Taxonomy of Errors for Information Systems. Minds & Machines 24, 249–273 (2014). https://doi.org/10.1007/s11023-013-9307-5

