
Information Processing as an Account of Concrete Digital Computation

  • Target Article
  • Published in Philosophy & Technology

Abstract

It is common in cognitive science to equate computation (and in particular digital computation) with information processing. Yet it is hard to find a comprehensive, explicit account of concrete digital computation in information processing terms. An information processing account seems like a natural candidate for explaining digital computation. But when ‘information’ comes under scrutiny, this account becomes a less obvious candidate. Four interpretations of information are examined here as the basis for an information processing account of digital computation, namely Shannon information, algorithmic information, factual information and instructional information. I argue that any plausible account of concrete computation has to be capable of explaining at least the three key algorithmic notions of input, output and procedure. Whilst algorithmic information fares better than Shannon information, the most plausible candidate for an information processing account is instructional information.


Figs. 1–3 (not reproduced)


Notes

  1. Graham White suggested extending the use of the single-input single-output transducer paradigm. One way of going about it is to refer to computing agents instead (i.e., in terms of goals, goal-triggers and actions to achieve these goals). Supposedly, this paradigm would encompass conventional digital computers as well as neural nets. After all, neural nets are arguably the paradigmatic case of parallel distributed processing of information. But this requires significant conceptual groundwork to determine the extent to which computing agents are goal-driven rather than rule-driven. Besides, it is an entirely different matter whether all neural nets compute, and even if they do, whether they perform nontrivial digital computation. I discuss this last question elsewhere (Fresco 2010).

  2. The functionalist approach of ignoring the internal constitution of information and only concentrating on its processing instead is not so useful in our case. The conceptual analysis undertaken in this paper results in different IP accounts depending on what information is taken to be. It also has implications for the applicability of the resulting IP account to concrete computation.

  3. In a similar vein, Gualtiero Piccinini and Andrea Scarantino analyse three potential candidates for an IP account: SI, natural semantic information and non-natural semantic information (2011). They conclude that digital computation does not entail the processing of either SI or natural semantic information and it also need not be the processing of non-natural semantic information.

  4. The domain of each UTM is self-delimited, much like programming languages, which provide constructs that delimit the start and end of programs. A self-delimiting TM does not ‘know’ in advance how many input symbols suffice to execute the computation (Calude 2002: pp. 34–35).
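The self-delimiting idea can be illustrated with a toy prefix-free code. This is a generic sketch for illustration only, not Calude's construction; the unary length prefix is my own choice of encoding.

```python
def encode(program):
    """Self-delimiting encoding: the program's length is sent first in
    unary (n ones then a zero), followed by the program itself, so no
    codeword is a prefix of any other."""
    return "1" * len(program) + "0" + program

def decode(stream):
    """Read exactly one self-delimited program off the front of a
    stream; return it together with the untouched remainder."""
    n = stream.index("0")                 # unary length prefix ends here
    return stream[n + 1 : 2 * n + 1], stream[2 * n + 1 :]

# Two concatenated programs parse unambiguously, with no end-marker:
stream = encode("101") + encode("0")
p1, rest = decode(stream)
p2, _ = decode(rest)
print(p1, p2)   # 101 0
```

Because no codeword is a prefix of another, the machine knows exactly where each input ends without any external delimiter.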

  5. UTMs differ in implementation, so the informative content of a string is relative to the particular UTM used to calculate its AI complexity, K. Cristian Calude shows that for every two UTMs u1 and u2 there exists a constant c ∈ N (depending only on u1 and u2) such that for all strings x ∈ S, |Ku1(x) − Ku2(x)| ≤ c, where x is the input to the UTM and S is the set of all strings (2002: p. 38).
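Since K is uncomputable, it cannot be calculated directly, but compressed length gives a computable upper bound relative to a chosen compressor. The sketch below is my own analogy, not Calude's proof: two standard compressors stand in for two UTMs, and their estimates of the same string's complexity differ only by a machine-dependent amount.

```python
import bz2
import zlib

def k_upper_bound(data, compressor):
    """Compressed length: a computable upper bound on the
    (uncomputable) complexity K, relative to one 'machine'."""
    return len(compressor(data))

x = b"ab" * 1000                       # highly regular string: low complexity
k1 = k_upper_bound(x, zlib.compress)   # 'machine' u1
k2 = k_upper_bound(x, bz2.compress)    # 'machine' u2
# The two estimates differ by some machine-dependent amount, yet both
# sit far below len(x) == 2000, mirroring |Ku1(x) - Ku2(x)| <= c.
print(k1, k2, len(x))
```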

  6. This principle tacitly assumes the existence (even in the past) of some agent with a system of values relative to whom the data are (or were) meaningful.

  7. This counterintuitive consequence is known as the Bar-Hillel-Carnap paradox (Floridi 2011: p. 100).

  8. This is consistent with the possibilist’s thesis (in the metaphysics of modality) that the set of all actual things is only a subset of all the things that are (possible).

  9. To a first approximation, a miscomputation is a mistake in the computation process due to a hardware malfunction or a runtime error of the executed program. A runtime error is typically the result of mistakes made by the programmers or designers of the program, producing incorrect or unexpected behaviour at runtime. Less common are errors caused by compilers producing incorrect code (but even those can be attributed to human errors in the compiler program). Common examples of runtime errors include the program running out of available memory, attempting to divide by 0, accessing illegal memory locations (e.g., when attempting to read past the last cell of a data array), or dereferencing a NULL pointer, which does not point to a valid memory location.
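The runtime errors enumerated in this note can be reproduced harmlessly in a high-level language. The snippet below is illustrative only: Python surfaces these miscomputations as exceptions rather than crashes, and the `run` helper is hypothetical, not part of any of the systems discussed.

```python
data = [1, 2, 3]

def run(thunk):
    """Run a zero-argument callable, reporting the kinds of runtime
    error listed in the note instead of crashing."""
    try:
        return ("ok", thunk())
    except ZeroDivisionError:
        return ("error", "attempt to divide by 0")
    except IndexError:
        return ("error", "read past the last cell of the array")
    except AttributeError:
        return ("error", "dereferenced a null (None) reference")

print(run(lambda: 2 + 2))        # ('ok', 4)
print(run(lambda: 1 / 0))        # ('error', 'attempt to divide by 0')
print(run(lambda: data[3]))      # ('error', 'read past the last cell of the array')
print(run(lambda: None.value))   # ('error', 'dereferenced a null (None) reference')
```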

  10. This theorem states that there exists a self-delimiting UTM U, such that for every self-delimiting TM T, a constant c can be computed (depending only on U and T), satisfying the following property. If T(x) halts, then U(x′) = T(x), for some string x′ whose length is no longer than the length of the string x plus c (Calude 2009: p. 81).

  11. There is no finite generalised transducer that can simulate a transducer running some program. Yet, Calude et al. (2011: p. 5672) prove that the invariance theorem (informally saying that a UTM provides an optimal means of description up to an additive constant) also holds true for finite state complexity. Finite state complexity of a finite string x is defined in terms of a finite transducer T and a finite string s such that T on input s outputs x. It is defined relative to the number of states of transducers used for minimal encodings of arbitrary strings.

  12. One might question the reasoning behind the key requirements coming from the resulting IP account, rather than coming from actual computing systems. Once these key requirements are explicated, then various interpretations of information can be evaluated as a basis for an IP account. But that would be missing the point, for it is not at all clear what it takes for a physical system to compute. This is also the reason for the existence of many extensionally different accounts of computation (Fresco 2011). The IP account is only one of them, and it is commonly invoked in cognitive science.

  13. At the same time, this requirement makes it less obvious how discrete connectionist networks perform digital computation in the absence of explicit control units. This complication shall not be further considered here.

  14. This requirement is problematic for most neural networks, for they typically lack the flexibility enabled by long-term memory. That is particularly problematic for nets lacking any feedback loops.

  15. It does not follow though that some working OS threads (or even the OS in its entirety) cannot be loaded onto RAM once the OS has finished loading from the persistent memory.

  16. In conventional general-purpose computers, the OS is the core program required for any other program to run. But if neither the OS (as the main executed program) nor any other program (by implication) is running on the computer, then effectively no computation is taking place.

  17. Information processing may be construed in a variety of ways depending on the particular context of enquiry, including (but not limited to) the manipulation, acquisition, parsing, derivation, storage, comparison and analysis of information. However, it seems to me that these depend crucially on at least one of the aforementioned operations. Also, insofar as the processing of information is taken as a physical process, in accordance with the second law of thermodynamics, it always results in some change in free energy (Karnani et al. 2009). Thus, even when certain information is deleted from a computing system, it is not completely destroyed, for some energy dissipates from the system into its surroundings.

  18. Yet, it remains to be seen whether this new information stored in the database of a digital computing system is merely a copy of the (new) information that was created externally (e.g., by the human resources manager).

  19. There is an ongoing debate regarding information in deductive inferences. Some, including John Stuart Mill and the logical positivists, have argued that logical truths are tautologies, and so deductive reasoning does not add any new information. On this view, all valid deductive arguments simply beg the question. Others (notably, Jaakko Hintikka (1984)) have argued that deductive reasoning can indeed produce new nontrivial information.

  20. Generally, the amount of information in any two strings Si and Sj is not less than the sum of the information of Si and Sj, if the content of Si and the content of Sj are in some sense independent (or at least one does not contain the other). Still, there are clearly cases where INF(Si + Sj) < INF(Si) + INF(Sj). For a more detailed discussion of the “additivity” principle, see Carnap and Bar-Hillel (1952: pp. 12–13). Arguably, universal instantiation and modus ponens, for instance, as a means of inferring S4 from S1 and S3, also carry some positive information, since without the recipient knowing how to use them, she cannot infer S4.
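The (sub)additivity at issue can be made concrete with Carnap and Bar-Hillel's content measure, under which cont(s) is the fraction of state descriptions excluded by s. The two-proposition toy language below is my own illustration of that idea, not their formalism verbatim.

```python
from itertools import product

# Toy language with two atomic sentences p and q; its state
# descriptions (possible worlds) are all truth-value assignments.
worlds = list(product([True, False], repeat=2))

def cont(statement):
    """Carnap/Bar-Hillel content measure: the fraction of state
    descriptions that the statement excludes (rules out)."""
    return sum(1 for w in worlds if not statement(*w)) / len(worlds)

p = lambda p_, q_: p_
q = lambda p_, q_: q_
p_and_q = lambda p_, q_: p_ and q_
contradiction = lambda p_, q_: p_ and not p_

# cont(p) == cont(q) == 0.5, yet cont(p & q) == 0.75 < 0.5 + 0.5:
# the two contents overlap, so information is subadditive here.
# A contradiction excludes every world (cont == 1.0), the maximal
# informativeness that note 7 calls the Bar-Hillel-Carnap paradox.
print(cont(p), cont(q), cont(p_and_q), cont(contradiction))
```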

  21. Induction, abduction and nonmonotonic logic do not abide by the same principle, and their application does not guarantee the truth of any new information that they potentially produce. Both abductive reasoning and non-monotonic logic play an important role in artificial intelligence and should not be discounted, but they exceed the scope of this paper.

  22. The reader will have noticed that I have deliberately used “transform” here, rather than “process”. For the processing of information (but not its mere transformation) also implies the (possible) production of new information.

  23. This characterisation is an adaptation of Peter Corning’s teleonomic definition of control information in cybernetic and biological systems (2001: p. 1277).

  24. For a detailed analysis of the key criteria for evaluating the adequacy of accounts of computation, see Fresco 2008.

  25. Dealing with optimal programs is a feature of (at least conventional) AIT. But this by no means has any special bearing on AIT being an adequate candidate for an IP account of digital computation. Rather, the point is that AIT, unlike SIT, can adequately describe the behaviour of different programs.

  26. It may be argued, however, that increasing the reliability of the message transmission process instils some confidence in the receiver. But even if that were the case, any “new” information here would remain constant and would not increase further by sending each message, say, three times (instead of two).

  27. Strictly, by adding, say, parity bits to a message M1, the informational content of M1 plus the parity bits increases over the informational content of just M1. But unless those parity bits play an additional role beyond error detection (e.g., serving data security as well as data integrity), the underlying information content is still conveyed by M1.
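The parity-bit point can be sketched directly. This minimal even-parity example is my own illustration, not drawn from the paper: the appended bit lengthens the transmission and adds detectability, but the payload information is still carried entirely by the original message.

```python
def add_parity(payload):
    """Append an even-parity bit so the count of 1s becomes even."""
    return payload + str(payload.count("1") % 2)

def intact(message):
    """A single flipped bit makes the 1s-count odd, so it is detected."""
    return message.count("1") % 2 == 0

m1 = "1011"
sent = add_parity(m1)        # '10111': one bit longer than m1
assert intact(sent)
assert not intact("00111")   # first bit flipped in transit
# The extra bit changed the string, not the underlying content: the
# payload information is still conveyed by m1 alone.
```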

  28. Floridi illustrates this point by considering a page of a book written in some unknown language (2011: p. 85). We have all the data but no information, for we do not know their meaning. If we erased half the content of that page, we might say that we have halved the data as well. Suppose we keep erasing the content of that page until the page is blank. Yet, we are left with some data, since the presence of the blank page is still a datum as long as it is different from a nonblank page.

  29. Edmund Gettier (1963) has challenged Plato’s view of knowledge as Justified True Belief. He argued that truth, belief and justification are not sufficient conditions for knowledge. He showed that a true belief might be justified, but fail to be knowledge.

  30. The ace of hearts card, for instance, is represented as a data structure with properties such as a shape, a number etc. This data structure can be processed by the program and when appropriate, the processed data can be presented again in some form of human readable information as output.

  31. I owe this point to Karl-Christian Posch who suggested viewing these different levels of abstraction from an engineering perspective.

  32. Consequently, a special purpose TM would require some modification of the proposed analysis, if we chose not to interpret it as executing a program, per se.

  33. Many intermediate steps have been removed for simplicity. For example, the name of the procedure “InefficientMultiply” in Fig. 1 is not added to the call stack (as it should be) to allow its retrieval at a later stage.
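Fig. 1 itself is not reproduced here, but the elided call-stack bookkeeping can be sketched. The `call` helper and this version of InefficientMultiply are hypothetical reconstructions in the spirit of the note, not the figure's actual code.

```python
call_stack = []

def call(name, procedure, *args):
    """Push the procedure's name before it runs and pop it on return,
    mimicking the bookkeeping elided from the figure."""
    call_stack.append(name)
    try:
        return procedure(*args)
    finally:
        call_stack.pop()

def inefficient_multiply(a, b):
    """Multiplication by repeated addition."""
    total = 0
    for _ in range(b):
        total += a
    return total

result = call("InefficientMultiply", inefficient_multiply, 6, 7)
print(result, call_stack)   # the stack is empty again after the call
```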

References

  • Abramsky, S., Jagadeesan, R., & Malacaria, P. (2000). Full abstraction for PCF. Information and Computation, 163, 409–470.

  • Adriaans, P. (2008). Learning and the cooperative computational universe. In P. Adriaans & J. van Benthem (Eds.), Handbook of the philosophy of science, volume 8: philosophy of information (pp. 133–167). New York: Elsevier.

  • Agassi, J. (1988). Winter 1988 Daedalus. SIGArt Newsletter, 105, 15–22.

  • Agassi, J. (2003). Newell’s list. Commentary on Anderson, J. & Lebiere, C.: The Newell test for a theory of cognition. Behavioral and Brain Sciences, 26, 601–602.

  • Barwise, J., & Seligman, J. (1997). Information flow: the logic of distributed systems. Cambridge: Cambridge University Press.

  • Broderick, P. B. (2004). On communication and computation. Minds and Machines, 14, 1–19.

  • Calude, C. S. (1988). Theories of computational complexity. Amsterdam: Elsevier Science.

  • Calude, C. S. (2002). Information and randomness: an algorithmic perspective (2nd ed.). Berlin: Springer.

  • Calude, C. S. (2009). Information: the algorithmic paradigm. In G. Sommaruga (Ed.), Formal theories of information. New York: Springer.

  • Calude, C. S., Salomaa, K., & Roblot, T. K. (2011). Finite state complexity. Theoretical Computer Science, 412, 5668–5677.

  • Carnap, R., & Bar-Hillel, Y. (1952). An outline of a theory of semantic information. MIT Research Laboratory of Electronics, Technical Report No. 247. Cambridge, MA: MIT.

  • Chaitin, G. J. (2003). Algorithmic information theory (3rd printing). Cambridge: Cambridge University Press.

  • Chaitin, G. J. (2007). Thinking about Gödel & Turing: essays on complexity, 1970–2007. Singapore: World Scientific.

  • Cordeschi, R. (2004). Cybernetics. In L. Floridi (Ed.), The Blackwell guide to philosophy of computing and information (pp. 186–196). Oxford: Blackwell.

  • Corning, P. A. (2001). “Control information”: the missing element in Norbert Wiener’s cybernetic paradigm? Kybernetes, 30, 1272–1288.

  • Dennett, D. C. (1991). Consciousness explained. Boston, MA: Little, Brown.

  • Dershowitz, N., & Gurevich, Y. (2008). A natural axiomatization of computability and proof of Church’s thesis. The Bulletin of Symbolic Logic, 14, 299–350.

  • Downey, R. G., & Hirschfeldt, D. R. (2010). Algorithmic randomness and complexity. New York: Springer.

  • Dretske, F. I. (1981). Knowledge and the flow of information. Cambridge, MA: The MIT Press.

  • Dretske, F. I. (1993). Can intelligence be artificial? Philosophical Studies, 71, 201–216.

  • Dreyfus, H. L. (1979). What computers can't do: the limits of artificial intelligence (2nd ed.). New York: Harper & Row.

  • Dunn, M. (2008). Information in computer science. In P. Adriaans & J. van Benthem (Eds.), Handbook of the philosophy of science, volume 8: philosophy of information (pp. 581–608). New York: Elsevier.

  • Fetzer, J. H. (2004). Information: does it have to be true? Minds and Machines, 14, 223–229.

  • Feynman, R. P. (1996). The Feynman lectures on computation. Reading, MA: Addison-Wesley.

  • Floridi, L. (2005). Is semantic information meaningful data? Philosophy and Phenomenological Research, LXX, 351–370.

  • Floridi, L. (2008). Trends in the philosophy of information. In P. Adriaans & J. van Benthem (Eds.), Handbook of the philosophy of science, volume 8: philosophy of information (pp. 113–131). New York: Elsevier.

  • Floridi, L. (2009). Philosophical conceptions of information. In G. Sommaruga (Ed.), Formal theories of information (pp. 13–53). Berlin: Springer.

  • Floridi, L. (2011). The philosophy of information. Oxford: Oxford University Press.

  • Fresco, N. (2008). An analysis of the criteria for evaluating adequate theories of computation. Minds and Machines, 18, 379–401.

  • Fresco, N. (2010). A computational account of connectionist networks. Recent Patents on Computer Science, 3, 20–27.

  • Fresco, N. (2011). Concrete digital computation: what does it take for a physical system to compute? Journal of Logic, Language and Information, 20, 513–537.

  • Gettier, E. L. (1963). Is justified true belief knowledge? Analysis, 23, 121–123.

  • Gruenberger, F. (1976). Bug. In A. Ralston (Ed.), Encyclopedia of computer science (p. 189). New York: Van Nostrand Reinhold.

  • Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346.

  • Hintikka, J. (1984). Some varieties of information. Information Processing and Management, 20, 175–181.

  • Karnani, M., Pääkkönen, K., & Annila, A. (2009). The physical character of information. Proceedings of the Royal Society A, 465, 2155–2175. doi:10.1098/rspa.2009.0063.

  • Larsson, S., & Lüders, F. (2004). On the concept of information in industrial control systems. In Proceedings of the National Course in Philosophy of Computer Science (pp. 1–6).

  • Malacaria, P. (2007). Assessing security threat of looping constructs. In Proceedings of the 34th ACM Symposium on Principles of Programming Languages (pp. 225–235).

  • Penrose, R. (1989). The emperor’s new mind. Oxford: Oxford University Press.

  • Piccinini, G. (2007). Computing mechanisms. Philosophy of Science, 74, 501–526.

  • Piccinini, G., & Scarantino, A. (2011). Information processing, computation, and cognition. Journal of Biological Physics, 37, 1–38.

  • Ralston, A. (Ed.) (1976). Encyclopedia of computer science (1st ed.). New York: Petrocelli Books.

  • Scarantino, A., & Piccinini, G. (2010). Information without truth. Metaphilosophy, 41, 313–330.

  • Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423, 623–656.

  • Smith, B. C. (2002). The foundations of computing. In M. Scheutz (Ed.), Computationalism: new directions (pp. 23–58). Cambridge, MA: The MIT Press.

  • Soare, R. (2007). Computability and incomputability. In S. B. Cooper, B. Löwe, & A. Sorbi (Eds.), Proceedings of the third conference on Computability in Europe, Lecture Notes in Computer Science, 4497 (pp. 705–715). Berlin: Springer.

  • Sorensen, R. (2007). Can the dead speak? In S. Nuccetelli & G. Seay (Eds.), Themes from G. E. Moore: new essays in epistemology and ethics. New York: Oxford University Press.

  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX, 433–460.

  • Wang, H. (1974). From mathematics to philosophy. New York: Humanities Press.

  • White, G. (2011). Descartes among the robots: computer science and the inner/outer distinction. Minds and Machines, 21, 179–202.

  • Wiener, N. (1948). Cybernetics: or control and communication in the animal and the machine. Cambridge, MA: The MIT Press.

  • Wiener, N. (1966). God and Golem, Inc.: a comment on certain points where cybernetics impinges on religion. Cambridge, MA: The MIT Press.


Acknowledgements

Part of this research was done during a visiting fellowship at the IAS-STS in Graz, Austria in 2011. Thanks to Gualtiero Piccinini, Oron Shagrir and Matt Johnson for useful comments on earlier drafts of this paper. I have greatly benefited from discussions with Naftali Tishby and Karl Posch on the mathematical theory of information and with Cristian Calude on algorithmic information theory and for that I am grateful. I would also like to express my gratitude to both Graham White and Marty Wolf, who refereed the paper and agreed to drop their anonymity in the process. Their insightful comments helped reshape and improve this paper significantly. I am indebted to Phillip Staines for his detailed comments and ongoing support. Earlier versions of this paper were presented at the 2010 AAPNZ conference in Hamilton, NZ, the 2011 AISB convention in York, UK and the IAS-STS fellowship colloquium in Graz, Austria. All the people mentioned above contributed to the final draft of the paper, but I am responsible for any remaining mistakes. This paper is dedicated in loving memory of Moshe Bensal.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Nir Fresco.


About this article

Cite this article

Fresco, N. Information Processing as an Account of Concrete Digital Computation. Philos. Technol. 26, 31–60 (2013). https://doi.org/10.1007/s13347-011-0061-4
