
From Interface to Correspondence: Recovering Classical Representations in a Pragmatic Theory of Semantic Information

Published in Minds and Machines.

Abstract

One major fault line in foundational theories of cognition is between the so-called “representational” and “non-representational” theories. Is it possible to formulate an intermediate approach for a foundational theory of cognition by defining a conception of representation that may bridge the fault line? Such an account of representation, as well as an account of correspondence semantics, is offered here. The account extends previously developed agent-based pragmatic theories of semantic information, where meaning of an information state is defined by its interface role, to a theory that accommodates a notion of representation and correspondence semantics. It is argued that the account can be used to develop an intermediate approach to cognition, by showing that the major sources of tension between “representational” and “non-representational” theories may be eased.


Notes

  1. Here I am not assuming much about when, in the complexification of cognitive organization, representations should be expected. It could be as early as bacterial organization, or as late as human cognition. I take it for granted that human beings use representations; I suspect that single-cell organisms do not. I acknowledge that representational mechanisms do not scream out of the lower-level descriptions of the neuronal (or biochemical) organization of brains or cells. They are a theoretical abstraction. The contention is that they are an explanatory and architecturally useful one.

  2. For example Clark (1998), see also more recent discussion in Dale et al. (2009) and the articles in the special issue of Dale (2008).

  3. Again, some connectionists take a more complex position about this. See for example, Churchland (2012).

  4. The levels of description here are different from the levels of analysis that Marr (1982) introduced. However, there exists a connection which will not be explored here.

  5. Note that (1) is a programmatic assumption that is generally accepted by everyone who would consider an intermediate strategy desirable. Some philosophers, however, may accept (1) as an empirically correct but non-essential principle for studying cognition. To them I have nothing else to say here.

  6. The earliest theory of semantic information with a pragmatic dimension is due to MacKay (1969). A full-fledged PSI was developed by Nauta (1972) within the semiotic and cybernetic tradition, but in the following forty years there was little further development. More recently, a revival of PSI was attempted in Vakarelov (2010). Floridi has also endorsed a form of PSI, at first implicitly (Floridi and Taddeo 2007), and more explicitly of late (Floridi 2011c, forthcoming).

  7. It is possible to hold the view that notions such as meaning, content or representation presuppose mental capacities that require, or are equivalent to, the possession of language (Davidson 2001; Gauker 1994). Such positions often depend on theories of intentionality that demand self-referential thoughts of complex propositional form. There are many examples of such approaches. For a very extreme example see Baker (1981). Cognitive science has little use for such archaic theories of representation, so I will not take them seriously here. There are more sophisticated theories of semantics that also end up holding that semantization requires very sophisticated cognitive machinery that is coextensive with the possession of language, even if, strictly speaking, the semantics does not derive from the language. Floridi (2011b) holds a similar view, for example. One possibility is that there is a terminological disagreement about what we call semantics. Even if the disagreement is more substantive, space will not allow me to enter deeply into a debate with such a position here.

  8. Theories such as that of Nauta (1972) and Floridi (2011b) are more nuanced and contain elements of both.

  9. I cannot think of interesting ways of swapping the two roles, where aboutness relates to status and significance to content.

  10. Vakarelov (2010) does not explicitly use Neander’s distinction.

  11. For natural cognitive systems there is a further question about the source of the teleology. In Vakarelov (2011b) I resort to the idea, originally proposed in Maturana and Varela (1980), and developed in various places since, that autopoiesis, as an organizational system condition, is sufficient for natural teleology.

  12. Nauta modulates the input and output processes by other systems—receivers and emitters. The reception of information is also described as discriminable form or potential information (to be distinguished from actual information, which is always semantic). The idea is related to Ashby’s (1956) notion of transmission of variety.

  13. It is quite common to assume that semantics requires an act of meaning: a system is not a meaning-user if it is not a meaner. This idea is implicit in theories connecting meaning to intentionality, and in theories requiring interpretation. I regard this as a grave mistake, and one of the biggest obstacles to naturalizing semantics. A program that attempts to explain meaning with an act of meaning is doomed to fail, because it tries to explain something simpler with something more complicated that presupposes it.

  14. Although probably not formal tools yet. The complex organization of even simple cognitive systems is too great for the analytic machinery of modern dynamical systems theory, including exotic fields such as synergetics (Haken 1993, 2000). This is why an independent informational description is needed as an alternative discursive framework.

  15. It is defined by a two-level gradient of abstraction, in the terminology of LoA theory (Floridi and Sanders 2004).

  16. This example is intended only as a visualizable and intuitive (but cartoonish) illustration of the constructions. I am not implying that the constructions offer a realistic analysis of the example. The mechanisms involved in visual perception, conceptualization, etc., are considerably more complex than the machinery developed here can accommodate. Indeed, the machinery is intended to capture simple cognition. Unfortunately, simple cognitive systems do not provide intuitive illustrations of the constructions. In fact, one of the aims of the model developed here is to provide tools for the analysis of simple systems.

  17. It is not assumed that identification of such macro-structure is possible in practice through analytic means, except for toy examples. Nor should it be expected that such identification would be entirely observer-independent. It is only assumed that such macro-structure is possible to define. Formally, in the worst-case scenario, the macro-structure is identical to the dynamical micro-structure of states. In that case, however, nothing interesting can be said about the system.

  18. Note that it is always possible to define a macro-structure on N if we have a macro-structure on M and a micro-state (dynamical) function f : M → N. Macro-states are simply sets of micro-states; every such set is mapped by f to a set of micro-states of N. The important question is whether this macro-structure is interesting for the operation of the organism.
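     The construction in this note can be sketched in a few lines of code. This is only an illustrative toy (the finite sets, the particular function f, and the chosen macro-states are my own hypothetical examples, not from the article): macro-states are modeled as sets of micro-states, and f pushes each macro-state of M forward to a set of micro-states of N.

     ```python
     # Toy sketch: inducing a macro-structure on N from a macro-structure on M
     # via a micro-state function f : M -> N. All values here are illustrative.

     M = {0, 1, 2, 3}                      # micro-states of medium M
     N = {'a', 'b'}                        # micro-states of medium N
     f = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}  # micro-state (dynamical) function f : M -> N

     # A macro-structure on M: a collection of macro-states (sets of micro-states).
     macro_M = [frozenset({0, 1}), frozenset({2, 3})]

     # Every macro-state of M is mapped by f to a set of micro-states of N,
     # yielding an induced macro-structure on N.
     macro_N = [frozenset(f[m] for m in S) for S in macro_M]

     print(macro_N)  # [frozenset({'a'}), frozenset({'b'})]
     ```

     As the note stresses, the construction is always available; whether the induced macro-structure is interesting for the operation of the organism is a further, substantive question.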

  19. The word ‘triangulation’ has various uses in different fields. Here I adapt a term from psychology related to in-group communication, where individuals who do not talk to each other may communicate and coordinate through an intermediary.

  20. For simplicity, I do not discuss the cases where O may have other sources as well, especially S.

  21. Throughout, the word “virtual” is used in a technical sense. The suggested contrast is not one of ontological status, virtual versus real. The contrast concerns where the causal support lies, as in ‘computer-generated virtual reality’ versus ‘physical reality’—virtual as emulated, but not virtual as unreal.

  22. Again, this is only a fanciful example. I am not endorsing a language of thought hypothesis here.

  23. The more common way of saying this is that representations are ‘intentional’. Because of the various mentalistic connotations of ‘intentional’, I purposefully avoid using the word here.

  24. I will not discuss this issue any further, as I have not attempted to investigate and dismiss alternative accounts. Instead, I offer directly a positive model. Comparative analysis is also needed, but will be the subject of future work.

  25. Of course, this is not what the agent would say, had it had the further cognitive capacities to say things. The agent would think that it is representing the world.

  26. The model allows for there to be both an actual and a virtual coupling between two media, if there are two different triangulation monitoring media, one of which defines the link internally. Something like this happens when we describe a causal link between two systems as defining a representation relation. In the process, inadvertently, we create a virtual link between the systems. It is this virtual link in our description, not the causal link, that produces the representation.

  27. The discussion here is strictly about internal correspondence semantics for the agent. There may be different, non-correspondence internal semantics that are defined by different information media network structures. There may be internal interface semantics, for example. The machinery of IMNs offers, potentially, a wide array of possible semantic systems. This will be explored in future research.

  28. In order to maintain simplicity, I am being a bit sloppy here. Technically, representation, as defined above, requires triangulation monitoring. So, when we say that O is monitored by T, what actually happens is that there is another medium O′ that is coupled to O, and this coupling is monitored by T. O and O′ may be identical and may be coupled by an information processing operation (see section “Information Media Networks”). Making this idea precise would require a closer examination of the formalism of IMNs, which would distract from the aim of this article. Such ideas would be needed to make the definition of reflexivity more rigorous.

References

  • Ackoff, R. L. (1958). Towards a behavioral theory of communication. Management Science, 4(3), 218–234.

  • Ashby, W. (1956). An introduction to cybernetics. London: Chapman & Hall.

  • Baker, L. R. (1981). Why computers can’t act. American Philosophical Quarterly, 18(2), 157–163. http://www.jstor.org/stable/20013906.

  • Beer, R. (2000). Dynamical approaches to cognitive science. Trends in Cognitive Sciences, 4, 91–99.

  • Bogdan, R. J. (1988). Information and semantic cognition: An ontological account. Mind and Language, 3(2), 81–122.

  • Broadbent, D. E. (1958). Perception and communication (Vol. 2). London: Pergamon Press.

  • Brooks, R. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2(1), 14–23. doi:10.1109/JRA.1986.1087032.

  • Carnap, R., & Bar-Hillel, Y. (1952). An outline of a theory of semantic information. Technical Report 247, MIT.

  • Chemero, A. (2009). Radical embodied cognitive science. Cambridge, MA: MIT Press.

  • Churchland, P. (2012). Plato’s camera: How the physical brain captures a landscape of abstract universals. Cambridge, MA: MIT Press.

  • Clark, A. (1998). Being there: Putting brain, body, and world together again. Cambridge, MA: MIT Press.

  • Clark, A., & Toribio, J. (1994). Doing without representing. Synthese, 101(3), 401–431.

  • Dale, R. (2008). Introduction to the special issue on: Pluralism and the future of cognitive science. Journal of Experimental and Theoretical Artificial Intelligence, 20(3), 153.

  • Dale, R., Dietrich, E., & Chemero, A. (2009). Explanatory pluralism in cognitive science. Cognitive Science, 33(5), 739–742.

  • Davidson, D. (2001). Inquiries into truth and interpretation. (2nd ed.). Oxford: Clarendon Press.

  • Dretske, F. (1981). Knowledge and the flow of information. Cambridge, MA: MIT Press.

  • Eliasmith, C. (2009). Dynamics, control, and cognition. In P. Robbins & M. Aydede (Eds.), Cambridge handbook of situated cognition (pp. 134–154). Cambridge: Cambridge University Press.

  • Floridi, L. (2005). Semantic conceptions of information. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. http://plato.stanford.edu/entries/information-semantic/.

  • Floridi, L. (2011a). A defence of constructionism: Philosophy as conceptual engineering. Metaphilosophy, 42(3), 282–304.

  • Floridi, L. (2011b). The philosophy of information. Oxford: Oxford University Press.

  • Floridi, L. (2011c). Semantic information and the correctness theory of truth. Erkenntnis, 74(2), 147–175.

  • Floridi, L. (forthcoming). Perception and testimony as data providers. Logique et Analyse.

  • Floridi, L., & Sanders, J. (2004). Levellism and the method of abstraction. IEG Research Report IEG-RR-4. Oxford: Oxford University.

  • Floridi, L., & Taddeo, M. (2007). A praxical solution of the symbol grounding problem. Minds and Machines, 17(4), 369–389.

  • Fodor, J., & Pylyshyn, Z. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1–2), 3–71.

  • Froese, T. (2010). From cybernetics to second-order cybernetics: A comparative analysis of their central ideas. Constructivist Foundations, 5(2), 75–85.

  • Froese, T., & Di Paolo, E. A. (2011). The enactive approach: Theoretical sketches from cell to society. Pragmatics & Cognition, 19(1), 1–36.

  • Gauker, C. (1994). Thinking out loud: An essay on the relation between thought and language. Princeton: Princeton University Press.

  • Gibson, J. (1986). The ecological approach to visual perception. Hillsdale, NJ: Lawrence Erlbaum.

  • Glasersfeld, E. von (1974). Piaget and the radical constructivist epistemology. In C. D. Smock & E. von Glasersfeld (Eds.), Epistemology and education (pp. 1–24). Athens, GA: Follow Through Publications.

  • Glasersfeld, E. von (1996). Radical constructivism: A way of knowing and learning (Studies in Mathematics Education Series, Vol. 6). New York, NY: Routledge.

  • Greco, G.M., Paronitti, G., Turilli, M., & Floridi, L. (2005). How to do philosophy informationally. Lecture notes in computer science, vol 3782 (pp. 623–634).

  • Grice, H. (1957). Meaning. The Philosophical Review, 66(3), 377–388.

  • Haken, H. (1993). Advanced synergetics: Instability hierarchies of self-organizing systems and devices. (3rd ed.). Berlin: Springer.

  • Haken, H. (2000). Information and self-organization: A macroscopic approach to complex systems. (2nd ed.). Berlin: Springer.

  • Harman, G. (1982). Conceptual role semantics. Notre Dame Journal of Formal Logic, 23(2), 242–256.

  • Heylighen, F., & Joslyn, C. (2001). Cybernetics and second order cybernetics. Encyclopedia of Physical Science & Technology, 4, 155–170.

  • Heylighen, F., Joslyn, C., & Turchin, V. (1991). A short introduction to the principia cybernetica project. Journal of Ideas, 2(1), 26–29.

  • Kelso, J. A. S. (1995). Dynamic patterns: The self-organization of brain and behavior. Cambridge, MA: MIT Press.

  • MacKay, D. M. (1969). Information, mechanisms and meaning. Cambridge, MA: MIT Press.

  • Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. New York: W. H. Freeman and Company.

  • Maturana, H. (1978). Biology of language: The epistemology of reality. In Psychology and biology of language and thought (pp. 27–63). New York: Academic Press.

  • Maturana, H. (2002). Autopoiesis, structural coupling and cognition: A history of these and other notions in the biology of cognition. Cybernetics and Human Knowing, 3(4), 5–34.

  • Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition: The realization of the living. Boston Studies in the Philosophy of Science. Boston: D. Reidel Publishing Company.

  • Morris, C. (1972). Writings on the general theory of signs. The Hague: Mouton.

  • Nauta, D. (1972). The meaning of information. The Hague: Mouton.

  • Neander, K. (2012). Teleological theories of mental content. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2012 ed.).

  • Newell, A., & Simon, H. (1981). Computer science as empirical inquiry. In J. Haugeland (Ed.), Mind design II. Cambridge, MA: MIT Press.

  • Powers, W. (1973). Behavior: The control of perception. New York, NY: Hawthorne.

  • Pylyshyn, Z. W. (1986). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: MIT Press.

  • Riegler, A. (2002). When is a cognitive system embodied? Cognitive Systems Research, 3(3), 339–348.

  • Rumelhart, D. E. (1989). The architecture of mind: A connectionist approach. In M. I. Posner (Ed.), Foundations of cognitive science. Cambridge, MA: MIT Press.

  • Stewart, J., Gapenne, O., & Di Paolo, E. (2011). Enaction: Toward a new paradigm for cognitive science. Cambridge, MA: MIT Press.

  • Thelen, E., & Smith, L. (1994). A dynamic systems approach to the development of cognition and action. Cambridge, MA: MIT Press.

  • Vakarelov, O. (2010). Pre-cognitive semantic information. Knowledge, Technology & Policy, 23(2), 193–226.

  • Vakarelov, O. (2011a). The cognitive agent: Overcoming informational limits. Adaptive Behavior, 19(2), 83–100.

  • Vakarelov, O. (2011b). General situated cognition. PhD thesis, The University of Arizona. http://hdl.handle.net/10150/202751.

  • Vakarelov, O. (2012). The information medium. Philosophy and Technology, 25(1), 47–65.

  • van Gelder, T. (1998). The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences, 21, 615–665.

  • Varela, F. J., Thompson, E. T., & Rosch, E. (1992). The embodied mind. Cambridge, MA: MIT Press.

  • Von Foerster, H. (2002). Understanding understanding: Essays on cybernetics and cognition. New York, NY: Springer.

  • von Uexküll, J. (1909). Umwelt und Innenwelt der Tiere. Berlin: Springer.

  • Weber, A., & Varela, F. (2002). Life after Kant: Natural purposes and the autopoietic foundations of biological individuality. Phenomenology and the Cognitive Sciences, 1(2), 97–125.

  • Wittgenstein, L. (1953). Philosophical investigations. Oxford: Blackwell.

Acknowledgments

This research was supported by a New Faculty Fellows award from the American Council of Learned Societies, funded by the Andrew W. Mellon Foundation. I would like to thank the participants of the Fourth Workshop on the Philosophy of Information, University of Hertfordshire, UK. I would also like to thank Fred Dretske, Karen Neander, Walter Sinnott-Armstrong, and Alexander Rosenberg for insightful discussion of aspects of this work, as well as the participants in a Duke Philosophy Colloquium, where parts of this work were presented. Finally, I would like to thank several anonymous referees for their insightful comments.

Author information

Corresponding author

Correspondence to Orlin Vakarelov.

About this article

Cite this article

Vakarelov, O. From Interface to Correspondence: Recovering Classical Representations in a Pragmatic Theory of Semantic Information. Minds & Machines 24, 327–351 (2014). https://doi.org/10.1007/s11023-013-9318-2
