## Abstract

The concept of representation has been a key element in the scientific study of mental processes ever since such studies commenced. However, usage of the term has been all too liberal—if one adheres to common use, it remains unclear whether there are any examples of physical systems that cannot be construed in terms of representation. The problem is considered afresh, taking as the starting point the notion of activity spaces—spaces of spatiotemporal events produced by dynamical systems. It is argued that representation can be analyzed in terms of the geometrical and topological properties of such spaces. Several attributes and processes associated with conceptual domains, such as logical structure, generalization and learning, are considered and given analogues in structural facets of activity spaces, as are misrepresentation and states of arousal. Based on this analysis, representational systems are defined, as is a key concept associated with such systems, the notion of representational capacity. According to the proposed theory, rather than being an all-or-none phenomenon, representation is in fact a matter of degree—that is, it can be associated with measurable quantities, as behooves a putative naturalistic construct.


## Notes

Unlike the neurosciences, in which the terms “code” and “representation” seem to be used almost interchangeably (Bickhard 1993).

By this, I refer not only to conceptual information, but to perceptual information as well, as according to the account presented here the representation of both types of information shares many formal aspects.

The fact that top-down information reflecting past experience, expectations and whatnot can profoundly alter the resulting percept and help overcome partial or degraded information notwithstanding.

By readout mechanism I refer to the ability of a system to register its internal states, or rather take measurements of those. I am not, however, contending that readout is strictly speaking part of representation itself (which would lead to an infinite regression), but rather that information embedded in a physical state or process is of little avail if the system at hand cannot act upon it. A clear example would be the rings inside the trunk of a tree—while they hold information about the tree's approximate age in years, it can hardly be argued that the tree has access to this information, that is, can act upon it.

To be more exact, usually the idea is to construct a monomorphism from the base domain to the automorphism group of some object (such as an algebraic structure). Such a monomorphism is an isomorphism onto its image.

Which is to concede little; such a project would pretty much amount to hubris given the resources at our disposal at this time.

To some extent this has been the actual practice, as physiological data has had a major influence on psychology (and of course philosophy of mind), and psychophysics is used to guide and constrain neuronal measurements and modeling. However, the cohesion of such a program is obviously compromised without a unifying explicit framework.

Following Shepard (1987) similarity and difference are likely to be exponential functions of distance.
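Shepard's (1987) law can be sketched as follows; this is an illustrative approximation only, with the decay rate `k` a hypothetical free parameter rather than anything fixed by the text.

```python
import numpy as np

# Sketch of Shepard's (1987) universal law of generalization: perceived
# similarity decays exponentially with distance in psychological space.
# The decay rate k is a hypothetical free parameter.
def similarity(x, y, k=1.0):
    d = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    return np.exp(-k * d)

# Identical points are maximally similar; similarity falls off with distance.
print(similarity([0.0, 0.0], [0.0, 0.0]))  # 1.0
print(similarity([0.0, 0.0], [3.0, 4.0]))  # exp(-5) ~ 0.0067
```

On this reading, "difference" grows with distance while similarity shrinks exponentially, which is why nearby points in an activity space generalize to one another while distant points do not.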

Moreover, the question of how similar two objects are has as many senses as the compared objects have attributes. Thus, the low level approximation of trajectories will critically differ according to sense.

To appreciate this, note that an instance of activity devoid of structure cannot be said to represent anything, as accessing the information it presumably carries implies that the system could act upon this information. For this to be true the system would need to create a second such representation, which could be subjected to manipulation/computation.

A compelling example would be a convex Euclidean set—regardless of its dimension any partitioning of this set will be arbitrary (and unlearnable). Thus, it can be said to represent very little if at all.

And only the ring structure—as improper scale can lead to the detection of spurious structure such as in 1C.

Given a function \( F: M \to N \), the level set for \( c \in F(M) \) is the set of all points \( x \in M \) for which \( F(x) = c \). Note, by definition, points on this manifold possess the characteristic values associated with activity within such a distinct state of arousal, and thus will possess the degree of complexity typical of a state.
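The level-set definition can be made concrete with a minimal sketch; the particular function \( F(x, y) = x^2 + y^2 \) and the tolerance are my own illustrative choices, not from the text.

```python
import numpy as np

# Illustrative example: for F(x, y) = x**2 + y**2 (so F: R^2 -> R),
# the level set for c = 1 is the unit circle {(x, y) : F(x, y) = 1}.
def F(p):
    x, y = p
    return x**2 + y**2

def on_level_set(p, c, tol=1e-9):
    # Numerically, membership in the level set is tested up to a tolerance.
    return abs(F(p) - c) <= tol

print(on_level_set((1.0, 0.0), 1.0))                  # True
print(on_level_set((np.cos(0.3), np.sin(0.3)), 1.0))  # True: on the unit circle
print(on_level_set((2.0, 0.0), 1.0))                  # False
```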

By this I do not mean necessary and sufficient conditions, but rather construe "aspect" in the metric sense, as will be elaborated below, which is more in accord with Wittgenstein's (1953/2001) notion of family resemblance between members of a category.

The simplest example would be a line whose end points are glued together—the result would be a topological ring (if this were done by bending, the end result would be a circle; bending, however, does not figure in topology).

A set \( A \) is open in \( X/\sim \) iff the union of the equivalence classes in \( A \) is open in \( X \). Recall that ultimately we are concerned with spaces ordered by the metric structure of primal similarity, and thus betweenness can be given exact sense (see Gardenfors 2000).
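The quotient-topology criterion can be checked mechanically on a small finite example; the particular four-point space, topology, and identification below are hypothetical, chosen only to illustrate the definition.

```python
from itertools import chain

# Hypothetical finite example of the quotient-topology criterion:
# a collection A of equivalence classes is open in X/~ iff the union
# of its classes is open in X.
X = {0, 1, 2, 3}
# A topology on X (closed under unions and intersections):
topology = {frozenset(), frozenset({0}), frozenset({0, 1}),
            frozenset({2, 3}), frozenset({0, 2, 3}),
            frozenset({0, 1, 2, 3})}

# Identify 2 ~ 3; the equivalence classes of the quotient are then
# {0}, {1}, and {2, 3}.
def is_open_in_quotient(A):
    """A is a collection (list) of equivalence classes."""
    union = frozenset(chain.from_iterable(A))
    return union in topology

# {2, 3} alone is open in the quotient (its union is open in X) ...
print(is_open_in_quotient([frozenset({2, 3})]))  # True
# ... but {1} alone is not ({1} is not open in X).
print(is_open_in_quotient([frozenset({1})]))     # False
```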

A similar scenario is the subject matter of shape theory (Kendall et al. 1999; Small 1996). The results of shape theory, however, do not figure in our discussion, as the normalizations used in the definition of shape are the "objective" ones (i.e. rotation, translation and scale), whereas human vision acts along somewhat divergent "subjective" lines (i.e. constancy mechanisms). Moreover, landmark representation is a dubious candidate for visual representation—according to it triangles are triplets while circles are of infinite dimension (or at least orders of magnitude more than 3), whereas it is hardly the case that, qua visual objects, there is such a prominent difference between the two. Both considerations suggest that more likely than not there might be a profound difference between the structure of perceptual shape spaces and that of normalized spaces of landmarks.

In which, due to finiteness, products of open sets are open.

The framework of K-sets (Kozma and Freeman 2003), which Freeman has championed for the last two decades, has many parallels to the ideas above. It is, however, beyond the scope of this paper to explore them.

To add further complication, it seems that biological representational systems are by no means unbiased—that is, it is in their constitution to misrepresent the information presented via the sensory organs. This so-called unrealistic construal of reality can be very beneficial, as it can drive organisms to advantageous behavior (or its *pursuit*), but can profoundly alter the realized conceptual and perceptual domains.

The complementary point would be that it is the higher order, or more global, features of the world distribution which are precarious to varying degrees—due to the inherent difficulty in picking up such detail.

The Gaussian curvature of 2D sections of a Riemannian manifold is given by the Riemannian curvature tensor: \( K(X,Y) = {\frac{Rm(X,Y,Y,X)}{{\left| X \right|^{2} \left| Y \right|^{2} - \left\langle {X,Y} \right\rangle^{2} }}} \), where *K* is the Gaussian curvature, *Rm* is the Riemannian curvature tensor, and *X*, *Y* are independent (tangent) vectors (Lee 1997). So an increase in curvature can be taken to mean that the sum of the absolute values of *K* over such pairs increases.
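The formula can be sanity-checked numerically on the round 2-sphere, whose sectional curvature is constant and equal to \( 1/r^2 \); the radius and test vectors below are arbitrary illustrative choices, and the curvature-tensor expression used is the standard one for the sphere (sign conventions as in Lee 1997).

```python
import numpy as np

# Numerical check of the sectional-curvature formula on the round 2-sphere
# of radius r, where K should come out as 1/r**2 for every plane.
# The sphere's curvature tensor in these conventions is
#   Rm(X, Y, Z, W) = (<X, W><Y, Z> - <X, Z><Y, W>) / r**2.
r = 2.0

def Rm(X, Y, Z, W):
    return (np.dot(X, W) * np.dot(Y, Z) - np.dot(X, Z) * np.dot(Y, W)) / r**2

def sectional_curvature(X, Y):
    # K(X, Y) = Rm(X, Y, Y, X) / (|X|^2 |Y|^2 - <X, Y>^2)
    denom = np.dot(X, X) * np.dot(Y, Y) - np.dot(X, Y)**2
    return Rm(X, Y, Y, X) / denom

# Two arbitrary independent tangent vectors (in a local frame):
X = np.array([1.0, 2.0])
Y = np.array([0.5, -1.0])
print(sectional_curvature(X, Y))  # 0.25, i.e. 1/r**2
```

Note that the result is independent of the choice of *X* and *Y*, as it must be for a constant-curvature space—the denominator exactly cancels the vector-dependent part of \( Rm(X,Y,Y,X) \).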

## References

Bassett, D. S., Meyer-Lindenberg, A., Achard, S., Duke, T., & Bullmore, E. (2006). Adaptive reconfiguration of fractal small-world human brain functional networks. *Proceedings of the National Academy of Sciences of the United States of America, 103*(51), 19518–19523. doi:10.1073/pnas.0606005103.

Bickhard, M. H. (1993). Representational content in humans and machines. *Journal of Experimental & Theoretical Artificial Intelligence, 5*, 285–333. doi:10.1080/09528139308953775.

Carlsson, G., & De-Silva, V. (2004). Topological estimation using witness complexes. In *Symposium on Point-Based Graphics*.

Chalmers, D. J. (1996). Does a rock implement every finite-state automaton? *Synthese, 108*, 309–333. doi:10.1007/BF00413692.

De-Silva, V., & Carlsson, G. (2004). Topological estimation using witness complexes. In *Symposium on Point-Based Graphics*, ETH, Zürich, Switzerland, June 2–4.

Destexhe, A., & Contreras, D. (2006). Neuronal computations with stochastic network states. *Science, 314*(5796), 85–90. doi:10.1126/science.1127241.

Edelman, S. (1998). Representation is representation of similarities. *The Behavioral and Brain Sciences, 21*, 449–498.

Edelman, S. (1999). *Representation and recognition in vision*. Cambridge: Bradford Books, MIT Press.

Edelman, S. (2001). Neural spaces: A general framework for the understanding of cognition? *The Behavioral and Brain Sciences, 24*, 664–665. doi:10.1017/S0140525X01320083.

Edelman, S. (2008). On the nature of minds, or: Truth and consequences. *Journal of Experimental and Theoretical AI, 20*(3), 181–196.

Edelsbrunner, H., Letscher, D., & Zomorodian, A. (2002). Topological persistence and simplification. *Discrete & Computational Geometry, 28*(4), 511–533.

Fekete, T., Grinvald, A., Pitowsky, I., & Omer, D. B. (2009). The representational capacity of cortical tissue. *Journal of Computational Neuroscience, 27*, 211–227. doi:10.1007/s10827-009-0138-6.

Freeman, W. J. (2006). A cinematographic hypothesis of cortical dynamics in perception. *International Journal of Psychophysiology, 60*(2), 149–161. doi:10.1016/j.ijpsycho.2005.12.009.

Gardenfors, P. (2000). *Conceptual spaces*. Cambridge: Bradford Books, MIT Press.

Grinvald, A., & Hildesheim, R. (2004). VSDI: A new era in functional imaging of cortical dynamics. *Nature Reviews. Neuroscience, 5*(11), 874–885. doi:10.1038/nrn1536.

Grush, R. (2004). The emulation theory of representation: Motor control, imagery, and perception. *The Behavioral and Brain Sciences, 27*, 377–442.

Harnad, S. (1990). The symbol grounding problem. *Physica D. Nonlinear Phenomena, 42*, 335–346. doi:10.1016/0167-2789(90)90087-6.

Haselager, P., De Groot, A., & Van Rappard, H. (2003). Representationalism vs. anti-representationalism: A debate for the sake of appearance. *Philosophical Psychology, 16*(1), 6–23. doi:10.1080/0951508032000067761.

Hobson, A. J., Pace-Schott, E., & Stickgold, R. (2000). Dreaming and the brain: Toward a cognitive neuroscience of conscious states. *The Behavioral and Brain Sciences, 23*(6), 793–842, 904–1018, 1083–1121. doi:10.1017/S0140525X00003976.

Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. *Journal of Physiology, 160*, 106–154.

Hubel, D. H., & Wiesel, T. N. (1969). Anatomical demonstration of columns in the monkey striate cortex. *Nature, 221*, 747–750.

Hubel, D. H., & Wiesel, T. N. (1979). Brain mechanisms of vision. *Scientific American, 241*(3), 150–162.

Jancke, D., Chavane, F., Naaman, S., & Grinvald, A. (2004). Imaging cortical correlates of illusion in early visual cortex. *Nature, 428*(6981), 423–426. doi:10.1038/nature02396.

Kendall, D. G., Barden, D., Carne, T. K., & Le, H. (1999). *Shape and shape theory*. London: Wiley.

Kozma, R., & Freeman, W. J. (2003). Basic principles of the KIV model and its application to the navigation problem. *Journal of Integrative Neuroscience, 2*(1), 125–145. doi:10.1142/S0219635203000159.

Lee, J. M. (1997). *Riemannian manifolds: An introduction to curvature. Graduate texts in mathematics 176*. New York: Springer.

Lee, J. M. (2003). *Introduction to smooth manifolds. Graduate texts in mathematics 218*. New York: Springer.

Lee, T. S., & Mumford, D. (2003). Hierarchical Bayesian inference in the visual cortex. *Journal of the Optical Society of America, 20*(7), 1434–1448. doi:10.1364/JOSAA.20.001434.

Mazor, O., & Laurent, G. (2005). Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons. *Neuron, 48*(4), 661–673. doi:10.1016/j.neuron.2005.09.032.

Putnam, H. (1988). *Representation and reality*. Cambridge, MA: MIT Press.

Quine, W. V. O. (1951). Two dogmas of empiricism. *The Philosophical Review, 60*, 20–43. doi:10.2307/2181906.

Rabinowitch, I., & Segev, I. (2006). The endurance and selectivity of spatial patterns of long-term potentiation/depression in dendrites under homeostatic synaptic plasticity. *The Journal of Neuroscience, 26*(52), 13474–13484. doi:10.1523/JNEUROSCI.4333-06.2006.

Robins, V. (2000). Computational topology at multiple resolutions. PhD Thesis, Department of Applied Mathematics, University of Colorado, Boulder.

Robins, V. (2002). Computational topology for point data: Betti numbers of alpha-shapes. In K. Mecke & D. Stoyan (Eds.), *Morphology of condensed matter: Physics and geometry of spatially complex systems. Lecture notes in physics* (Vol. 600, pp. 261–275). Springer.

Sharma, J., Angelucci, A., & Sur, M. (2000). Induction of visual orientation modules in auditory cortex. *Nature, 404*(6780), 841–847. doi:10.1038/35009043.

Shepard, R. N. (1958). Stimulus and response generalization: Tests of a model relating generalization to distance in psychological space. *Journal of Experimental Psychology, 55*(6), 509–523. doi:10.1037/h0042354.

Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. *Science, 237*(4820), 1317–1323. doi:10.1126/science.3629243.

Small, C. G. (1996). *The statistical theory of shape*. New York: Springer.

Tononi, G. (2004). An information integration theory of consciousness. *BMC Neuroscience, 5*(1), 42. doi:10.1186/1471-2202-5-42.

Tononi, G., & Edelman, G. M. (1998). Consciousness and complexity. *Science, 282*(5395), 1846–1851. doi:10.1126/science.282.5395.1846.

von der Malsburg, C. (1981). *The correlation theory of brain function* (pp. 81–82). Gottingen: Max-Planck-Institute for Biophysical Chemistry, Internal Rep.

von Melchner, L., Pallas, S. L., & Sur, M. (2000). Visual behavior mediated by retinal projections directed to the auditory pathway. *Nature, 404*(6780), 871–876.

Wittgenstein, L. (1953/2001). *Philosophical investigations*. Oxford: Blackwell.

Zomorodian, A., & Carlsson, G. (2005). Computing persistent homology. *Discrete & Computational Geometry, 33*(2), 249–274.

## Acknowledgments

The author wishes to thank Neta Zach, Shimon Edelman, Steve Farkas and Uri Liron for their meticulous reading of earlier versions of this manuscript, and Itamar Pitowsky and Amiram Grinvald for their help and support. This work was supported by the Weizmann Institute of Science, Rehovot, Israel, and the Interdisciplinary Center for Neural Computation, The Hebrew University, Jerusalem.


## About this article

### Cite this article

Fekete, T. Representational Systems.
*Minds & Machines* **20**, 69–101 (2010). https://doi.org/10.1007/s11023-009-9166-2


### Keywords

- Representation
- Conceptual representation
- Representational capacity
- Computation and mind
- Computational neuroscience
- Isomorphism
- Similarity
- Misrepresentation
- States of consciousness
- Learning
- Geometry
- Topology
- Homology
- Persistent homology
- Curvature