A Tale of Two Animats: What Does It Take to Have Goals?

Part of the book series: The Frontiers Collection ((FRONTCOLL))

Abstract

What does it take for a system, biological or not, to have goals? Here, this question is approached in the context of in silico artificial evolution. By examining the informational and causal properties of artificial organisms (“animats”) controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning.


Notes

  1. Furthermore, representations of individual environmental features are typically distributed across many elements [6], and thus do not coincide with the Markov Brain’s elementary (micro) logic components.

  2. Note that this would hold even if we could evaluate the correlation between internal and external variables in an observer-independent manner, except that the correlations might then not even be meaningful to the investigator.

  3. If M did not constrain its inputs, its state would merely be a source of noise entering the system, not causal information.

  4. Sets of elements can constrain their joint inputs and outputs in a way that is irreducible to the constraints of their constituent elements taken individually [13]. The irreducible cause-effect information of a set of elements can be quantified similarly to Eqs. 2.2 and 2.3, by partitioning the set and measuring the distance between \(p(z_{t \pm 1} \mid m_t)\) and the distributions of the partitioned set.
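As an illustration only (not the chapter’s own formalism), the following sketch builds the cause repertoire of a hypothetical two-element toy system and compares it against one example partition, using the L1 distance as a stand-in for the measure in Eqs. 2.2 and 2.3; the actual irreducibility quantification minimizes over all possible partitions.

```python
import numpy as np
from itertools import product

# Hypothetical 2-element toy system, for illustration only:
# element A computes XOR of both past states, element B computes OR.
gates = (lambda a, b: a ^ b,  # A
         lambda a, b: a | b)  # B

pasts = list(product([0, 1], repeat=2))  # (0,0), (0,1), (1,0), (1,1)

def cause_rep(mech_idx, mech_state, purview_idx):
    """p(past of purview | current state of mechanism), with past states
    perturbed uniformly and non-purview past variables marginalized."""
    purview_states = list(product([0, 1], repeat=len(purview_idx)))
    probs = []
    for ps in purview_states:
        like, count = 0.0, 0
        for full in pasts:
            if all(full[i] == v for i, v in zip(purview_idx, ps)):
                ok = all(gates[m](*full) == s
                         for m, s in zip(mech_idx, mech_state))
                like += 1.0 if ok else 0.0
                count += 1
        probs.append(like / count)
    probs = np.array(probs)
    return probs / probs.sum()

# Whole mechanism {A, B} in state (1, 1) over the joint past of both inputs.
whole = cause_rep((0, 1), (1, 1), (0, 1))

# One example partition: {A over past a} x {B over past b}.
partitioned = np.outer(cause_rep((0,), (1,), (0,)),
                       cause_rep((1,), (1,), (1,))).ravel()

# L1 distance between the whole and the partitioned repertoire; a nonzero
# value means this partition loses part of the joint constraint.
print(0.5 * np.abs(whole - partitioned).sum())  # 0.5 for this toy system
```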

  5. In contrast to the uniform perturbed distribution, the stationary observed distribution of system Z entails correlations due to the system’s network structure, which may occlude or exaggerate the causal constraints of the mechanism itself.
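A minimal sketch of this distinction for a hypothetical single binary element: the observed (stationary) distribution is shaped by the element’s transition probabilities, whereas the perturbed distribution used for causal analysis is uniform by construction.

```python
import numpy as np

# Hypothetical binary element (illustration only):
# T[i, j] = p(next state = j | past state = i).
T = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Observed distribution: the stationary distribution of the dynamics,
# i.e. the left eigenvector of T with eigenvalue 1.
vals, vecs = np.linalg.eig(T.T)
stationary = np.real(vecs[:, np.argmax(np.real(vals))])
stationary /= stationary.sum()

# Perturbed distribution used for causal analysis: all past states are
# imposed with equal probability, independent of the dynamics.
perturbed = np.full(2, 0.5)

print(stationary)  # approx. [0.833, 0.167] -- shaped by the dynamics
print(perturbed)   # [0.5, 0.5]             -- independent of the dynamics
```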

  6. Take a neuron that activates, for example, every time a picture of the actress Jennifer Aniston is shown [22]. All it receives as input are quasi-binary electrical signals from other neurons. The meaning “Jennifer Aniston” is not in the message to this neuron, or any other neuron.

  7. For example, an AND logic gate receiving two inputs is what it is because it switches ON if and only if both inputs were ON. An AND gate in state ON thus constrains the past states of its inputs to be ON.
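A minimal sketch of this constraint (assuming, as in the causal analysis above, a uniform perturbation of the gate’s past inputs): observing the gate ON, only past states compatible with AND = 1 retain probability.

```python
from itertools import product

# The AND gate's two past inputs, perturbed uniformly: each of the four
# possible past states is equally likely a priori.
pasts = list(product([0, 1], repeat=2))
prior = 1 / len(pasts)

# Keep only past states compatible with the gate being ON (AND = 1)
# and renormalize (Bayes over the uniform perturbation).
likelihood = {s: 1.0 if (s[0] & s[1]) == 1 else 0.0 for s in pasts}
norm = sum(likelihood[s] * prior for s in pasts)
cause_repertoire = {s: likelihood[s] * prior / norm for s in pasts}

print(cause_repertoire)  # only (1, 1) retains probability 1.0
```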

  8. This notion of causal autonomy applies to deterministic and probabilistic systems, to the extent that their elements constrain each other over and above other background inputs, e.g., from the sensors.

References

  1. Schrödinger, E.: What is Life? With Mind and Matter and Autobiographical Sketches. Cambridge University Press (1992)

  2. Still, S., Sivak, D.A., Bell, A.J., Crooks, G.E.: Thermodynamics of Prediction. Phys. Rev. Lett. 109, 120604 (2012)

  3. England, J.L.: Statistical physics of self-replication. J. Chem. Phys. 139, 121923 (2013)

  4. Walker, S.I., Davies, P.C.W.: The algorithmic origins of life. J. R. Soc. Interface 10, 20120869 (2013)

  5. Albantakis, L., Hintze, A., Koch, C., Adami, C., Tononi, G.: Evolution of integrated causal structures in animats exposed to environments of increasing complexity. PLoS Comput. Biol. 10, e1003966 (2014)

  6. Marstaller, L., Hintze, A., Adami, C.: The evolution of representation in simple cognitive networks. Neural Comput. 25, 2079–2107 (2013)

  7. Albantakis, L., Tononi, G.: The intrinsic cause-effect power of discrete dynamical systems—from elementary cellular automata to adapting animats. Entropy 17, 5472–5502 (2015)

  8. Online Animat animation. http://integratedinformationtheory.org/animats.html

  9. Quiroga, R.Q., Panzeri, S.: Extracting information from neuronal populations: information theory and decoding approaches. Nat. Rev. Neurosci. 10, 173–185 (2009)

  10. King, J.-R., Dehaene, S.: Characterizing the dynamics of mental representations: the temporal generalization method. Trends Cogn. Sci. 18, 203–210 (2014)

  11. Haynes, J.-D.: Decoding visual consciousness from human brain signals. Trends Cogn. Sci. 13, 194–202 (2009)

  12. Bateson, G.: Steps to an Ecology of Mind. University of Chicago Press (1972)

  13. Oizumi, M., Albantakis, L., Tononi, G.: From the phenomenology to the mechanisms of consciousness: integrated information theory 3.0. PLoS Comput. Biol. 10, e1003588 (2014)

  14. Pearl, J.: Causality: models, reasoning and inference. Cambridge University Press (2000)

  15. Ay, N., Polani, D.: Information Flows in Causal Networks. Adv. Complex Syst. 11, 17–41 (2008)

  16. Krakauer, D., Bertschinger, N., Olbrich, E., Ay, N., Flack, J.C.: The Information Theory of Individuality. The architecture of individuality (2014)

  17. Marshall, W., Albantakis, L., Tononi, G.: Black-boxing and cause-effect power (2016). arXiv: 1608.03461

  18. Marshall, W., Kim, H., Walker, S.I., Tononi, G., Albantakis, L.: How causal analysis can reveal autonomy in biological systems (2017). arXiv: 1708.07880

  19. Tononi, G., Boly, M., Massimini, M., Koch, C.: Integrated information theory: from consciousness to its physical substrate. Nat. Rev. Neurosci. 17, 450–461 (2016)

  20. Albantakis, L., Tononi, G.: Fitness and neural complexity of animats exposed to environmental change. BMC Neurosci. 16, P262 (2015)

  21. Tononi, G.: Integrated information theory. Scholarpedia 10, 4164 (2015)

  22. Quiroga, R.Q., Reddy, L., Kreiman, G., Koch, C., Fried, I.: Invariant visual representation by single neurons in the human brain. Nature 435, 1102–1107 (2005)

Acknowledgements

I thank Giulio Tononi for his continuing support and comments on this essay, and William Marshall, Graham Findlay, and Gabriel Heck for reading this essay and providing helpful comments. L.A. receives funding from the Templeton World Charities Foundation (Grant#TWCF0196).

Correspondence to Larissa Albantakis.

Editor information

Editors and Affiliations

Rights and permissions

Reprints and permissions

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter

Cite this chapter

Albantakis, L. (2018). A Tale of Two Animats: What Does It Take to Have Goals? In: Aguirre, A., Foster, B., Merali, Z. (eds) Wandering Towards a Goal. The Frontiers Collection. Springer, Cham. https://doi.org/10.1007/978-3-319-75726-1_2
