Towards Representational Autonomy of Agents in Artificial Environments
Autonomy is a crucial property of an artificial agent. We identify the types of representational structures involved and the role they play in preserving an agent's autonomy. A framework of self-organised Peircean semiotic processes is introduced and then used to demonstrate the emergence of grounded representational structures in agents interacting with their environment.