The Challenge of Anticipation
Introduction: Anticipation in Natural and Artificial Cognition
What will the artificial cognitive systems of the future look like? When asked to imagine robots or intelligent software agents, several features come to mind: the capability to adapt to their environments and satisfy their goals with only limited human intervention, to plan sequences of actions for realizing long-term objectives, to act collectively in pursuit of complex objectives, to interact and cooperate with us, with or without natural language, and to make decisions (even on our behalf).
Currently, these capabilities are far beyond the reach of robots and other artificial systems. In the coming years, a huge effort will be required to scale up the capabilities of the artificial systems we are able to build today. One way to overcome these limitations is to take inspiration from the functioning of living organisms. A large body of evidence, which we review in this chapter, indicates that natural cognitive systems are not merely reactive but essentially anticipatory systems. We do not think this is a mere coincidence. On the contrary, we claim that anticipation is a crucial and foundational phenomenon in natural cognition. Individual behavior is guided by anticipatory mechanisms that serve behavioral control, perceptual processing, goal-directed behavior, and learning. Effective social behavior, too, relies on anticipating the behavior of other agents. We argue that anticipation is a key ingredient for the design of the autonomous, artificial cognitive agents of the future: only cognitive systems equipped with anticipatory mechanisms can be credible, adaptive, and successful in interacting with the environment, with other autonomous systems, and with humans. This is the challenge we anticipate for the future of cognitive systems research: the passage from reactive to anticipatory embodied cognitive systems.