
What should I do next? Using shared representations to solve interaction problems

  • Research Article
  • Published in: Experimental Brain Research

Abstract

Studies on how “the social mind” works reveal that cognitive agents engaged in joint actions actively estimate and influence one another’s cognitive variables and form shared representations with them. (How) do shared representations enhance coordination? In this paper, we provide a probabilistic model of joint action that emphasizes how shared representations help solve interaction problems. We focus on two aspects of the model. First, we discuss how shared representations permit coordination at the level of cognitive variables (beliefs, intentions, and actions) and determine a coherent unfolding of action execution and predictive processes in the brains of two agents. Second, we discuss the importance of signaling actions as part of a strategy for sharing representations and for actively guiding another’s actions toward the achievement of a joint goal. Furthermore, we present data from a human-computer experiment (the Tower Game) in which two agents (human and computer) have to build a tower of colored blocks together, but only the human knows the constellation of the tower to be built (e.g., red-blue-red-blue-\(\ldots\)). We report evidence that humans use signaling strategies that take another’s uncertainty into consideration, and that in turn our model is able to use humans’ actions as cues to “align” its representations and to select complementary actions.

Figures 1–10 are not included in this preview.

Notes

  1. The term signaling is more common in the animal literature, where it is recognized that animal signaling strategies evolved to change the behavior of another organism in such a way that the (evolved) response of the receiver is beneficial to the sender (Maynard-Smith and Harper 2003). This entails that signaling in the animal domain is not necessarily a deliberate choice, but can depend on an automatic mechanism. Stigmergy is another important animal mechanism for coordination, in which a trace left in the environment is a stimulus that can be used by the same or another agent to continue a task; it can be considered a self-organizing process rather than the product of explicit planning (Camazine et al. 2001). The term communication is broader and more common in the linguistic literature (although of course there are communication strategies that are not linguistic, such as the use of gesture); here, the emphasis is on deliberate strategies to pursue communicative intentions through speech acts such as promising something, informing somebody about something, or committing to something (Austin 1962; Grice 1975).

  2. More generally, optimally interacting agents should adopt a non-myopic strategy in which costs are calculated over the entire interaction, not just for the next action. However, this would easily lead to non-computable strategies.

  3. By convention, the observed variables are represented as shaded nodes in the network.

  4. This process is also known as filtering.

  5. Likelihood computation in this network can be performed exactly by the forward-backward algorithm or approximately by the abovementioned particle filters.
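For concreteness, the exact (forward) likelihood computation can be sketched for a discrete hidden Markov model. The two-state model and all parameters below are illustrative assumptions of ours, not the network used in the paper:

```python
import numpy as np

# Hypothetical two-state HMM; matrices are illustrative only.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])        # A[i, j] = P(x_t = j | x_{t-1} = i)
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # B[i, k] = P(z_t = k | x_t = i)
pi = np.array([0.5, 0.5])         # initial distribution P(x_1)

def forward_likelihood(obs):
    """Exact likelihood P(z_{1:T}) via the forward recursion."""
    alpha = pi * B[:, obs[0]]             # alpha_1(i) = pi(i) * P(z_1 | x_1 = i)
    for z in obs[1:]:
        alpha = (alpha @ A) * B[:, z]     # predict with A, then correct with B
    return alpha.sum()                    # marginalize the final hidden state
```

As a sanity check, summing `forward_likelihood` over all possible observation sequences of a fixed length gives 1.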

  6. This is compliant with the ideomotor view (James 1890; Prinz 1997) that actions are selected based on goal representations through bidirectional \(action \leftrightarrow effect\) links, where the \(action \rightarrow effect\) direction is equivalent to forward modeling and serves as an effect predictor, and the \(effect \rightarrow action\) direction is equivalent to inverse modeling and serves as a goal-directed action selection mechanism.

  7. Right and left trajectories were balanced for their (un)comfort, and the position of red and blue buttons was balanced across participants.

  8. More formally, Builders should select a signaling action (only) when its cost (modeled by the variable U) is lower than the losses associated with the Helper’s (expected) errors, which follow from its holding a false belief: an inference that can be performed with the likelihood computation method described above. An equivalent formulation of the problem compares the value of information (Howard 1966) resulting from the signaling action with the cost of executing it. Note that not executing a signaling action is a signal, too, and typically means that the interaction can proceed without modifications.
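The decision rule in footnote 8 can be illustrated numerically. The function name and all numbers below are our own toy assumptions, not quantities from the experiment:

```python
def should_signal(p_false_belief, loss_on_error, signaling_cost):
    """Signal only when the cost U is below the expected loss from the
    Helper acting on a false belief (a value-of-information comparison).

    p_false_belief : estimated probability that the Helper holds a false belief
    loss_on_error  : loss incurred if the Helper acts on that false belief
    signaling_cost : cost U of executing the signaling action
    """
    expected_loss = p_false_belief * loss_on_error
    return signaling_cost < expected_loss

print(should_signal(0.05, 10.0, 2.0))  # confident Helper → False
print(should_signal(0.60, 10.0, 2.0))  # uncertain Helper → True
```

The sketch captures only the myopic, one-step version of the rule discussed in footnote 2.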

  9. An estimate of the conditional distribution \(P(\textit{non-signaling} \mid \textit{likelihood})\); the probability of the signaling actions is simply \(1 - P(\textit{non-signaling} \mid \textit{likelihood})\).

  10. A hypothesis that can be made, and which is compatible with hierarchical processing in the brain (e.g., Friston 2008), is that during joint performance the processes in play change as a function of the confidence in the predictions. When the prediction error at the highest level is low, it tends to suppress processes at the lower levels (because these are redundant); prediction errors at higher levels make lower-level processes more relevant. In sum, monitoring can be done at the highest level that produces low prediction error.
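This monitoring hypothesis can be caricatured in a few lines; the function and its threshold are our own toy assumptions, not a model from the paper:

```python
def monitoring_level(prediction_errors, threshold=0.1):
    """Pick the highest (most abstract) level whose prediction error is low.

    prediction_errors: errors ordered from the highest level (index 0)
    down to the lowest; threshold is an arbitrary cutoff.
    """
    for level, error in enumerate(prediction_errors):
        if error < threshold:
            return level               # low error here suppresses lower levels
    return len(prediction_errors) - 1  # otherwise monitor at the lowest level
```

With errors `[0.02, 0.3, 0.5]` monitoring stays at the top level; with `[0.4, 0.03, 0.5]` it drops one level down.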

References

  • Aarts H, Gollwitzer P, Hassin R (2004) Goal contagion: perceiving is for pursuing. J Pers Soc Psychol 87:23–37

  • Austin JL (1962) How to do things with words. Oxford University Press, New York

  • Bacharach M (2006). In: Gold N, Sugden R (eds) Beyond individual choice. Princeton University Press, Princeton

  • Baker CL, Saxe R, Tenenbaum JB (2009) Action understanding as inverse planning. Cognition 113(3):329–349

  • Bicho E, Erlhagen W, Louro L, Silva ECE (2011) Neuro-cognitive mechanisms of decision making in joint action: a human-robot interaction study. Hum Mov Sci

  • Botvinick MM (2008) Hierarchical models of behavior and prefrontal function. Trends Cogn Sci 12(5):201–208

  • Botvinick MM, Braver TS, Barch DM, Carter CS, Cohen JD (2001) Conflict monitoring and cognitive control. Psychol Rev 108(3):624–652

  • Bratman ME (1992) Shared cooperative activity. Philos Rev 101:327–341

  • Camazine S, Franks NR, Sneyd J, Bonabeau E, Deneubourg JL, Theraula G (2001) Self-organization in biological systems. Princeton University Press, Princeton

  • Clark H, Krych M (2004) Speaking while monitoring addressees for understanding. J Mem Lang 50(1):62–81

  • Clark HH (1996) Using language. Cambridge University Press, Cambridge

  • Conte R, Castelfranchi C (1995) Cognitive and social action. University College London, London

  • Csibra G, Gergely G (2007) "Obsessed with goals": functions and mechanisms of teleological interpretation of actions in humans. Acta Psychol 124:60–78

  • Cuijpers RH, van Schie HT, Koppen M, Erlhagen W, Bekkering H (2006) Goals and means in action observation: a computational approach. Neural Netw 19(3):311–322

  • Demiris Y, Khadhouri B (2005) Hierarchical attentive multiple models for execution and recognition (hammer). Robot Auton Syst J 54:361–369

  • Desmurget M, Grafton S (2000) Forward modeling allows feedback control for fast reaching movements. Trends Cogn Sci 4:423–431

  • Dindo H, Zambuto D, Pezzulo G (2011) Motor simulation via coupled internal models using sequential Monte Carlo. In: Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI)

  • Doucet A, Godsill S, Andrieu C (2000) On sequential Monte Carlo sampling methods for Bayesian filtering. Stat Comput 10(3):197–208

  • Friston K (2008) Hierarchical models in the brain. PLoS Comput Biol 4(11):e1000211

  • Frith CD, Frith U (2006) How we predict what other people are going to do. Brain Res 1079(1):36–46

  • Frith CD, Frith U (2008) Implicit and explicit processes in social cognition. Neuron 60(3):503–510

  • Galantucci B (2005) An experimental study of the emergence of human communication systems. Cogn Sci 29:737–767

  • Garrod S, Pickering MJ (2004) Why is conversation so easy? Trends Cogn Sci 8(1):8–11

  • Garrod S, Pickering MJ (2009) Joint action, interactive alignment, and dialog. Top Cogn Sci 1(2):292–304

  • Georgiou I, Becchio C, Glover S, Castiello U (2007) Different action patterns for cooperative and competitive behaviour. Cognition 102(3):415–433

  • Gergely G, Csibra G (2003) Teleological reasoning in infancy: the naive theory of rational action. Trends Cogn Sci 7:287–292

  • Grice HP (1975) Logic and conversation. In: Cole P, Morgan JL (eds) Syntax and semantics. vol 3, Academic Press, New York

  • Grosz BJ (1996) Collaborative systems. AI Mag 17(2):67–85

  • Grush R (2004) The emulation theory of representation: motor control, imagery, and perception. Behav Brain Sci 27(3):377–396

  • Hamilton AFdC, Grafton ST (2007) The motor hierarchy: from kinematics to goals and intentions. In: Haggard P, Rossetti Y, Kawato M (eds) Sensorimotor foundations of higher cognition. Oxford University Press, Oxford

  • Hoffman G, Breazeal C (2007) Cost-based anticipatory action selection for human–robot fluency. IEEE Trans Robot 23(5):952–961

  • Howard R (1966) Information value theory. IEEE Trans Syst Sci Cybern 2(1):22–26

  • James W (1890) The principles of psychology. Dover Publications, New York

  • Jeannerod M (2001) Neural simulation of action: a unifying mechanism for motor cognition. NeuroImage 14:103–109

  • Jeannerod M (2006) Motor Cognition. Oxford University Press, Oxford

  • Kilner JM, Friston KJ, Frith CD (2007) Predictive coding: an account of the mirror neuron system. Cogn Process 8(3)

  • Kirsh D (1999) Distributed cognition, coordination and environment design. In: Proceedings of the European conference on cognitive science, pp 1–11

  • Knoblich G, Sebanz N (2008) Evolving intentions for social interaction: from entrainment to joint action. Philos Trans R Soc Lond B Biol Sci 363(1499):2021–2031

  • Konvalinka I, Vuust P, Roepstorff A, Frith CD (2010) Follow you, follow me: continuous mutual prediction and adaptation in joint tapping. Q J Exp Psychol (Colchester) 63(11):2220–2230

  • Lange FPD, Spronk M, Willems RM, Toni I, Bekkering H (2008) Complementary systems for understanding action intentions. Curr Biol 18(6):454–457

  • Levinson SC (2006) On the human "interaction engine". In: Enfield NJ, Levinson SC (eds) Roots of human sociality: culture, cognition and interaction. Berg, pp 39–69

  • Maynard-Smith J, Harper D (2003) Animal Signals. Oxford University Press, Oxford

  • Murphy KP (2002) Dynamic Bayesian networks: representation, inference and learning. PhD thesis, UC Berkeley, Computer Science Division

  • Newman-Norlund RD, van Schie HT, van Zuijlen AMJ, Bekkering H (2007) The mirror neuron system is more active during complementary compared with imitative action. Nat Neurosci 10(7):817–818

  • Newman-Norlund RD, Bosga J, Meulenbroek RGJ, Bekkering H (2008) Anatomical substrates of cooperative joint-action in a continuous motor task: virtual lifting and balancing. Neuroimage 41(1):169–177

  • Noordzij ML, Newman-Norlund SE, de Ruiter JP, Hagoort P, Levinson SC, Toni I (2009) Brain mechanisms underlying human communication. Front Hum Neurosci 3:14

  • Pacherie E (2008) The phenomenology of action: a conceptual framework. Cognition 107:179–217

  • Pezzulo G (2008) Coordinating with the future: the anticipatory nature of representation. Mind Mach 18(2):179–225

  • Pezzulo G (2011) Grounding procedural and declarative knowledge in sensorimotor anticipation. Mind Lang 26:78–114

  • Pezzulo G (submitted) Shared representations as coordination tools for interaction

  • Pezzulo G, Castelfranchi C (2009) Thinking as the control of imagination: a conceptual framework for goal-directed systems. Psychol Res 73(4):559–577

  • Prinz W (1990) A common coding approach to perception and action. In: Neumann O, Prinz W (eds) Relationships between perception and action. Springer, Berlin, pp 167–201

  • Prinz W (1997) Perception and action planning. Eur J Cogn Psychol 9:129–154

  • Rao RP, Ballard DH (1999) Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat Neurosci 2(1):79–87

  • Rizzolatti G, Craighero L (2004) The mirror-neuron system. Annu Rev Neurosci 27:169–192

  • Searle J (1990) Collective intentions and actions. In: Cohen PR, Morgan J, Pollack ME (eds) Intentions in communication. MIT Press, Cambridge, pp 401–416

  • Sebanz N, Knoblich G (2009) Prediction in joint action: what, when, and where. Top Cogn Sci 1:353–367

  • Sebanz N, Bekkering H, Knoblich G (2006) Joint action: bodies and minds moving together. Trends Cogn Sci 10(2):70–76

  • Tomasello M, Carpenter M, Call J, Behne T, Moll H (2005) Understanding and sharing intentions: the origins of cultural cognition. Behav Brain Sci 28(5):675–691

  • Tucker M, Ellis R (2004) Action priming by briefly presented objects. Acta Psychol 116:185–203

  • Vesper C, Butterfill S, Knoblich G, Sebanz N (2010) A minimal architecture for joint action. Neural Netw 23(8-9):998–1003

  • Van der Wel R, Knoblich G, Sebanz N (2010) Let the force be with us: Dyads exploit haptic coupling for coordination. J Exp Psychol Human Percept Perform

  • Wilson M, Knoblich G (2005) The case for motor involvement in perceiving conspecifics. Psychol Bull 131:460–473

  • Wolpert D, Flanagan J (2001) Motor prediction. Curr Biol 11:729–732

  • Wolpert DM, Doya K, Kawato M (2003) A unifying computational framework for motor control and social interaction. Philos Trans R Soc Lond B Biol Sci 358(1431):593–602

  • Yoshida W, Dolan RJ, Friston KJ (2008) Game theory of mind. PLoS Comput Biol 4(12):e1000254

Acknowledgments

The authors thank Guenther Knoblich, Natalie Sebanz and their research group for fruitful discussions, and two anonymous reviewers for helpful comments.

Author information

Corresponding author

Correspondence to Giovanni Pezzulo.

Additional information

The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreements 231453 (HUMANOBS) and 270108 (Goal-Leaders).

Appendix

A Inference in DBNs

Given a DBN, we can always split its nodes into a set of hidden variables at time \(t\), \(\mathcal{X}_t\), and a set of observed variables at the same time step, \(\mathcal{Z}_t\). To solve the inference task in DBNs, we want to recursively estimate the posterior distribution \(p(\mathcal{X}_t|\mathcal{Z}_{1:t})\) from the corresponding posterior one step earlier, \(p(\mathcal{X}_{t-1}|\mathcal{Z}_{1:t-1})\) (Murphy 2002).

The first step is to apply Bayes' rule to the target posterior:

$$ p({\mathcal{X}}_t|{\mathcal{Z}}_{1:t}) = \eta p({\mathcal{Z}}_t|{\mathcal{X}}_t, {\mathcal{Z}}_{1:t-1}) \cdot p({\mathcal{X}}_t|{\mathcal{Z}}_{1:t-1}) $$
(6)

The usual assumption of state completeness (i.e., the Markov assumption) allows us to simplify the equation above: if we knew \(\mathcal{X}_t\) and were interested in predicting \(\mathcal{Z}_t\), no past states or observations would provide any additional information. Thus, \(\mathcal{X}_t\) is sufficient to explain the observed variables \(\mathcal{Z}_t\), and the above equation simplifies to:

$$ p({\mathcal{X}}_t|{\mathcal{Z}}_{1:t}) = \eta p({\mathcal{Z}}_t|{\mathcal{X}}_t) \cdot p({\mathcal{X}}_t | {\mathcal{Z}}_{1:t-1}) $$
(7)

We can now expand the probability distribution \(p(\mathcal{X}_t |\mathcal{Z}_{1:t-1}) \):

$$ p({\mathcal{X}}_t | {\mathcal{Z}}_{1:t-1}) = \int p({\mathcal{X}}_t | {\mathcal{X}}_{t-1}, {\mathcal{Z}}_{1:t-1}) \cdot p({\mathcal{X}}_{t-1}|{\mathcal{Z}}_{1:t-1}) d {\mathcal{X}}_{t-1} $$
(8)

Once again we exploit the Markov assumption: given \(\mathcal{X}_{t-1}\), past states and observations provide no additional information regarding \(\mathcal{X}_t\). The above equation can therefore be further simplified to:

$$ p({\mathcal{X}}_t|{\mathcal{Z}}_{1:t}) = \eta p({\mathcal{Z}}_t|{\mathcal{X}}_t) \cdot \int p({\mathcal{X}}_t | {\mathcal{X}}_{t-1}) \cdot p({\mathcal{X}}_{t-1}| {\mathcal{Z}}_{1:t-1}) d {\mathcal{X}}_{t-1} $$
(9)

The last equation provides a general formulation of the inference task. However, in most practical applications no closed-form solution exists, and approximate methods must be used instead. One such method, known as particle filtering, belongs to the family of sequential Monte Carlo algorithms: the required posterior density is represented by a set of random samples (or particles) with associated weights, and estimates are computed from these samples and weights (Doucet et al. 2000).
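A bootstrap particle filter implementing the predict-correct cycle of Eq. (9) might be sketched as follows; the 1-D Gaussian random-walk model and all parameters are our own illustrative assumptions, not the model used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=1000,
                    trans_std=1.0, obs_std=0.5):
    """Approximate p(X_t | Z_{1:t}) with weighted samples (Doucet et al. 2000).

    Assumed state model:       x_t = x_{t-1} + N(0, trans_std^2)
    Assumed observation model: z_t = x_t + N(0, obs_std^2)
    """
    particles = rng.normal(0.0, 1.0, n_particles)   # samples from the prior
    estimates = []
    for z in observations:
        # Predict: sample from p(x_t | x_{t-1}) (the integral in Eq. 8)
        particles = particles + rng.normal(0.0, trans_std, n_particles)
        # Correct: weight each particle by the likelihood p(z_t | x_t) (Eq. 7)
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample to avoid weight degeneracy (the bootstrap step)
        particles = rng.choice(particles, size=n_particles, p=weights)
        estimates.append(particles.mean())
    return estimates
```

Each iteration realizes one recursion of Eq. (9): the transition draw replaces the integral over \(\mathcal{X}_{t-1}\), and the weighting implements the likelihood term \(p(\mathcal{Z}_t|\mathcal{X}_t)\).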

About this article

Cite this article

Pezzulo, G., Dindo, H. What should I do next? Using shared representations to solve interaction problems. Exp Brain Res 211, 613–630 (2011). https://doi.org/10.1007/s00221-011-2712-1

