Can Artificial Systems Be Part of a Collective Action?

Part of the Philosophical Studies Series book series (PSSP,volume 122)

Abstract

In this paper I will question whether being an agent in a collective action necessarily presupposes the demanding conditions that common philosophical views postulate as necessary for being an intentional agent. Artificial systems will serve as a showcase: to them we can ascribe participation in collective actions without necessarily ascribing the ability of individual intentional agency. To argue for this claim, I will clarify the conditions that a possible participant in a collective action necessarily has to fulfill. In the case of collective actions, it is not primarily intentionality that matters; rather, social abilities play the crucial role, since social cognition is what enables participants in a collective action to interact successfully. I will argue that successful interaction requires the ability to anticipate the behavior of others and to coordinate with them, but that it may not require higher-order representational abilities, including joint attention, theory of mind, and theory of emotion. The behavior of artificial systems may be interpreted as if they were able to ‘read’ social hints, and they can behave in such a way that we tend to treat them as if they had mental or even emotional states. This leads to cases of collective agency in which it is at least extremely counter-intuitive to describe the participating artificial system as a mere tool. In this paper I will argue that less demanding conditions, namely pure goal-directedness plus certain social and cognitive abilities, may be sufficient for participation in a collective action.

Keywords

  • Collective action
  • Joint action
  • Intentionality
  • Artificial systems

Fig. 11.1

Notes

  1.

    Unfortunately, one cannot observe a consistent usage of the concept ‘collective action’; in some debates (cp. Bratman 2014, 10), notions like ‘joint action’ are used to mark a distinction from collective agency. I will use the notions ‘collective action’ and ‘joint action’ interchangeably.

  2.

    Many philosophers claim that actions are always ‘intentional under some description’ (cp. Davidson 1980). There is wide agreement that this is true for individual human actions.

  3.

    Mass phenomena should be an object of future research.

  4.

    Compare Butterfill’s example of two strangers painting a large bridge red in his contribution to this volume.

  5.

    The second question will not be addressed in detail in this paper.

  6.

    This is not meant to be a detailed exegesis of Bratman’s theory; I merely want to highlight some essential requirements.

  7.

    One exception is Butterfill (2012), who objects that Bratman’s view is unable to explain simple cooperative interactions between very young children, who do not yet have a full understanding of other minds. Further contributions on these questions can be found in Tollefsen (2005) and Pacherie (2013); cp. Bratman (2014, 104ff).

  8.

    A possible objection might claim a gradual development of the ability to act. But then the distinction between behavior and action becomes blurred.

  9.

    For details see Strasser 2004.

  10.

    There might be opponents claiming that planning requires intentionality. I will use a broader notion of planning: if a system is able to find a way from a starting state to a goal state, it is legitimate to call this information processing planning.

  11.

    For more see Saygin et al. 2000.

  12.

    See the contribution of Wachsmuth to this volume. More references can be found at http://pub.uni-bielefeld.de/person/73476

References

  • Bratman, Michael E. 1992. Shared cooperative activity. The Philosophical Review 101(2): 327–341.

  • Bratman, Michael E. 1997. I intend that we J. In Contemporary action theory, Social action, vol. 2, ed. R. Tuomela and G. Holmstrom-Hintikka, 49–63. Dordrecht: Kluwer.

  • Bratman, Michael E. 2014. Shared agency: A planning theory of acting together. Oxford: Oxford University Press.

  • Braubach, Lars, Alexander Pokahr, Daniel Moldt, and Winfried Lamersdorf. 2005. Goal representation for BDI agent systems. In PROMAS 2004, LNAI 3346, ed. R.H. Bordini et al., 44–65. Heidelberg: Springer.

  • Butterfill, Stephen. 2012. Joint action and development. The Philosophical Quarterly 62(246): 23–47.

  • Butterfill, Stephen, and Corrado Sinigaglia. 2014. Intentions and motor representation in purposive action. Philosophy and Phenomenological Research 88(1): 119–145.

  • Davidson, Donald. 1980. Essays on actions and events. Oxford: Oxford University Press.

  • Knoblich, Günther, Stephen Butterfill, and Natalie Sebanz. 2010. Psychological research on joint action: Theory and data. In Psychology of learning and motivation, vol. 51, ed. B. Ross, 59–101. Burlington: Academic.

  • Mattar, Nikita, and Ipke Wachsmuth. 2012. Small talk is more than chit-chat: Exploiting structures of casual conversations for a virtual agent. In KI 2012: Advances in artificial intelligence, Lecture notes in computer science, vol. 7526, ed. Birte Glimm and Antonio Krüger, 119–130. Berlin: Springer.

  • Pacherie, Elisabeth. 2013. Intentional joint agency: Shared intention lite. Synthese 190(10): 1817–1839.

  • Saygin, Ayse P., Ilyas Cicekli, and Varol Akman. 2000. Turing test: 50 years later. Minds and Machines 10(4): 463–518.

  • Schmid, Hans Bernhard. 2009. Plural action: Essays in philosophy and social science. Dordrecht: Springer.

  • Searle, John. 1990. Collective intentions and actions. In Intentions in communication, ed. P.R. Cohen et al., 401–415. Cambridge, MA: MIT Press.

  • Strasser, Anna. 2004. Kognition künstlicher Systeme. Frankfurt: Ontos-Verlag.

  • Tollefsen, Deborah. 2005. Let’s pretend: Children and joint action. Philosophy of the Social Sciences 35(75): 74–97.

Author information

Correspondence to Anna Strasser.

Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Strasser, A. (2015). Can Artificial Systems Be Part of a Collective Action? In: Misselhorn, C. (ed.) Collective Agency and Cooperation in Natural and Artificial Systems. Philosophical Studies Series, vol 122. Springer, Cham. https://doi.org/10.1007/978-3-319-15515-9_11