
Social Robots as Echo Chambers and Opinion Amplifiers

Limits to the Social Integrability of Robots

Abstract

Using a practice-theoretical perspective on sociality, we investigate which social practices are reserved for humans. We argue that especially those practices which require participants to reciprocally recognize each other as persons clash with our conceptual understanding of robots. Furthermore, the paper provides reasons why this understanding of robots can be defended against a conception that attributes the status of persons to robots on the basis of their behavior. The simulated evaluative attitudes of robots are not rooted in the robots themselves; they turn out instead to be mere opinion amplifiers for their developers or sociotechnical echo chambers for their users. However, we also argue that building robots that can perfectly simulate recognition claims nevertheless poses a problem, since such devices would distort our social practices.

We would like to thank Laura Martena, Hauke Behrendt, Tom Poljanšek and Anne Clausen for helpful comments and exciting discussions that greatly contributed to the improvement of this paper.

Notes

  1.

    See, for example, Sophia’s YouTube channel, and especially this video: https://www.youtube.com/watch?v=NwNULnXk9b0.

  2.

    One detailed account that analyzes the necessary and “rule-like” conditions for participating in a social practice was suggested by Hauke Behrendt (2018). Behrendt develops a concept of social “inclusion” (into a practice) that has four relational elements. He claims that a full conceptualization of (social) inclusion must determine (i) the subject of inclusion, (ii) the object of inclusion, (iii) the instance of inclusion, and (iv) the rules of inclusion (Behrendt, 2018, pp. 162–182). We agree with Behrendt that the objects of social inclusion are social practices, and further that the instance that formally “licenses” inclusion in or participation in a social practice is in many cases not some bureaucratic agency but rather the other participants within the practice. Behrendt understands the conditions for being a principal subject of inclusion as having the necessary properties and capacities for “[…] being a sufficiently competent social Agent” (Behrendt, 2018, p. 165 f.). Rules of inclusion can then further limit access to social practices with respect to additional features, (ascribed) properties, or capacities that a subject needs to possess in order to be included in a social practice. For the purpose of this paper, we suggest tweaking Behrendt’s account of social inclusion into social practices slightly. Instead of conceptualizing the necessary conditions for principally being a subject of potential participation in a social practice as more or less general “social competences”, we propose to distinguish between the functional conditions of realization of a social practice and sufficient further status conditions, whose realization must be ascribed “on top”.

  3.

    Strictly speaking, in many cases the theoretical reconstruction of the normatively sensitive behavior of participants can be given by supposing “success conditions”, which are an explication of the participants’ implicit knowing-how.

  4.

    For the distinction between “manifest image” and “scientific image”, see Sellars (1962). The manifest image can be described as our ordinary understanding of ourselves and the world, in which normativity, reasons, and rationality play a role. The scientific image is, roughly speaking, our explanation of the world via theoretical, stipulated basic entities.

  5.

    It may be questioned whether romantic love really requires that the partner be able to break up the relationship (see Misselhorn, 2021). This does not, however, touch the more fundamental problem to which we point.

  6.

    Thanks to Anne Clausen for pointing out that Hegel’s account of relations of recognition is embedded in the necessities of the human life form.

References

  • AETHON. (2020). Retrieved December 12, 2020, from https://aethon.com

  • Behrendt, H. (2018). Das Ideal einer inklusiven Arbeitswelt: Teilhabegerechtigkeit im Zeitalter der Digitalisierung. Campus.

  • Bertram, G. W., & Celikates, R. (2015). Towards a conflict theory of recognition: On the constitution of relations of recognition in conflict. European Journal of Philosophy, 23(4), 838–861.

  • Bourdieu, P. (1979). Entwurf einer Theorie der Praxis: Auf der ethnologischen Grundlage der kabylischen Gesellschaft. Suhrkamp.

  • Brain Bar. (2020). Retrieved December 12, 2021, from https://www.youtube.com/watch?v=Io6xuGmS5pM

  • Brandom, R. (1994). Making it explicit: Reasoning, representing, and discursive commitment. Harvard University Press.

  • Danaher, J. (2020). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5–24.

  • Darling, K. (2017). “Who’s Johnny?” Anthropomorphic framing in human–robot interaction, integration, and policy. In P. Lin, G. Bekey, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0. Oxford University Press.

  • Dennett, D. (1988). Conditions of personhood. In M. F. Goodman (Ed.), What is a person? (pp. 145–167). Humana Press.

  • Elgin, C. Z. (2018). The epistemic normativity of knowing-how. In U. Dirks & A. Wagner (Eds.), Abel im Dialog (pp. 483–498). De Gruyter.

  • Evans, D. (2019). Wanting the impossible: The dilemma at the heart of intimate human–robot relationships. In Y. Wilks (Ed.), Close engagements with artificial companions. John Benjamins.

  • Frankfurt, H. G. (1971). Freedom of the will and the concept of a person. The Journal of Philosophy, 68(1), 5–20.

  • Giddens, A. (1984). The constitution of society. Blackwell.

  • Hanson Robotics. (2020). Retrieved December 12, 2020, from https://www.hansonrobotics.com/sophia/

  • Honneth, A. (2010). Kampf um Anerkennung: Zur moralischen Grammatik sozialer Konflikte. Suhrkamp.

  • Ikäheimo, H. (2007). Recognizing persons. Journal of Consciousness Studies, 14(5–6), 224–247.

  • Ikäheimo, H. (2014). Anerkennung. De Gruyter.

  • Ikäheimo, H. (2019). Intersubjective recognition and personhood as membership in the life-form of persons. The social ontology of personhood (draft 2019.08).

  • Iser, M. (2019). Recognition. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved from https://plato.stanford.edu/archives/sum2019/entries/recognition/

  • Laitinen, A. (2002). Interpersonal recognition: A response to value or a precondition of personhood? Inquiry, 45(4), 463–478.

  • Levy, D. (2008). Love and sex with robots: The evolution of human–robot relationships. Harper Perennial.

  • Misselhorn, C. (2018). Grundfragen der Maschinenethik (5th ed. 2022). Reclam.

  • Misselhorn, C. (2021). Künstliche Intelligenz und Empathie: Vom Leben mit Emotionserkennung, Sexrobotern & Co. Reclam.

  • Misselhorn, C. (2022). Artificial moral agents: Conceptual issues and ethical controversy. In S. Voeneky et al. (Eds.), The Cambridge handbook of responsible artificial intelligence: Interdisciplinary perspectives (in press).

  • Poljanšek, T., & Störzinger, T. (2020). Of waiters, robots, and friends: Functional social interactions vs. close interhuman relationships. Culturally Sustainable Social Robotics, 68–77.

  • Reckwitz, A. (2003). Basic elements of a theory of social practices. Zeitschrift für Soziologie, 32(4), 282–301.

  • Schatzki, T. R. (2008). Social practices: A Wittgensteinian approach to human activity and the social. Cambridge University Press.

  • Searle, J. (1996). The construction of social reality. Penguin.

  • Seibt, J. (2018). Classifying forms and modes of co-working in the ontology of asymmetric social interactions (OASIS). In M. Coeckelbergh, J. Loh, M. Funk, J. Seibt, & M. Nørskov (Eds.), Envisioning robots in society – Power, politics, and public space (Frontiers in Artificial Intelligence and Applications).

  • Sellars, W. (1962). Philosophy and the scientific image of man. In R. Colodny (Ed.), Frontiers of science and philosophy (pp. 35–78). University of Pittsburgh Press.

  • Sellars, W. (1997). Empiricism and the philosophy of mind. Harvard University Press.

  • Stahl, T. (2013). Immanente Kritik: Elemente einer Theorie sozialer Praktiken (Theorie und Gesellschaft). Campus.

  • Williams, B. (1965). Ethical consistency. Proceedings of the Aristotelian Society, 39, 103–124.

  • Zoetic AI. (2020). Retrieved December 12, 2020, from https://www.kiki.ai

Author information

Corresponding author

Correspondence to Catrin Misselhorn.

Copyright information

© 2023 Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter

Cite this chapter

Misselhorn, C., Störzinger, T. (2023). Social Robots as Echo Chambers and Opinion Amplifiers. In: Misselhorn, C., Poljanšek, T., Störzinger, T., Klein, M. (eds) Emotional Machines. Technikzukünfte, Wissenschaft und Gesellschaft / Futures of Technology, Science and Society. Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-37641-3_10
