Social Robots and Recognition

  • Marco Nørskov
  • Sladjana Nørskov
Editorial Notes

Social robotic solutions are currently being studied, developed, and tested on a large scale. From workplace collaborators to caretakers of our children and elderly to intimate companions—social robots are imagined to enter virtually all spheres of social interaction. They are heralded as solutions to, as well as cautioned against with respect to, a variety of socio-political problems such as demographic challenges and inequality (cf. Ford 2015), and they put the plasticity of our established conceptual frameworks to the test (e.g., Seibt 2017, 2018; Nørskov 2015). Although the appearance and functionality of these machines vary, what they have in common is that they draw on our fundamental relational capacities (e.g., Turkle 2012). A central topic in this context is recognition, here understood as the acknowledgement of the other as individual/collective and robot/human. At least since Hegel’s famous master-slave dialectic, this type of recognition—which goes beyond the notion of discrimination between mere objects—has been prominent in the philosophical literature. Only recently, however, does it seem to have gained broader traction in the philosophically informed discourse on social robotics (cf. Coeckelbergh 2015; Gertz 2016; Nørskov 2011 for examples of research drawing on Hegel’s concept of recognition). Given the vital nature of social recognition—i.e., its being essential to our flourishing as humans—and under the assumption that human-machine interaction will increase, it becomes an urgent task to critically and constructively assess the status and transformational potential of recognition of/by social robots.

The prospect of extensive integration of social robots into practices where they become players in the human game of recognition (in its various conceptual nuances) raises numerous socio-ontological, (machine-)ethical, and socio-political questions. To (mis)recognize a robot or to be (mis)recognized by a robot may entail modified or even new ways of relating to others. Recognition emerging from human-robot encounters has the power to affect our self-understanding as individuals and collectives and to transform our interpersonal relationships and communities. The contributions to this special issue address some of these aspects of recognition.

Based on virtue ethics and social recognition theory, Cappuccio et al. explain and discuss the psychological mechanisms underlying moral consideration for robots and argue that social robots should be treated as moral patients. The authors explain how human-robot relationships can have a moral valence due to their pragmatic situatedness, pointing to the dynamics and spontaneity of the (pseudo-)social relationships between robots and humans that entail a (quasi-)mutual social recognition. The authors elaborate on a central contradiction that moral consideration for robots is likely to face, namely the “anthropomorphizing while de-humanizing robots paradox”. This paradox, they explain, arises because cultural standards dictate that humans treat robots as instruments while at the same time encouraging emotional investment in robots, which results in cognitive dissonance. The authors therefore point to the need to find a way to avoid the emergence of de-humanizing dispositions.

In their contribution to the special issue, Brinck and Balkenius challenge the assumption that people prefer to interact with robots that look and act like humans. They argue against designing robots in a way that exploits the human tendency to emotionally engage with and form relationships with others. The authors therefore contemplate how robots could be designed without relying on emotional engagement to trigger and maintain HRI—for instance, by initiating and sustaining interaction solely through an imitation function based on attentional engagement via gaze, vocalization, and touch. Embodied recognition, the authors argue, should be a starting point for HRI, while mutual recognition between a human and a robot as functional equals is fundamental to successful human-robot collaboration and joint action.

Positioned at the intersection of art and technology, the contribution by Vorster analyzes and discusses recognition through a cultural lens. The author examines the works of artist Zhou Song and how different representations of social robots affect human self-understanding as individuals and collectives. This contribution points to the tensions between life and non-life, fragility and strength, progress and destruction, etc. in the way robots, as opposed to humans, are represented in the works of Zhou Song. Exposing these tensions allows the artist to explore how recognition of the human body versus the robot is facilitated and disrupted, invoking a reflection upon our current frameworks of self and other.

1 Conclusion and Outlook

The contributions to this special issue set the scene for further investigation of the topic of recognition of/by social robots from the perspectives of philosophy, psychology, the cognitive sciences, and art. This mix of perspectives illustrates that social robotics is a truly interdisciplinary field. The ideas presented in the special issue are valuable for decision makers, researchers, and policy makers alike. Decision makers in the robotics industry can use these ideas to inform the design and development of social robots, while researchers can further advance the perspectives and ideas offered here and apply them to different contexts in which social robots are present. Not least, issues of relevance to policy makers, related to what is ethically in-/appropriate and socially un-/desirable, are discussed.

In contexts in which humans and robots may have to collaborate, or already do, such as workplaces, the exchange relationships between humans and robots are bound to affect not only the social dynamics of the entire organization—including new experiences of (mis)recognition that shape the practical identity of humans and robots—but also the normative expectations that stem from this identity. In workplaces, organizational members often need to sacrifice their own goals and interests to achieve collective goals (Ellemers et al. 2004). This entails a risk of polarization between the individual and the collective, which may be even more pronounced in settings based on human-robot collaboration. If the exchange relationships change in character because robots have become a part of the relationship, this may create a challenge as to how to ensure that the internalized values and social identities of human organizational members are based on a conception of self in collective rather than individual terms.

Recognition is a “vital human need” (Taylor 1992, p. 26) and, according to Hegel, even more fundamental: “[s]elf-consciousness exists in and for itself when, and by the fact that, it so exists for another; that is, it exists only in being acknowledged” (Hegel 1977, p. 111). The question remains how the need for recognition, in both the normative and the psychological sense, will be met in the different settings that require humans and robots not only to coexist but also to interact and collaborate closely. Recognition unavoidably encompasses relationships of power (van den Brink and Owen 2007) that can affect not only individual affect, behavior, and cognition (Keltner et al. 2003) but also group processes and outcomes (Greer et al. 2017). This issue is thus relevant beyond the dyadic human-robot relationship, and it highlights the necessity of a deeper understanding of the structural properties of a social system comprising humans and robots, and of the way reciprocal interactions between humans and robots enable and constrain each other.

References

  1. Coeckelbergh, M. (2015). The tragedy of the master: automation, vulnerability, and distance. Ethics and Information Technology, 17(3), 219–229.
  2. Ellemers, N., de Gilders, D., & Haslam, S. A. (2004). Motivating individuals and groups at work: a social identity perspective on leadership and group performance. Academy of Management Review, 29(3), 459–478.
  3. Ford, M. (2015). Rise of the robots: technology and the threat of a jobless future. New York: Basic Books.
  4. Gertz, N. (2016). The Master/iSlave dialectic: post (Hegelian) phenomenology and the ethics of technology. In J. Seibt, M. Nørskov, & S. S. Andersen (Eds.), What social robots can and should do: proceedings of Robophilosophy 2016/TRANSOR 2016 (pp. 136–144). Amsterdam: IOS Press Ebooks.
  5. Greer, L. L., Van Bunderen, L., & Yu, S. (2017). The dysfunctions of power in teams: a review and emergent conflict perspective. Research in Organizational Behavior, 37, 103–124.
  6. Hegel, G. W. F. (1977). Hegel’s phenomenology of spirit (trans: Miller, A.V.). Oxford: Oxford University Press.
  7. Keltner, D., Gruenfeld, D. H., & Anderson, C. (2003). Power, approach, and inhibition. Psychological Review, 110, 265–284.
  8. Nørskov, M. (2011). Human-robot interaction: episteme and enslavement. In Prolegomena to social robotics: philosophical inquiries into perspectives on human-robot interaction (pp. 65–95). Aarhus: Aarhus University.
  9. Nørskov, M. (2015). Revisiting Ihde’s fourfold “technological relationships”: application and modification. Philosophy & Technology, 28(2), 189–207.
  10. Seibt, J. (2017). Towards an ontology of simulated social interaction: varieties of the “as if” for robots and humans. In R. Hakli & J. Seibt (Eds.), Sociality and normativity for robots: philosophical inquiries into human-robot interaction (pp. 11–39). New York: Springer Publishing Company.
  11. Seibt, J. (2018). Classifying forms and modes of co-working in the ontology of asymmetric social interactions (OASIS). In M. Coeckelbergh, J. Loh, M. Funk, J. Seibt, & M. Nørskov (Eds.), Envisioning robots in society—power, politics, and public space: proceedings of Robophilosophy 2018 (pp. 133–146). Amsterdam: IOS Press Ebooks.
  12. Taylor, C. (1992). The politics of recognition. In A. Gutmann (Ed.), Multiculturalism: examining the politics of recognition (pp. 25–73). Princeton: Princeton University Press.
  13. Turkle, S. (2012). Alone together: why we expect more from technology and less from each other. New York: Basic Books.
  14. van den Brink, B., & Owen, D. (2007). Recognition and power: Axel Honneth and the tradition of critical social theory. Cambridge: Cambridge University Press.

Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. Department of Philosophy and History of Ideas, Aarhus University, Aarhus, Denmark
  2. Hiroshi Ishiguro Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan
  3. Department of Business Development and Technology, Aarhus University, Aarhus, Denmark