Towards an Ontology of Simulated Social Interaction: Varieties of the “As If” for Robots and Humans

Chapter in: Sociality and Normativity for Robots

Part of the book series: Studies in the Philosophy of Sociality (SIPS)

Abstract

The paper develops a general conceptual framework for the ontological classification of human-robot interaction. After arguing against fictionalist interpretations of human-robot interactions, I present five notions of simulation or partial realization, formally defined in terms of relationships between process systems (approximating, displaying, mimicking, imitating, and replicating). Since each of the n criterial processes for a type of two-agent interaction \(\mathfrak{I}\) can be realized in at least six modes (full realization plus five modes of simulation), we receive a \((6^n \times n) \times (6^n \times n)\) matrix of symmetric and asymmetric modes of realizing \(\mathfrak{I}\), called the “simulatory expansion” of interaction type \(\mathfrak{I}\). Simulatory expansions of social interactions can be used to map out different kinds and degrees of sociality in human-human and human-robot interaction, relative to current notions of sociality in philosophy, anthropology, and linguistics. The classificatory framework developed (SISI) thus represents the field of possible simulated social interactions. SISI can be used to clarify which conceptual and empirical grounds we can draw on in order to evaluate capacities and affordances of robots for social interaction, and it provides the conceptual means to build up a taxonomy of human-robot interaction.
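
To make the combinatorics of the simulatory expansion concrete, the following is a minimal Python sketch (an illustrative reconstruction, not code from the chapter; the mode names follow the abstract). It enumerates one agent's \(6^n\) realization profiles, each an n-tuple of modes (one row of a \(6^n \times n\) array), and pairs them to obtain the symmetric and asymmetric joint modes:

```python
from itertools import product

# Six modes of realizing a criterial process: full realization plus the
# five modes of simulation named in the abstract.
MODES = ("realizing", "approximating", "displaying",
         "mimicking", "imitating", "replicating")

def agent_profiles(n):
    """All ways one agent can realize the n criterial processes of an
    interaction type: one mode per process, i.e. 6**n profiles."""
    return list(product(MODES, repeat=n))

def simulatory_expansion(n):
    """All ordered pairs of agent profiles: the symmetric and asymmetric
    joint modes of realizing a two-agent interaction type."""
    profiles = agent_profiles(n)
    return [(p1, p2) for p1 in profiles for p2 in profiles]

expansion = simulatory_expansion(2)  # toy interaction type with 2 criterial processes
print(len(expansion))                              # 36 * 36 = 1296 joint modes
print(sum(1 for p1, p2 in expansion if p1 == p2))  # 36 symmetric joint modes
```

Even for small n the space grows quickly; Footnote 26 below suggests surveying such expansions for dependencies and clusters.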

Notes

  1.

    Note that pragmatist analytical ontology is not committed to the facticity or even the possibility of rational discourse, just to its utility as regulative idea and regulated praxis.

  2.

    For the sake of simplification I shall throughout this paper assume that an interaction has just two participants (i.e., two human participants, or a human and a robot).

  3.

    For the sake of the argument in this section I operate here with a simplified version of Searle’s definition of social reality: “For all kinds Z, instances of kind Z are part of social reality iff there are X, Y, and C: X takes Y to count as a Z in circumstances C” (cf. Searle, 2010).

  4.

    Here I bracket the question whether “anthropomorphizing” is the right label for make-believe projections of this kind. Treating something as a companion or foe does not necessarily imply treating it as a human being. Especially if one applies the “non-exceptionalist” notions of sociality I discuss below, one might argue that even though human beings are the primary instances of social actors, our long-standing practice of projecting social roles onto natural things and artifacts is a way to “socialize” the world, not to “anthropomorphize” it.

  5.

    Throughout this paper I use the term “agent” in the broad disjunctive sense where it refers either to agents proper, i.e., conscious living things that can act on intentions, or to inanimate or living items that are causal origins of events that resemble (and thus can be treated as) actions, i.e., the doings of agents proper.

  6.

    In the context of this paper I take it that these two facts are self-evident elements of the “logic” of social practices; a more detailed discussion of the semantics of fictional discourse in application to the performative-ascriptive predicates for social and moral roles is in preparation.

  7.

    A promise given by a fictional character of a stage play—e.g., Romeo’s promise to Juliet to return the following night—is a behavior that counts as promise with respect to our actual social conventions. But due to the referential shifts introduced by the context of the stage play, the actor playing Romeo makes no commitment at all beyond the commitment to playing Romeo—the commitment to return is not the actor’s fictional commitment but the commitment of the fictional character Romeo. It is an act that in the context of the fiction counts as the promise of the fictional character x that p—and this is, I think, the only coherent sense we can give to the idea of a “fictional commitment.”—My views here benefitted from many discussions with Stefan Larsen, in the course of his PhD project on the status of robot sociality, see Larsen (How to Build a Robot and Make It Your Friend. PhD dissertation. Aarhus University, Denmark, 2016); Larsen offers a detailed investigation of the fictionalist conventions of the theatre and uses them as heuristic metaphor for the description of human-robot interactions.

  8.

    By contrast with the infelicitous formulation in (1), when Breazeal describes a colleague’s interaction with his infant daughter she keeps the fictional scope correctly focused on the descriptive predicate “socially fully aware and responsive [agent]”: “Simply stated, he treats his infant as if she is already fully socially aware and responsive—with thoughts, wishes, intents, desires, and feelings that she is trying to communicate to him as any other person would” (Breazeal, 2002, p. 30).

  9.

    Unless one champions a purely behaviorist account of social interactions, the definition of a social interaction will not only include behavioral performance conditions but also state additional conditions relating to the agent’s intentions and understanding of norms.

  10.

    Since the performance conditions for social interactions relate only to behavior and not to the intentions of the agents involved, behaving as if one were to perform a certain social action A that is part of a social interaction \(\mathfrak{I}\) is tantamount to exhibiting behavior that counts as the relevant part of social interaction \(\mathfrak{I}\). The agent might intend to perform another action, e.g., she might intend to do B = [pretend doing A], but this does not detract from the fact that the performance conditions for A have been fulfilled.

  11.

    In Seibt (2014b) I use the contrast “friend” vs. “person” to highlight the difference between descriptive and performative-ascriptive terms—in hindsight, this was infelicitous, since even though the term “friend” is partly descriptive, the performative-declarative elements of the meaning of friendship arguably are dominant. The predicates “friend” and “person” belong on the same side and should be contrasted with (predominantly) descriptive predicates such as “child” or “woman.”

  12.

    That we cannot uphold a fictionality gap for social interactions is also implicitly reflected in concerns about the expectable cultural change effected by social robotics. Authors who warn against the pervasive use of social robots are not worried about humans losing themselves in realms of fictionality. Rather, they fear a degradation of social interactions, due to an increased functionalization of social relations (cf. Sharkey & Sharkey, 2012) where “the performance of connection seems connection enough” (cf. Turkle, 2011, p. 26). The worry is that we will increasingly reduce the definition of a social action to its performance conditions, i.e., that we will abandon our current understanding that the concept of a social action comprises both conditions relating to behavior and conditions relating to intentional states. While we currently criticize each other for performing bad social actions by going through the motions, by not living up to the concept of the social action in question, such criticism will vanish—or so the argument goes—and with it the social standards for performing social actions well, i.e., sincerely.

  13.

    For an overview of these requirements of standard accounts of social interaction see Hakli (2014). Hakli discusses what I call “the soft problem” in a differentiated fashion that also involves larger epistemological perspectives on conceptual change and the conventionality of conceptual contents.

  14.

    Relevant technical details, especially also on the part-relation for processes I shall use in the following definitions, can be found in Seibt (2005, 2009, 2014a).

  15.

    For details cf. Seibt (2014a).

  16.

    The following five definitions are simplified—and, I hope, thereby improved—versions of those presented in Seibt (2014b). In the following I use capital letters as variables for process types and Greek letters as variables for instances of a process type. Note, though, that in General Process Theory there are, strictly speaking, no “instances” or “tokens”, since I consider the type-token (kind-instance) distinction to mark extremal regions on a gradient scale of specificity; in order to simplify the exposition here I stay with the traditional idiom and speak of highly specific, localized processes as instances (tokens) of kinds (types) of processes.

  17.

    For present purposes I must rely on an intuitive understanding of the theoretical predicates “functional” and “non-functional”; a more precise statement of the envisaged distinction is quite involved, especially from within a naturalist metaphysics where all properties, even qualia, are in some sense “functionalized.”—Here I am also neglecting specific issues of realizations that arise when the process in question is an action and intentional scope comes into play. In Seibt (2014b) I formulate the right-hand side of the biconditional as a disjunction: action A is realized by a process system Σ if the system generates an instance of A or if Σ realizes instances of all subprocesses \(B_1, \ldots, B_n\) of A. The disjunctive formulation is to account for variations in intentional scope, e.g., the difference between the holistic performance of an action by an expert versus the summative performance of the same action by a novice. (A schematic code rendering of this disjunctive condition is sketched after these notes.)

  18.

    More precisely the definiendum should be formulated as: “process system Σ functionally replicates \(A_P\) as realizable in Σ”; here and in the following definitions I take this restriction to be understood.

  19.

    As noted above (Footnote 4), instead of “anthropomorphizing” we should rather speak of a human tendency to “socialize” the environment. Above I pointed out that the performance conditions for human social behavior cannot take intentional states into account. The phenomenon we commonly call “anthropomorphizing” indicates, I think, that the performance conditions for social behavior operate with observable criteria that are schematic, involving generic Gestalts. As it appears, judging from our practices of interpretation, the upward position of the corners of a mouth does not need to be an intentional smile to count as a smile, nor does it need to resemble a human smile in all regards—a mouse can smile, and so can a sponge, a car, or a tree. That we use such general observable indicators of socially relevant actions and emotional states could be explained in evolutionary terms as follows. Surely it is preferable for humans to risk erroneous inclusions into the space of social interactions rather than erroneous exclusions; if we mistake something for a social agent, the error can be corrected without incurring social repercussions, but if we fail to recognize a social agent as such, this would amount to a social offense.

  20.

    The suggested classification of forms of simulation should also prove useful in the discussion of design issues in social robotics. For example, we can use it to compare and evaluate robotic simulations of an action \(A_P\) in terms of degrees of simulation; here one might refine SIM-5 by introducing degrees of approximation. Or the classification may be used to plan design goals, e.g., in order to decide whether there are any ethical reasons to aim for higher degrees of simulation than mimicking. (A toy operationalization of such degrees is sketched after these notes.)

  21.

    Apart from Aristotle’s conceptual analysis of interactions (cf. Gill, 2004), the most important source and resource for a future ontology of interactions is the work of Mark Bickhard, who combines empirical and conceptual research to promote “interactivism” both as a paradigm for empirical research and as a comprehensive theoretical stance or metaphysical view; cf. e.g. Bickhard (2009a, 2009b) and his chapter in this volume.

  22.

    Since the two definitions are to be as generic as possible, no requirements for dynamic, temporal, or spatial relationships among the parts of an interaction have been added. That is, I am assuming here that the processes that are the parts of an interaction may occur all or partly simultaneously, or with overlap, or sequentially in series. But we can easily introduce types and subtypes of interactions\(_i\) by specifying which temporal, spatial, and dynamic relationships need to hold among the parts of the interaction in question. To simplify the exposition I omit here and in the following definitions specifications of the partition levels at which the parts of \(A_P\) are situated; as mentioned above, the embedding ontological framework of General Process Theory operates with a non-transitive part relation and parts are indexed to partition levels.

  23.

    For an overview of current accounts see e.g. Setiya (2007).

  24.

    To keep all options open, I will here also assume that it is conceptually possible to entertain the thesis that future robots may be able to imitate or functionally replicate intending to do X.

  25.

    Abbreviations for the modes of simulation are used as names for occurrences that simulate the action in the relevant column.

  26.

    For example, surveying the set of asymmetric simulatory expansions of an interaction\(_2\) one could investigate whether there are dependencies among expansions (“if this partial action is merely displayed, then that must be mimicked”, etc.) or, vice versa, one could try to identify certain clusters of asymmetric simulatory expansions (“this sort of simulation succeeds for all interactions\(_2\) that are short term/in educational contexts/involve touch etc.”). Answers can be further tailored to technical possibility versus practical feasibility, or practical versus moral desirability. (A toy survey of this kind is sketched after these notes.)

  27.

    Cf. Turkle (2011). See Hakli (2014) for a discussion of possible methodological strategies of accommodating these phenomena for a theory of sociality.

  28.

    To call such preconscious, non-intended occurrences “non-agentive” may be problematic—one might agree with my criterion for agency above that if an occurrence is an action then it must be possible to intend it, but deny that the implication also holds in the other direction; all processes that occur as parts of an intended action are agentive, one might say, even though they are not intended. I must bracket this issue here.

  29.

    Phenomenological analyses of forms of responsiveness have long drawn attention to the importance of this difference, but there is also increasing interest in “second person cognitive science” (cf. e.g. Reddy, 2008; Schilbach et al., 2013).

  30.

    Elsewhere (see Seibt 2018) I argue that for the purposes of attributing responsibility in a sufficiently differentiated fashion we need to distinguish between: (i) the second person point of view of the human interactor with the robot; (ii) the internal third person point of view of the roboticist who designs the interaction; (iii) the external third person point of view of the observer of a human-robot interaction; and (iv) the omniscient third person point of view of the cultural community evaluating the human-robot interaction and its effects on the surrounding context relative to the community’s norms and values.

  31.

    The conceptual tools of SISI are particularly basic (since the framework is grounded in a foundational ontology) yet precise and—due to its simple combinatorial strategies—highly expressive. It is therefore possible to translate into the classificatory framework of SISI other proposals of distinctions in capacities for moral agency (see e.g., Wallach & Allen, 2009), or in asymmetric forms of collective agency (see in particular the interesting contributions to Misselhorn, 2015); the details of these embeddings are yet to be worked out.
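
The disjunctive realization condition of Footnote 17 can be rendered schematically as follows. This is a minimal sketch under two assumptions of mine: processes are represented as trees of subprocesses, and the system's generative capacities are modelled by a predicate passed in as a function.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """A process type A with its subprocesses B_1, ..., B_n (a tree
    representation assumed for illustration; the chapter's leveled
    mereology is richer than a tree)."""
    name: str
    subprocesses: list = field(default_factory=list)

def realizes(generates, process):
    """Disjunctive condition reported from Seibt (2014b): a process system
    realizes A iff it generates an instance of A itself (holistic
    performance), or it realizes instances of all subprocesses B_1, ..., B_n
    of A (summative performance). `generates` models what the system
    produces directly."""
    if generates(process):
        return True
    return bool(process.subprocesses) and all(
        realizes(generates, b) for b in process.subprocesses
    )

# Example: a novice system that cannot generate "greeting" holistically
# but generates both of its subprocesses (names are hypothetical).
greeting = Process("greeting", [Process("eye-contact"), Process("salutation")])
def novice(p):
    return p.name in {"eye-contact", "salutation"}
print(realizes(novice, greeting))  # True, via the summative disjunct
```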
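
For the design comparisons envisaged in Footnote 20, one would need to order the modes of simulation by degree. The ordinal ranking below is purely an assumption for illustration; the chapter itself leaves the refinement into degrees (e.g., degrees of approximation within SIM-5) open:

```python
# Hypothetical ranking of the modes from weakest simulation to full
# realization; the specific order is an illustrative assumption, not a
# claim of the chapter.
DEGREE = {"displaying": 1, "approximating": 2, "mimicking": 3,
          "imitating": 4, "replicating": 5, "realizing": 6}

def meets_threshold(design, threshold):
    """Does a robotic design simulate every criterial process at or above
    a given degree? `design` maps process names to modes."""
    return all(DEGREE[mode] >= DEGREE[threshold] for mode in design.values())

# E.g., a design requirement that no process be simulated below mimicking
# (process names are made-up placeholders):
design = {"greeting": "imitating", "turn-taking": "mimicking", "empathy": "displaying"}
print(meets_threshold(design, "mimicking"))  # False: "empathy" is merely displayed
```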
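
Finally, the kind of survey sketched in Footnote 26 can be phrased as filtering the expansion. The dependency below (“if this process is merely displayed, that one must be mimicked”) and the process indices are made-up placeholders of the kind the note quotes:

```python
from itertools import product

MODES = ("realizing", "approximating", "displaying",
         "mimicking", "imitating", "replicating")

def asymmetric_expansion(n):
    """Asymmetric entries of a simulatory expansion: ordered pairs of
    agent profiles that differ in at least one mode."""
    profiles = list(product(MODES, repeat=n))
    return [(p1, p2) for p1 in profiles for p2 in profiles if p1 != p2]

def display_requires_mimicry(profile, i, j):
    """Toy dependency: if the i-th criterial process is merely displayed,
    the j-th must be mimicked (indices are illustrative)."""
    return profile[i] != "displaying" or profile[j] == "mimicking"

# Survey which asymmetric joint modes satisfy the dependency on the
# robot's side (here, arbitrarily, the second profile of each pair):
survey = [(human, robot) for human, robot in asymmetric_expansion(2)
          if display_requires_mimicry(robot, 0, 1)]
print(len(survey))
```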

References

  • Bernstein, D., & Crowley, K. (2008). Searching for signs of intelligent life: An investigation of young children’s beliefs about robot intelligence. The Journal of the Learning Sciences, 17(2), 225–247.

  • Bickhard, M. H. (2009a). Interactivism: A manifesto. New Ideas in Psychology, 27, 85–89.

  • Bickhard, M. H. (2009b). The interactivist model. Synthese, 166, 547–591.

  • Björnsson, G. (2011). Joint responsibility with individual control. In Moral Responsibility Beyond Free Will and Determinism (Vol. 27, pp. 181–199). New York: Springer.

  • Blakemore, S.-J., Bristow, D., Bird, G., Frith, C., & Ward, J. (2005). Somatosensory activations during the observation of touch and a case of vision–touch synaesthesia. Brain, 128(7), 1571–1583.

  • Breazeal, C. (2002). Designing Sociable Robots. Cambridge, MA: MIT Press.

  • Breazeal, C. (2003). Towards sociable robots. Robotics and Autonomous Systems, 42, 167–175.

  • Cabibihan, J.-J., Javed, H., Ang, M. Jr., & Aljunied, S. M. (2013). Why robots? A survey on the roles and benefits of social robots in the therapy of children with autism. International Journal of Social Robotics, 5(4), 593–618.

  • Cerulo, K. A. (2009). Nonhumans in social interaction. Annual Review of Sociology, 35, 531–552.

  • Clark, H. H. (2008). Talking as if. In 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2008 (p. 393). Piscataway: IEEE.

  • Coeckelbergh, M. (2010). Health care, capabilities, and AI assistive technologies. Ethical Theory and Moral Practice, 13(2), 181–190.

  • Coeckelbergh, M. (2012). Growing moral relations: Critique of moral status ascription. Houndmills, Basingstoke / New York: Palgrave Macmillan.

  • Danish Ethical Council. (2010). Sociale robotter. Udtalelse fra det etiske råd. Retrieved from http://www.etiskraad.dk/~/media/Etisk-Raad/Etiske-Temaer/Optimering-af-mennesket/Publikationer/Udtalelse-om-sociale-robotter.pdf. (Accessed 13 Sep 2016)

  • Dautenhahn, K. (2014). Human-Robot Interaction. In M. Soegaard & R. F. Dam (Eds.), The encyclopedia of human-computer interaction (2nd ed.). Aarhus/Denmark: The Interaction Design Foundation. (Available online: https://www.interaction-design.org/encyclopedia/human-robotinteraction.html)

  • Enfield, N. J., & Levinson, S. C. (Eds.). (2006). Roots of human sociality. New York: Berg Publishers.

  • Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A Survey of socially interactive robots. Robotics and Autonomous Systems, 42, 143–166.

  • Frith, C., & Frith, U. (2012). Mechanisms of metacognition. Annual Review of Psychology, 63, 287–313.

  • Gilbert, M. (2008). Social convention revisited. Topoi, 27(1–2), 5–16.

  • Gilbert, M. (2014). Joint commitment: How we make the social world. Oxford: Oxford University Press.

  • Gill, M. L. (2004). Aristotle’s distinction between change and activity. Axiomathes, 14(1), 3–22.

  • Goodwin, C. (2006). Human sociality as mutual orientation in a rich interactive environment: Multimodal utterances and pointing in aphasia. In N. J. Enfield & S. C. Levinson (Eds.), Roots of human sociality: Culture, cognition and interaction (pp. 97–125). Oxford/New York: Berg Publishers.

  • Gunkel, D. (2012). The machine question. Cambridge, MA: MIT Press.

  • Hakli, R. (2014). Social robots and social interaction. In J. Seibt, R. Hakli, & M. Nørskov (Eds.), Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014 (Vol. 273, pp. 105–115). Amsterdam: IOS Press.

  • Hutto, D. D. (2012). Folk psychological narratives: The sociocultural basis of understanding reasons. Cambridge, MA: MIT Press.

  • Kahn, P. H., Freier, N. G., Friedman, B., Severson, R. L., & Feldman, E. N. (2004). Social and moral relationships with robotic others? In 13th IEEE International Workshop on Robot and Human Interactive Communication, 2004 (ROMAN 2004). (pp. 545–550). Piscataway: IEEE.

  • Kahn, P. H., Friedman, B., Perez-Granados, D. R., & Freier, N. G. (2004). Robotic pets in the lives of preschool children. In CHI’04 Extended Abstracts on Human Factors in Computing Systems (pp. 1449–1452). New York: ACM Press.

  • Kalbe, E., Schlegel, M., Sack, A. T., Nowak, D. A., Dafotakis, M., Bangard, C., …Kessler, J. (2010). Dissociating cognitive from affective theory of mind: a TMS study. Cortex, 46(6), 769–780.

  • Laitinen, A. (2011). Recognition, acknowledgement, and acceptance. In H. Ikäheimo & A. Laitinen (Eds.), Recognition and social ontology (pp. 309–348). Leiden/Boston: Brill.

  • Leyzberg, D., Avrunin, E., Liu, J., & Scassellati, B. (2011). Robots that express emotion elicit better human teaching. In Proceedings of the 6th International Conference on Human-Robot Interaction (pp. 347–354).

  • Mameli, M. (2001). Mindreading, mindshaping, and evolution. Biology and Philosophy, 16(5), 595–626.

  • Misselhorn, C. (Ed.). (2015). Collective agency and cooperation in natural and artificial systems: Explanation, implementation and simulation (Vol. 122). Cham: Springer.

  • Petersson, B. (2013). Co-responsibility and causal involvement. Philosophia, 41(3), 847–866.

  • Reddy, V. (2008). How infants know minds. Cambridge, MA: Harvard University Press.

  • Samson, D., Apperly, I. A., Braithwaite, J. J., Andrews, B. J., & Bodley Scott, S. E. (2010). Seeing it their way: evidence for rapid and involuntary computation of what other people see. Journal of Experimental Psychology: Human Perception and Performance, 36(5), 1255.

  • Scassellati, B. (2002). Theory of mind for a humanoid robot. Autonomous Robots, 12(1), 13–24.

  • Schegloff, E. A. (2006). Interaction: The infrastructure for social institutions, the natural ecological niche for language, and the arena in which culture is enacted. In N. J. Enfield & S. C. Levinson (Eds.), Roots of human sociality: Culture, cognition and interaction (pp. 70–96). Oxford/New York: Berg Publishers.

  • Schilbach, L., Timmermans, B., Reddy, V., Costall, A., Bente, G., Schlicht, T., & Vogeley, K. (2013). Toward a second-person neuroscience. Behavioral and Brain Sciences, 36(4), 393–414.

  • Searle, J. R. (2010). Making the social world: The structure of human civilization. Oxford: Oxford University Press.

  • Seibt, J. (2005). General processes: A study in ontological category construction. Habilitation thesis, University of Konstanz, Germany.

  • Seibt, J. (2009). Forms of emergent interaction in general process theory. Synthese, 166(3), 479–512.

  • Seibt, J. (2014a). Non-transitive parthood, leveled mereology and the representation of emergent parts of processes. Grazer Philosophische Studien, 91, 165–191.

  • Seibt, J. (2014b). Varieties of the ‘as if’: Five ways to simulate an action. In J. Seibt, R. Hakli, & M. Nørskov (Eds.), Sociable robots and the future of social relations: Proceedings of Robo-Philosophy 2014 (Vol. 273, pp. 97–105). Amsterdam: IOS Press.

  • Seibt, J. (2018, forthcoming). The ontology of simulated social interaction: How to attribute sociality, collective agency, and responsibility in human-robot interaction. In J. Seibt, R. Hakli, & M. Nørskov (Eds.), Robophilosophy: Philosophy of, for, and by social robotics. Cambridge, MA: MIT Press.

  • Setiya, K. (2007). Reasons without rationalism. Princeton: Princeton University Press.

  • Sharkey, A., & Sharkey, N. (2012). Granny and the robots: ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27–40.

  • Sparrow, L., & Sparrow, R. (2006). In the hands of machines? The future of aged care. Minds and Machines, 16(2), 141–161.

  • Sullins, J. P. (2008). Friends by design: A design philosophy for personal robotics technology. In P. Kroes, P. E. Vermaas, A. Light, & S. A. Moore (Eds.), Philosophy and design: From engineering to architecture (pp. 143–157). Dordrecht: Springer.

  • Tuomela, R. (2013). Social ontology: Collective intentionality and group agents. Oxford: Oxford University Press.

  • Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.

  • Vallor, S. (2011). Carebots and caregivers: Sustaining the ethical ideal of care in the 21st century. Philosophy & Technology, 24, 251–268.

  • Veruggio, G. (2006). The EURON roboethics roadmap. In 6th IEEE-RAS International Conference on Humanoid Robots, 2006 (pp. 612–617).

  • Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

  • Walton, K. L. (1990). Mimesis as make-believe: On the foundations of the representational arts. Cambridge, MA: Harvard University Press.

  • Walton, K. L. (2005). Metaphor and prop oriented make-believe. In M. E. Kalderon (Ed.), Fictionalism in metaphysics (pp. 65–87). Oxford: Oxford University Press.

Author information

Correspondence to Johanna Seibt.

Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Seibt, J. (2017). Towards an Ontology of Simulated Social Interaction: Varieties of the “As If” for Robots and Humans. In: Hakli, R., Seibt, J. (eds) Sociality and Normativity for Robots. Studies in the Philosophy of Sociality. Springer, Cham. https://doi.org/10.1007/978-3-319-53133-5_2
