Abstract
The paper develops a general conceptual framework for the ontological classification of human-robot interaction. After arguing against fictionalist interpretations of human-robot interactions, I present five notions of simulation or partial realization, formally defined in terms of relationships between process systems (approximating, displaying, mimicking, imitating, and replicating). Since each of the n criterial processes for a type of two-agent interaction \(\mathfrak{I}\) can be realized in at least six modes (full realization plus five modes of simulation), we obtain a (6n × n) × (6n × n) matrix of symmetric and asymmetric modes of realizing \(\mathfrak{I}\), called the “simulatory expansion” of interaction type \(\mathfrak{I}\). Simulatory expansions of social interactions can be used to map out different kinds and degrees of sociality in human-human and human-robot interaction, relative to current notions of sociality in philosophy, anthropology, and linguistics. The classificatory framework developed (SISI) thus represents the field of possible simulated social interactions. SISI can be used to clarify which conceptual and empirical grounds we can draw on in order to evaluate capacities and affordances of robots for social interaction, and it provides the conceptual means to build up a taxonomy of human-robot interaction.
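The combinatorics behind the simulatory expansion can be made concrete with a small sketch. This is only an illustrative reading, assuming that each participant realizes each of an interaction's n criterial processes in one of the six modes, and that a pair of the two participants' mode-assignments is symmetric when both sides coincide; the mode labels are my shorthand for those named in the abstract:

```python
from itertools import product

# Six modes of realizing a criterial process: full realization plus the
# five modes of simulation (approximating, displaying, mimicking,
# imitating, replicating) named in the abstract.
MODES = ["realizes", "approximates", "displays", "mimics", "imitates", "replicates"]

def assignments(n):
    """All ways one participant can realize n criterial processes."""
    return list(product(MODES, repeat=n))

def simulatory_expansion(n):
    """All pairs of the two participants' mode-assignments."""
    sides = assignments(n)
    return [(a, b) for a in sides for b in sides]

pairs = simulatory_expansion(2)
print(len(assignments(2)))                 # 36 assignments per participant (6^2)
print(len(pairs))                          # 1296 symmetric and asymmetric pairs
print(sum(1 for a, b in pairs if a == b))  # 36 symmetric pairs
```

Even for n = 2 criterial processes the expansion is sizeable, which illustrates why the framework yields a fine-grained field of possible simulated interactions rather than a binary social/non-social distinction.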
Notes
- 1.
Note that pragmatist analytical ontology is not committed to the facticity or even the possibility of rational discourse, just to its utility as regulative idea and regulated praxis.
- 2.
For the sake of simplification I shall throughout this paper assume that an interaction has just two participants (i.e., two human participants, or a human and a robot).
- 3.
For the sake of the argument in this section I operate here with a simplified version of Searle’s definition of social reality: “For all kinds Z, instances of kind Z are part of social reality iff there are X, Y, and C: X takes Y to count as a Z in circumstances C” (cf. Searle, 2010).
- 4.
Here I bracket the question whether “anthropomorphizing” is the right label for make-believe projections of this kind. Treating something as companion or foe does not necessarily imply treating it as human being. Especially if one applies the “non-exceptionalist” notions of sociality I discuss below, one might argue that even though human beings are the primary instances of social actors, our long-standing practice of projecting social roles onto natural things and artifacts is a way to “socialize” the world, not to “anthropomorphize” it.
- 5.
Throughout this paper I use the term “agent” in the broad disjunctive sense where it refers either to agents proper, i.e., conscious living things that can act on intentions, or to inanimate or living items that are causal origins of events that resemble (and thus can be treated as) actions, i.e., the doings of agents proper.
- 6.
In the context of this paper I take it that these two facts are self-evident elements of the “logic” of social practices; a more detailed discussion of the semantics of fictional discourse in application to the performative-ascriptive predicates for social and moral roles is in preparation.
- 7.
A promise given by a fictional character of a stage play—e.g., Romeo’s promise to Juliet to return the following night—is a behavior that counts as a promise with respect to our actual social conventions. But due to the referential shifts introduced by the context of the stage play, the actor playing Romeo makes no commitment at all beyond the commitment to playing Romeo—the commitment to return is not the actor’s fictional commitment but the commitment of the fictional character Romeo. It is an act that in the context of the fiction counts as the promise of the fictional character x that p—and this is, I think, the only coherent sense we can give to the idea of a “fictional commitment.” My views here benefitted from many discussions with Stefan Larsen, in the course of his PhD project on the status of robot sociality; see Larsen (How to Build a Robot and Make It Your Friend. PhD dissertation. Aarhus University, Denmark, 2016); Larsen offers a detailed investigation of the fictionalist conventions of the theatre and uses them as heuristic metaphor for the description of human-robot interactions.
- 8.
By contrast with the infelicitous formulation in (1), when Breazeal describes a colleague’s interaction with his infant daughter she keeps the fictional scope correctly focused on the descriptive predicate “socially fully aware and responsive [agent]”: “Simply stated, he treats his infant as if she is already fully socially aware and responsive—with thoughts, wishes, intents, desires, and feelings that she is trying to communicate to him as any other person would” (Breazeal, 2002, p. 30).
- 9.
Unless one champions a purely behaviorist account of social interactions, the definition of a social interaction will include behavioral performance conditions, but also state additional conditions relating to the agent’s intentions and understanding of norms.
- 10.
Since the performance conditions for social interactions relate only to behavior and not to the intentions of the agents involved, behaving as if one were to perform a certain social action A that is part of a social interaction \(\mathfrak{I}\) is tantamount to exhibiting behavior that counts as the relevant part of social interaction \(\mathfrak{I}\). The agent might intend to perform another action, e.g., she might intend to do B = [pretend doing A], but this does not detract from the fact that the performance conditions for A have been fulfilled.
- 11.
In Seibt (2014b) I use the contrast “friend” vs. “person” to highlight the difference between descriptive and performative-ascriptive terms—in hindsight, this was infelicitous, since even though the term “friend” is partly descriptive, the performative-declarative elements of the meaning of friendship arguably are dominant. The predicates “friend” and “person” belong on the same side and should be contrasted with (predominantly) descriptive predicates such as “child” or “woman.”
- 12.
That we cannot uphold a fictionality gap for social interactions is also implicitly reflected in concerns about the expectable cultural change effected by social robotics. Authors who warn against the pervasive use of social robots are not worried about humans losing themselves in realms of fictionality. Rather, they fear a degradation of social interactions, due to an increased functionalization of social relations (cf. Sharkey & Sharkey, 2012) where “the performance of connection seems connection enough” (cf. Turkle, 2011, p. 26). The worry is that we increasingly will reduce the definition of a social action to its performance conditions, i.e., that we will abandon our current understanding that the concept of a social action comprises both conditions relating to behavior and conditions relating to intentional states. While we currently criticize each other for performing bad social actions by going through the motions, by not living up to the concept of the social action in question, such criticism will vanish—or so the argument goes—and with it the social standards for performing social actions well, i.e., sincerely.
- 13.
For an overview of these requirements of standard accounts of social interaction see Hakli (2014). Hakli discusses what I call “the soft problem” in a differentiated fashion that also involves larger epistemological perspectives on conceptual change and the conventionality of conceptual contents.
- 14.
- 15.
For details cf. Seibt (2014a).
- 16.
The following five definitions are simplified—and, I hope, thereby improved—versions of those presented in Seibt (2014b). In the following I use capital letters as variables for process types and Greek letters as variables for instances of a process type. Note, though, that in General Process Theory there are, strictly speaking, no “instances” or “tokens”, since I consider the type-token (kind-instance) distinction to mark extremal regions on a gradient scale of specificity; in order to simplify the exposition here I stay with the traditional idiom and speak of highly specific, localized processes as instances (tokens) of kinds (types) of processes.
- 17.
For present purposes I must rely on an intuitive understanding of the theoretical predicates “functional” and “non-functional”; a more precise statement of the envisaged distinction is quite involved, especially from within a naturalist metaphysics where all properties, even qualia, are in some sense “functionalized.” Here I am also neglecting specific issues of realization that arise when the process in question is an action and intentional scope comes into play. In Seibt (2014b) I formulate the right-hand side of the biconditional as a disjunction: action A is realized by a process system Σ if the system generates an instance of A or if Σ realizes instances of all subprocesses B₁, …, Bₙ of A. The disjunctive formulation is to account for variations in intentional scope, e.g., the difference between the holistic performance of an action by an expert versus the summative performance of the same action by a novice.
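The disjunctive formulation can be rendered as a toy recursion. The action names and part-structure below are hypothetical illustrations of my own, not the chapter's examples; they merely show how the expert/novice contrast falls out of the two disjuncts:

```python
# Toy rendering of the disjunctive realization condition from Seibt (2014b):
# a process system realizes action A iff it directly generates an instance
# of A (holistic, 'expert' performance), or it realizes instances of all of
# A's subprocesses (summative, 'novice' performance).
# The action "greeting" and its parts are hypothetical illustrations.

subprocesses = {
    "greeting": ["eye_contact", "smile", "verbal_salutation"],
}

def realizes(generated, action):
    if action in generated:                    # first disjunct: direct generation
        return True
    parts = subprocesses.get(action, [])
    # second disjunct: realization of all subprocesses (atomic actions
    # without parts must be generated directly)
    return bool(parts) and all(realizes(generated, b) for b in parts)

print(realizes({"greeting"}, "greeting"))                                   # True
print(realizes({"eye_contact", "smile", "verbal_salutation"}, "greeting"))  # True
print(realizes({"eye_contact", "smile"}, "greeting"))                       # False
```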
- 18.
More precisely the definiendum should be formulated as: “process system Σ* functionally replicates Aₚ as realizable in Σ”; here and in the following definitions I take this restriction to be understood.
- 19.
As noted above, Footnote 4, instead of “anthropomorphizing” we should rather speak of a human tendency to “socialize” the environment. Above I pointed out that the performance conditions for human social behavior cannot take intentional states into account. The phenomenon we commonly call “anthropomorphizing” indicates, I think, that the performance conditions for social behavior operate with observable criteria that are schematic, involving generic Gestalts. As it appears, judging from our practices of interpretation, the upward position of the corners of a mouth does not need to be an intentional smile to count as a smile, nor does it need to resemble a human smile in all regards—a mouse can smile, and so can a sponge, a car, or a tree. That we use such general observable indicators of socially relevant actions and emotional states could be explained in evolutionary terms as follows. Surely it is preferable for humans to risk erroneous inclusions into the space of social interactions rather than erroneous exclusions; if we mistake something for a social agent, the error can be corrected without incurring social repercussions, but if we fail to recognize a social agent as such, this would amount to a social offense.
- 20.
The suggested classification of forms of simulation also should prove useful in the discussion of design issues in social robotics. For example, we can use it to compare and evaluate robotic simulations of an action Aₚ in terms of degrees of simulation; here one might refine SIM-5 by introducing degrees of approximation; or it may be used to plan design goals, e.g., in order to decide whether there are any ethical reasons to aim for higher degrees of simulation than mimicking.
- 21.
Apart from Aristotle’s conceptual analysis of interactions (cf. Gill, 2004), the most important source and resource for a future ontology of interactions is the work of Mark Bickhard, who combines empirical and conceptual research to promote “interactivism” both as a paradigm for empirical research and as a comprehensive theoretical stance or metaphysical view; cf. e.g. Bickhard (2009a, 2009b) and his chapter in this volume.
- 22.
Since the two definitions are to be as generic as possible, no requirements for dynamic, temporal, or spatial relationships among the parts of an interaction have been added. That is, I am assuming here that the processes that are the parts of an interaction may occur all or partly simultaneously, or with overlap, or sequentially in series. But we can easily introduce types and subtypes of interactions i by specifying which temporal, spatial, and dynamic relationships need to hold among the parts of the interaction in question. To simplify the exposition I omit here and in the following definitions specifications of the partition levels at which the parts of Aₚ are situated; as mentioned above, the embedding ontological framework of General Process Theory operates with a non-transitive part relation and parts are indexed to partition levels.
- 23.
For an overview of current accounts see e.g. Setiya (2007).
- 24.
To keep all options open, I will here also assume that it is conceptually possible to entertain the thesis that future robots may be able to imitate or functionally replicate intending to do X.
- 25.
Abbreviations for the modes of simulation are used as names for occurrences that simulate the action in the relevant column.
- 26.
For example, surveying the set of asymmetric simulatory expansions of an interaction₂ one could investigate whether there are dependencies among expansions (“if this partial action is merely displayed, then that must be mimicked”, etc.) or, vice versa, one could try to identify certain clusters of asymmetric simulatory expansions (“this sort of simulation succeeds for all interactions₂ that are short term/in educational contexts/involve touch etc.”). Answers can be further tailored to technical possibility versus practical feasibility, or practical versus moral desirability.
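A survey of this kind can be sketched computationally. The dependency rule below (“if the first partial action is merely displayed, the second must be mimicked”) is a hypothetical example of the sort just quoted, not one drawn from the text, and the mode labels are my shorthand:

```python
from itertools import product

# Six modes of realizing a criterial process (my shorthand labels).
MODES = ["realizes", "approximates", "displays", "mimics", "imitates", "replicates"]

def satisfies_dependency(assignment):
    """Hypothetical constraint: if the first partial action is merely
    displayed, the second must be mimicked; otherwise no restriction."""
    if assignment[0] == "displays":
        return assignment[1] == "mimics"
    return True

# Enumerate all mode-assignments for two criterial processes and keep
# those that satisfy the dependency.
all_assignments = list(product(MODES, repeat=2))
admissible = [a for a in all_assignments if satisfies_dependency(a)]
print(len(all_assignments))  # 36 candidate expansions
print(len(admissible))       # 31 satisfy the dependency
```

Clusters of expansions ("succeeds for all short-term interactions", etc.) could be identified analogously by filtering with context predicates instead of mode dependencies.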
- 27.
- 28.
To call such preconscious, non-intended occurrences “non-agentive” may be problematic—one might agree with my criterion for agency above that if an occurrence is an action then it must be possible to intend it, but deny that the implication also holds in the other direction; all processes that occur as parts of an intended action are agentive, one might say, even though they are not intended. I must bracket this issue here.
- 29.
- 30.
Elsewhere (see Seibt 2018) I argue that for the purposes of attributing responsibility in a sufficiently differentiated fashion we need to distinguish between: (i) the second person point of view of the human interactor with the robot; (ii) the internal third person point of view of the roboticist who designs the interaction; (iii) the external third person point of view of the observer of a human-robot interaction; and (iv) the omniscient third person point of view of the cultural community evaluating the human-robot interaction and its effects on the surrounding context relative to the community’s norms and values.
- 31.
The conceptual tools of SISI are particularly basic (since the framework is grounded in a foundational ontology) yet precise and, due to its simple combinatorial strategies, highly expressive. It is therefore possible to translate into the classificatory framework of SISI other proposals of distinctions in capacities for moral agency (see e.g., Wallach & Allen, 2009), or in asymmetric forms of collective agency (see in particular the interesting contributions to Misselhorn, 2015); the details of these embeddings are yet to be worked out.
References
Bernstein, D., & Crowley, K. (2008). Searching for signs of intelligent life: An investigation of young children’s beliefs about robot intelligence. The Journal of the Learning Sciences, 17(2), 225–247.
Bickhard, M. H. (2009a). Interactivism: A manifesto. New Ideas in Psychology, 27, 85–89.
Bickhard, M. H. (2009b). The interactivist model. Synthese, 166, 547–591.
Björnsson, G. (2011). Joint responsibility with individual control. In Moral Responsibility Beyond Free Will and Determinism (Vol. 27, pp. 181–199). New York: Springer.
Blakemore, S.-J., Bristow, D., Bird, G., Frith, C., & Ward, J. (2005). Somatosensory activations during the observation of touch and a case of vision–touch synaesthesia. Brain, 128(7), 1571–1583.
Breazeal, C. (2002). Designing Sociable Robots. Cambridge, MA: MIT Press.
Breazeal, C. (2003). Towards sociable robots. Robotics and Autonomous Systems, 42, 167–175.
Cabibihan, J.-J., Javed, H., Ang, M. Jr., & Aljunied, S. M. (2013). Why robots? A survey on the roles and benefits of social robots in the therapy of children with autism. International Journal of Social Robotics, 5(4), 593–618.
Cerulo, K. A. (2009). Nonhumans in social interaction. Annual Review of Sociology, 35, 531–552.
Clark, H. H. (2008). Talking as if. In 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2008 (pp. 393–393). Piscataway: IEEE.
Coeckelbergh, M. (2010). Health care, capabilities, and AI assistive technologies. Ethical Theory and Moral Practice, 13(2), 181–190.
Coeckelbergh, M. (2012). Growing moral relations: Critique of moral status ascription. Houndmills, Basingstoke / New York: Palgrave Macmillan.
Danish Ethical Council. (2010). Sociale robotter. Udtalelse fra det etiske råd. Retrieved from http://www.etiskraad.dk/~/media/Etisk-Raad/Etiske-Temaer/Optimering-af-mennesket/Publikationer/Udtalelse-om-sociale-robotter.pdf. (Accessed 13 Sep 2016)
Dautenhahn, K. (2014). Human-Robot Interaction. In M. Soegaard & R. F. Dam (Eds.), The encyclopedia of human-computer interaction (2nd ed.). Aarhus/Denmark: The Interaction Design Foundation. (Available online: https://www.interaction-design.org/encyclopedia/human-robotinteraction.html)
Enfield, N. J., & Levinson, S. C. (Eds.). (2006). Roots of human sociality. New York: Berg Publishers.
Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A Survey of socially interactive robots. Robotics and Autonomous Systems, 42, 143–166.
Frith, C., & Frith, U. (2012). Mechanisms of metacognition. Annual Review of Psychology, 63, 287–313.
Gilbert, M. (2008). Social convention revisited. Topoi, 27(1–2), 5–16.
Gilbert, M. (2014). Joint commitment: How we make the social world. Oxford: Oxford University Press.
Gill, M. L. (2004). Aristotle’s distinction between change and activity. Axiomathes, 14(1), 3–22.
Goodwin, C. (2006). Human sociality as mutual orientation in a rich interactive environment: Multimodal utterances and pointing in aphasia. In N. J. Enfield & S. C. Levinson (Eds.), Roots of human sociality: Culture, cognition and interaction (pp. 97–125). Oxford/New York: Berg Publishers.
Gunkel, D. (2012). The machine question. Cambridge, MA: MIT Press.
Hakli, R. (2014). Social robots and social interaction. In J. Seibt, R. Hakli, & M. Nørskov (Eds.), Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014 (Vol. 273, pp. 105–115). Amsterdam: IOS Press.
Hutto, D. D. (2012). Folk psychological narratives: The sociocultural basis of understanding reasons. Cambridge, MA: MIT Press.
Kahn, P. H., Freier, N. G., Friedman, B., Severson, R. L., & Feldman, E. N. (2004). Social and moral relationships with robotic others? In 13th IEEE International Workshop on Robot and Human Interactive Communication, 2004 (ROMAN 2004). (pp. 545–550). Piscataway: IEEE.
Kahn, P. H., Friedman, B., Perez-Granados, D. R., & Freier, N. G. (2004). Robotic pets in the lives of preschool children. In CHI’04 Extended Abstracts on Human Factors in Computing Systems (pp. 1449–1452). New York: ACM Press.
Kalbe, E., Schlegel, M., Sack, A. T., Nowak, D. A., Dafotakis, M., Bangard, C., …Kessler, J. (2010). Dissociating cognitive from affective theory of mind: a TMS study. Cortex, 46(6), 769–780.
Laitinen, A. (2011). Recognition, acknowledgement, and acceptance. In H. Ikäheimo & A. Laitinen (Eds.), Recognition and social ontology (pp. 309–348). Leiden/Boston: Brill.
Leyzberg, D., Avrunin, E., Liu, J., & Scassellati, B. (2011). Robots that express emotion elicit better human teaching. In Proceedings of the 6th International Conference on Human-Robot Interaction (pp. 347–354).
Mameli, M. (2001). Mindreading, mindshaping, and evolution. Biology and Philosophy, 16(5), 595–626.
Misselhorn, C. (Ed.). (2015). Collective agency and cooperation in natural and artificial systems: Explanation, implementation and simulation (Vol. 122). Cham: Springer.
Petersson, B. (2013). Co-responsibility and causal involvement. Philosophia, 41(3), 847–866.
Reddy, V. (2008). How infants know minds. Cambridge, MA: Harvard University Press.
Samson, D., Apperly, I. A., Braithwaite, J. J., Andrews, B. J., & Bodley Scott, S. E. (2010). Seeing it their way: evidence for rapid and involuntary computation of what other people see. Journal of Experimental Psychology: Human Perception and Performance, 36(5), 1255.
Scassellati, B. (2002). Theory of mind for a humanoid robot. Autonomous Robots, 12(1), 13–24.
Schegloff, E. A. (2006). Interaction: The infrastructure for social institutions, the natural ecological niche for language, and the arena in which culture is enacted. In N. J. Enfield & S. C. Levinson (Eds.), Roots of human sociality: Culture, cognition and interaction (pp. 70–96). Oxford/New York: Berg Publishers.
Schilbach, L., Timmermans, B., Reddy, V., Costall, A., Bente, G., Schlicht, T., & Vogeley, K. (2013). Toward a second-person neuroscience. Behavioral and Brain Sciences, 36(4), 393–414.
Searle, J. R. (2010). Making the social world: The structure of human civilization. Oxford: Oxford University Press.
Seibt, J. (2005). General processes: A study in ontological category construction. Habilitation thesis, University of Konstanz, Germany.
Seibt, J. (2009). Forms of emergent interaction in general process theory. Synthese, 166(3), 479–512.
Seibt, J. (2014a). Non-transitive parthood, leveled mereology and the representation of emergent parts of processes. Grazer Philosophische Studien, 91, 165–191.
Seibt, J. (2014b). Varieties of the ‘as if’: Five ways to simulate an action. In J. Seibt, R. Hakli, & M. Nørskov (Eds.), Sociable robots and the future of social relations: Proceedings of Robo-Philosophy 2014 (Vol. 273, pp. 97–105). Amsterdam: IOS Press.
Seibt, J. (2018, forthcoming). The Ontology of Simulated Social Interaction–How to Attribute Sociality, Collective Agency, and Responsibility in Human-Robot Interaction. In J. Seibt, R. Hakli, & M. Nørskov (Eds.), Robophilosophy: Philosophy of, for, and by social robotics. MIT Press.
Setiya, K. (2007). Reasons without rationalism. Princeton: Princeton University Press.
Sharkey, A., & Sharkey, N. (2012). Granny and the robots: ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27–40.
Sparrow, L., & Sparrow, R. (2006). In the hands of machines? The future of aged care. Minds and Machines, 16(2), 141–161.
Sullins, J. P. (2008). Friends by design: A design philosophy for personal robotics technology. In P. Kroes, P. E. Vermaas, A. Light, & S. A. Moore (Eds.), Philosophy and design: From engineering to architecture (pp. 143–157). Dordrecht: Springer.
Tuomela, R. (2013). Social ontology: Collective intentionality and group agents. Oxford: Oxford University Press.
Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.
Vallor, S. (2011). Carebots and caregivers: Sustaining the ethical ideal of care in the 21st century. Philosophy & Technology, 24, 251–268.
Veruggio, G. (2006). The EURON roboethics roadmap. In 6th IEEE-RAS International Conference on Humanoid Robots, 2006 (pp. 612–617).
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
Walton, K. L. (1990). Mimesis as make-believe: On the foundations of the representational arts. Cambridge, MA: Harvard University Press.
Walton, K. L. (2005). Metaphor and prop oriented make-believe. In M. E. Kalderon (Ed.), Fictionalism in metaphysics (pp. 65–87). Oxford: Oxford University Press.
Copyright information
© 2017 Springer International Publishing AG
Seibt, J. (2017). Towards an Ontology of Simulated Social Interaction: Varieties of the “As If” for Robots and Humans. In: Hakli, R., Seibt, J. (eds) Sociality and Normativity for Robots. Studies in the Philosophy of Sociality. Springer, Cham. https://doi.org/10.1007/978-3-319-53133-5_2
Print ISBN: 978-3-319-53131-1
Online ISBN: 978-3-319-53133-5