1 Introduction

Seminal results of psychological and neuroscientific research have created a surge of interest in human body experience [5, 6]. Understanding how humans experience their bodies and extend their body representations to integrate tools [9, 10] is not only a fundamental challenge but also promises to advance numerous applications, especially in robotics [2, 24].

A striking example is the rubber hand illusion, which describes the embodiment of artificial body parts (or even non-anthropomorphic objects) by human individuals due to crossmodal integration of vision, touch, and proprioception [6]. Discussing this matter across the cognitive science and artificial intelligence domains requires terminological caution: “embodiment” is here understood to describe whether a human has a sense of ownership, location, and agency over an artifact [14], rather than an agent having a physical body as in “embodied” and situated cognition [25] or human-computer interaction [18]. Contemporary research discusses how technical means could shape this multisensory integration and how robotics can, in return, help to shed light on fundamental, human-related research questions [2, 19].

When artifacts are intelligent agents themselves, e.g., assistive robots, achieving spatial and temporal alignment between interaction strategies, human cognitive reasoning, and motor control towards common aims is a challenging task. This is particularly true if both agents, the human user and the machine, learn to adapt to each other, i.e., under mutual adaptation [2, 20, 26]. Taking the human side as an inspiration, e.g., via cognitive models, could help to achieve seamless interaction with assistive robots, i.e., a joint human-robot body experience, as well as to provide autonomous (humanoid) robots with human-like behavior generation [22].

This article summarizes the ideas and key findings of the author’s habilitation thesis [1] and discusses them from an artificial intelligence perspective. The interdisciplinary research questions addressed by the thesis and this article are:

  • How does human body experience relate to robot, control, and (haptic) interface design? How can this be considered in development?

  • Which human-in-the-loop experiments can help to empirically examine the users’ experiences?

  • Could human-like artificial body intelligence be realized? Would that be desirable?

Section 2 presents experimental approaches to probe the underpinning mechanisms of human body experience and explains how these mechanisms might be influenced and shaped by robot design and control. This is particularly important for the development of human-machine interfaces: Section 3 discusses results from the experiments of the thesis and puts forward design recommendations. Considering technical influence factors, Section 4 furthermore outlines the value and shortcomings of cognitive modeling in this regard and how artificial intelligence techniques might improve such models’ capabilities. Finally, Section 5 concludes the article by suggesting future directions towards endowing robots with an artificial body intelligence.

2 Probing Human Body Experience

As understanding if and how non-corporeal objects are embodied by humans is of high interest for fundamental psychological and neuroscientific research as well as for engineering applications, the last decades have brought up a multitude of experimental approaches. Rubber limb illusion paradigms play a key role in these endeavors: a rubber limb, e.g., a hand, is placed in sight of the participant while both the hidden real limb and the fake limb are haptically stimulated [5, 14]. Most participants experience the feeling of embodying the fake hand, which has been shown via objective and subjective assessment [6].

Based on these good prospects, interdisciplinary research started applying technical means to probe the complexity and plasticity of human bodily experience [1]. Two common ways to extend the experimental possibilities are the involvement of robotic devices and of virtual reality. Both have different benefits: virtual reality opens up a very broad space of investigation, whereas robotic devices potentially allow for direct transfer to technical implementation, e.g., in prosthetics or telerobotics [2]. Both approaches can turn existing psychological paradigms into interactive human-in-the-loop experiments. Those allow for the consideration of mutual adaptation if intelligent agents are implemented, and can feed back insights and new research questions from application into fundamental science [2, 19]. However, the design of such experiments is subject to challenging requirements, of which five design factors were shown to be of crucial importance: hiding the real limb, anatomical plausibility, visual appearance, temporal delay, and software-controlled experimental conditions [1].
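To illustrate the last two of these factors, the following minimal sketch shows how software-controlled experimental conditions can manipulate the temporal delay between tactile and visual stimulation in such a human-in-the-loop setup. All names and the I/O hooks are hypothetical placeholders rather than the actual experiment code of the thesis; a real setup would drive a haptic actuator and a robotic or virtual limb instead of printing.

```python
import time
from dataclasses import dataclass


@dataclass
class Condition:
    """One software-controlled experimental condition."""
    name: str
    visual_delay_s: float  # offset between tactile event and visual feedback


def apply_tactile_stroke(i: int) -> None:
    """Placeholder: would drive the haptic actuator stroking the hidden real limb."""
    pass


def show_visual_stroke(i: int) -> None:
    """Placeholder: would trigger the stroke on the artificial or virtual limb."""
    pass


def run_trial(condition: Condition, n_strokes: int = 5, stroke_interval_s: float = 1.0) -> None:
    """Deliver strokes and schedule the (possibly delayed) visual counterpart."""
    for i in range(n_strokes):
        t_touch = time.monotonic()
        apply_tactile_stroke(i)                # stimulate the hidden real limb
        time.sleep(condition.visual_delay_s)   # software-controlled temporal delay
        show_visual_stroke(i)                  # stimulate the visible fake limb
        elapsed = time.monotonic() - t_touch
        print(f"{condition.name}: stroke {i}, visuo-tactile offset ≈ {elapsed * 1000:.0f} ms")
        time.sleep(max(0.0, stroke_interval_s - elapsed))


if __name__ == "__main__":
    for cond in (Condition("synchronous", 0.0), Condition("asynchronous", 0.3)):
        run_trial(cond)
```

Keeping the delay as a single software parameter is what makes synchronous and asynchronous control conditions directly comparable within one and the same setup.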

3 Human-Machine Interfaces

The author’s habilitation thesis reports on robotic hand and leg illusion experiments as well as virtual hand illusion studies [1]. Several studies using such technical augmentations outlined how robot design and control influence embodiment and, particularly, the relevance of (haptic) human-machine interfaces: for the upper limbs, haptic and motor feedback were found to contribute similarly to embodiment [11]. Moreover, embodiment was shown to be an appropriate assessment metric to consider users’ experiences in interface design using human-in-the-loop approaches [8] and was outlined to be promising for the lower limbs as well [17]. It should be noted that non-instrumental aspects of haptic feedback, e.g., the mediation of affective and social information, might decisively contribute to device embodiment and should be considered in interface design [3].

Agency, which describes whether humans feel in control of their actions, can be seen as a subfactor of embodiment and seems suitable as an objective measure of task-appropriate and intuitive assistance [7]. This provides additional experimental approaches and metrics for human-centered robotics and interface design and underlines the potential of technically augmented psychological paradigms to improve human-robot interaction.

4 Cognitive Body Models

While the experimental evaluation of human-robot body experience can provide remarkable information for design, a broader and more fundamental understanding of the underpinning psychological effects is of scientific interest. This understanding, in turn, has technical potential if it can be captured with artificial intelligence techniques. In this respect, the application of cognitive models of body experience in robotics is being discussed [22]. Drawing on the latest results from cognitive science research, Bayesian or connectionist models, predictive coding, and cognitive architectures are promising routes to understand how humans experience their bodies and how we might consider that in robot design [1, 12].

If we manage to build models of human perceptual and cognitive processes with respect to body experience, this could enable various novel technical possibilities [1]. Thinking of assistive robots, e.g., prostheses, we could use cognitive models to probe whether the device is embodied or not and how control could be adapted to improve the users’ body experiences [22]. Still, a theoretical framework to explain limb embodiment is lacking, particularly when considering structurally varying bodies, e.g., in case of amputation [4]. Modular modeling frameworks combining bottom-up multisensory integration with cognitive reasoning and top-down adaptation, e.g., learning a person’s predispositions and experiences, could be an approach to represent body experience flexibly [4, 13]. Beyond applications in assistive robotics, this might also help to endow humanoid robots with human-like body representations, which could improve their interaction capabilities and provide them with more human-like action-perception versatility [22].

So far, Bayesian cognitive models of multimodal sensory processing in rubber limb illusion paradigms have been shown to roughly predict experimental outcomes [21, 23]. However, key issues of cognitive body experience models, i.e., limited accuracy, individualizability, and online capability, remain despite promising suggestions for methodical extensions [4, 13, 22].
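To make the flavor of such models concrete, the following sketch implements a generic Bayesian causal inference step of the kind these models build on: given a visual position cue from the fake hand and a proprioceptive cue from the real hand, it computes the posterior probability that both cues share a common cause. The one-dimensional setting and all parameter values are illustrative assumptions, not the settings of the cited studies.

```python
import numpy as np


def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))


def p_common_cause(x_vis, x_prop, sigma_vis=1.0, sigma_prop=3.0,
                   sigma_prior=10.0, prior_common=0.5):
    """Posterior probability that the visual (fake hand) and proprioceptive
    (real hand) position cues were generated by one common source."""
    s = np.linspace(-60.0, 60.0, 6001)   # candidate true hand positions (cm)
    ds = s[1] - s[0]
    prior_s = gauss(s, 0.0, sigma_prior)
    # Evidence under the common-cause hypothesis: both cues stem from the same s
    like_c1 = np.sum(gauss(x_vis, s, sigma_vis) * gauss(x_prop, s, sigma_prop) * prior_s) * ds
    # Evidence under the independent-causes hypothesis
    like_c2 = (np.sum(gauss(x_vis, s, sigma_vis) * prior_s) * ds *
               np.sum(gauss(x_prop, s, sigma_prop) * prior_s) * ds)
    return prior_common * like_c1 / (prior_common * like_c1 + (1.0 - prior_common) * like_c2)


# Small visuo-proprioceptive discrepancy favors the common-cause interpretation,
# consistent with illusory ownership; a large offset breaks the illusion.
print(p_common_cause(x_vis=2.0, x_prop=0.0))    # high posterior (≈ 0.7 with these settings)
print(p_common_cause(x_vis=25.0, x_prop=0.0))   # near zero
```

Such a posterior can then be related to ownership ratings or proprioceptive drift, which is, roughly, how the cited models are compared against rubber limb illusion data [21, 23].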

5 Towards Artificial Body Intelligence

The outlined potential of providing robots with their own body experience, or of making assistive devices develop a joint human-robot body experience with their users, might call for providing robots with a body intelligence. To mimic the flexibility of its human analog, a robotic body representation would need to plastically adapt to new environmental situations. This adaptive representation should account for surrounding (human) interaction partners, but also for structural changes of the robot’s own body. To this end, modular modeling frameworks could integrate task-specific algorithms for perceptual functions, sensory integration, and cognitive reasoning as outlined in Fig. 1. As suggested by Bliek et al. [4], a top-down modulation of the (sub)models’ prior knowledge through learning methods seems reasonable.

Fig. 1 presents a potential structure for artificial body intelligence based on the considerations of Bliek et al. [4]. A bottom-up path describes the processes of sensation and perception, multisensory integration, and cognition: multimodal sensory data is gathered and integrated into a common sensory representation [16] to update the situation-specific body experience. A top-down prior modulation could continuously update body and environment knowledge and might also be provided to update the memory on the cognitive level.

Fig. 1: Potential structure for artificial body intelligence combining a bottom-up path from sensation and perception, via multisensory integration, to cognition, which might receive top-down updating of priors and, possibly, cognitive memory
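As a rough illustration of how such a modular structure could be organized in software, the following sketch separates exchangeable bottom-up modules from a top-down hook that modulates their prior knowledge. All class names are hypothetical, and the reliability-weighted average is a deliberately simplified stand-in for a proper multisensory integration scheme [16]; this is a sketch of the idea behind Fig. 1, not an implementation from the cited work.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Percept:
    """Modality-specific estimate, e.g., of a limb position along one axis."""
    modality: str
    value: float
    reliability: float  # inverse-variance-like weight


class SensoryModule:
    """Bottom-up: turns raw modality-specific data into a percept."""
    def __init__(self, modality: str, reliability: float):
        self.modality, self.reliability = modality, reliability

    def perceive(self, raw: float) -> Percept:
        return Percept(self.modality, raw, self.reliability)


class IntegrationModule:
    """Bottom-up: fuses percepts into a common sensory representation
    (a reliability-weighted average as a simple stand-in)."""
    def integrate(self, percepts: List[Percept]) -> float:
        total = sum(p.reliability for p in percepts)
        return sum(p.value * p.reliability for p in percepts) / total


class CognitionModule:
    """Top of the bottom-up path: keeps the situation-specific body
    experience and a memory that top-down updates may also modulate."""
    def __init__(self):
        self.memory: List[float] = []

    def update(self, body_state: float) -> float:
        self.memory.append(body_state)
        return body_state


class BodyModel:
    """Modular body representation: exchangeable submodules plus a
    top-down hook modulating their prior knowledge."""
    def __init__(self):
        self.sensors: Dict[str, SensoryModule] = {
            "vision": SensoryModule("vision", reliability=1.0),
            "proprioception": SensoryModule("proprioception", reliability=0.3),
        }
        self.integration = IntegrationModule()
        self.cognition = CognitionModule()

    def bottom_up(self, raw: Dict[str, float]) -> float:
        percepts = [self.sensors[m].perceive(v) for m, v in raw.items()]
        return self.cognition.update(self.integration.integrate(percepts))

    def top_down(self, learned_reliability: Dict[str, float]) -> None:
        for modality, r in learned_reliability.items():
            self.sensors[modality].reliability = r


model = BodyModel()
print(model.bottom_up({"vision": 2.0, "proprioception": 0.0}))  # vision-dominated estimate
model.top_down({"vision": 0.2})  # e.g., learned distrust of visual feedback
print(model.bottom_up({"vision": 2.0, "proprioception": 0.0}))  # estimate shifts towards proprioception
```

The design choice illustrated here is that prior knowledge (the modalities’ reliabilities) lives inside the submodules but is writable from above, so learning methods can tune it without touching the bottom-up data flow.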

Aiming at a broader, potentially general, artificial body intelligence, one might consider not only adapting parameters but also making structural changes. Considering Marr’s levels of analysis [15, 27], such modifications could happen on the algorithmic and the computational level. While the former could mean exchanging submodels and subalgorithms for different tasks (suggestions are provided in Fig. 1), the latter would imply structurally adapting the topology of the body representation model, i.e., adding, removing, or rearranging modules, as sketched below.
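Continuing the hypothetical sketch from above (reusing its class names), the distinction could look as follows: an algorithmic-level modification exchanges a submodel while keeping the topology, whereas a computational-level modification changes the structure of the body representation itself.

```python
import statistics


# Algorithmic-level change: exchange the fusion submodel for a different algorithm,
# keeping the overall topology of the body representation intact.
class MedianIntegration(IntegrationModule):
    """Hypothetical robust (outlier-tolerant) fusion variant."""
    def integrate(self, percepts):
        return statistics.median(p.value for p in percepts)


model.integration = MedianIntegration()

# Computational-level change: rearrange the model structure itself, e.g., add a
# sensory module after the (robot) body has been physically extended.
model.sensors["tactile"] = SensoryModule("tactile", reliability=0.5)
print(model.bottom_up({"vision": 2.0, "proprioception": 0.0, "tactile": 1.0}))
```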

This might lead to an artificial body intelligence that mimics the flexibility of its human analog and could plastically adapt in case of structural alterations of the robot’s body [4]. Moreover, it could adapt to changes of the environment it is situated in and to human interaction partners. The experimental results presented in this article can guide the alignment of robot hardware and software accordingly. Whereas this article focuses on the perception-related part of the human action-perception loop, artificial agents, e.g., robots, would require not only a human-like body experience but also motor control [26]. To inform the required developments, future human-in-the-loop experiments will also need to consider ecologically valid scenarios and long-term observation in daily life [1].