Multi-Embodiment of Digital Humans in Virtual Reality for Assisting Human-Centered Ergonomics Design
We present a multi-embodiment interface aimed at assisting human-centered ergonomics design, where the design process is traditionally hindered by the need to recruit diverse users or by the use of disembodied simulations when designing for broad groups of the population. The multi-embodiment solution is to actively embody the user in the design and evaluation process in virtual reality while simultaneously superimposing an additional simulated virtual body on the user's own body. This superimposed body acts as the target and enables simultaneous anthropometric ergonomics evaluation for both the user's self and the target. Both the self and target virtual bodies are generated using digital human modeling from statistical data; the self-body is animated by motion capture, while the target body is moved using a weighted inverse kinematics approach with end effectors on the hands and feet. We conducted user studies evaluating human ergonomics design in five scenarios in virtual reality, comparing multi-embodiment with single embodiment. Similar evaluations were conducted again in the physical environment after the virtual reality evaluations to explore the post-VR influence of the different virtual experiences.
Keywords: Multi-embodiment · Embodied interaction · Ergonomics evaluation · Digital human
Our human body is the interface between ourselves and the world, through which we take in perceptual information, make cognitive decisions, and perform actions on the basis of our understanding of our own body . However, because each individual is endowed with a unique body, that body may become a barrier to comprehending the bodily capabilities of a different individual .
This barrier complicates a situation closely related to our everyday lives: the design of the products and environments around us . It is often desirable for designers and engineers to create products that accommodate most groups of the population, as proper ergonomics considerations bring benefits such as increased efficiency, comfort, and safety within the environment . For example, there is evidence of how different users' anthropometry influences product design in furniture . The challenge in designing is therefore accounting for a diverse population with varying physical bodies.
Our vision is therefore a hybrid approach, in which a body from the diverse population is simulated as a digital human model (DHM) and embodied by the user as a superposition, while the user's original body embodiment is retained. We present a VR multi-embodiment interface aimed at assisting the ergonomics design process by taking different bodies' anthropometry into account (Fig. 1). We consider multi-embodiment (ME) an augmentation, as the user is augmented with an extra body that attempts the same action as the user in real time, but in an ergonomically optimized manner. ME differs from single embodiment (SE), where the user embodies only one body, either the self or an altered target simulation (which we call alteration). We envision that this underlying difference of augmentation versus alteration, where in augmentation our body is maintained across the physical environment and VR, would enable the use of our body as "the body of reference" for other bodies in the physical environment, even in their absence.
We are therefore intrigued to explore whether the augmentation approach of multi-embodiment could 1) assist ergonomics design to the same extent as, or beyond, alteration approaches and 2) generalize well after VR exposure, where the user makes ergonomics judgments in the physical environment for another person's absent body.
A multi-embodiment body interface that takes the approach of augmentation for assisting ergonomics design.
An illustrative application addressing reachability and accessibility, issues that vary across ergonomically different people.
A user study comparing our approach with the conventional alteration approach for ergonomics design in VR.
An exploration of augmented perception as a post-VR influence, using our unaltered body as the body of reference for ergonomics evaluation of other bodies in the physical environment.
Our work is related to the following research areas: 1) using body for affordance judgment, 2) perception of affordance in VR, 3) virtual assisted ergonomics design, and 4) augmented body image for training.
Body for Affordance Judgment
Our human body not only accounts for how we physically interact with our world but also constitutes the basis of how we define it. In recent years, this conception of the relationship between the body and the world has been formulated as embodied cognition . Based on this perception of the world and our understanding of our body, we construct our judgment of affordance , the properties of the world that afford being acted upon, of our surrounding environment.
From the standpoint of embodied perception, our understanding of our body morphology influences our perceived affordances . Early studies have shown that humans judge the affordance of a climbable stair differently based on their height and leg length , judge passability through apertures based on body size , and judge sittability based on leg length . Changing the morphological parameters of our body therefore affects our perceived size of the world [16, 44] and hence affordances, such as walking under barriers with an altered body height .
These findings show promising evidence that our body plays a major role in our perception of the affordances of the environment. Moreover, it is suggested that we can deduce the affordances of an observed body different from our own . Our research builds on this conception that the body is a factor in affordance and creates an interface that augments ourselves with multi-embodiment to assist affordance judgment in VR.
Affordance Perception in VR
VR provides an appropriate pipeline for studying changes in affordance perception due to changing environmental or bodily perceptual information, as VR allows us to manipulate our perceptual cues . In VR, it is relatively easier than in the physical environment to alter our body morphology, such as hands , feet , body size , and body height , so that we perceive the affordances of graspable objects, crossable gaps or apertures, or the action decision of whether to duck under or step over a pole.
However, a particular concern has been that spatial perception in VR is compressed compared to the real world . This underestimation could be due to the measurement method, technical factors of the head-mounted display (HMD), or compositional factors such as the degree to which the virtual world replicates the real one . One approach to reducing this margin of error is introducing an embodied avatar in the VE [27, 34]. Furthermore, by involving embodied action, affordance judgment is further improved [20, 27].
From this research, we can see the potential of VR as a platform for affordance judgment, with the benefit of agile prototyping of changing body morphology and the VE. VR can therefore be an ideal approach for affordance-based design  of ergonomically efficient environments, which we address through our multi-embodied system.
Virtual Assisted Ergonomics Design
Historically, ergonomics design consisted of physical fitting trials, often involving ergonomics experts and a diversity of test users , but these could be time-consuming. There has thus been an increase in extending ergonomics design to computer-aided design (CAD) and DHM methods due to the ease of virtual prototyping  and virtual fitting trials . However, this disembodied approach can struggle to provide an accurate human simulation, as the movements are programmed , and there is concern about detachment between the designer and the users represented by virtual agents , where emotional detachment could hinder accurate design .
Embodied, interactive VR has become an emerging platform for ergonomics design. Pontonnier et al.  investigated the difference between ergonomics evaluation in the physical environment and in VR, with results suggesting that although VR is slightly inferior to the physical environment, the difference is insignificant and the potential of VR is greater. VR is widely utilized in manufacturing  and industrial workstation  design and usability evaluation. It is also gaining attention in universal design, for evaluation against a target user group [18, 48]. To evaluate against different ergonomic bodies toward a universal design goal, these studies usually employed diverse users from the population or simulated the perceptual information, so that a general user embodies the body of the target population.
In this research, we take a different approach: our perceptual information is not altered to embody the target. Rather, the target's body is augmented upon our own body, so we employ a multi-embodiment interface.
Augmented Body Image
With VR, it is relatively easy to alter and augment our body image while still feeling body ownership (e.g., ), which may also influence our cognitive and behavioral processes. Particularly closely related to our research is the augmentation of the body image with extra bodies or limbs. Augmenting our visual sensation with extra body images, either displaced or co-located, has been utilized for action learning. YouMove is an AR mirror that superimposes an abstract stick body to assist in dance training . Han et al.  developed AR-Arm, superimposing extra hands as indicators to train the user's correct hand movements for learning Tai-chi. Yan et al.  took an out-of-body approach, showing both the instructor's and the user's body images.
While augmenting extra body images has focused on action learning, our research focuses on spatial perception with the augmented body image. This is plausible because, as discussed, the body is a reference for affordance and therefore for ergonomics design. Furthermore, our embodied approach to the augmented extra body could strengthen the connection between the self-body and the augmented body.
We developed a multi-embodiment interface that superimposes extra virtual bodies on the user's own body in VR, so that the user embodies more than one body, with the goal of assisting ergonomics evaluation and design in a VE. We do so by generating the extra body with DHM from population statistical data. The extra body's movement in VR is calculated by weighted inverse kinematics, with end effectors specified on the user's two hands and feet. In addition, human bone joint constraints can be specified to limit the movement capacity of the extra body, e.g., imposing joint constraints on the lower limbs to simulate a wheelchair occupant's body.
Understanding another person's bodily information, e.g., anthropometric dimensions and muscle strength, is essential in understanding how to design and develop products for that person. With VR, we have recently seen various approaches to stimulating users so that they feel as though they embody a different body. We can then make ergonomics judgments using this different virtual body in VR. However, this approach of transitioning ourselves into a different body, which completely alters our perceptual information, can be problematic when we consider the post-VR, everyday-life application of this approach in the physical environment.
In the physical environment, our perceptual information is not altered, and we are embodied in the body that has accompanied us for many years; therefore, even after experiencing and understanding another person's body in VR, there is a possible concern that the altered experience in VR may be "overwritten" as we gradually revert to our original perceptual information in the physical environment. This is therefore a barrier to expanding VR ergonomics evaluation into our everyday lives.
Our approach is therefore to augment our perceptual information rather than completely altering it to that of a different body. This augmentation is the multi-embodiment interface, where users maintain their original perceptual information and body but are augmented with extra bodies that move and interact with the environment along with them. The system automatically handles the movement simulation of the extra body in relation to the user, so the user can interact naturally in VR with their original body. This way, the user possesses a common reference point between the physical and virtual environments, i.e., the user's own body. We envision that through "using our body as the reference," the augmentation experienced in VR could persist in the physical environment, so that we may "remember" the different body's ergonomics information, e.g., reachability, in the physical environment even without any augmentation.
Embodying Digital Humans
It is widely known that presenting a virtual avatar for users to embody has multi-dimensional benefits for the overall experience . In most current VR experiences, the focus has been on enabling the agency of the avatar, while the avatar may not closely represent the user's dimensions. This approach is suitable for most situations, as the sense of agency can induce a strong sense of ownership even for relatively abstract avatars . However, for applying VR to ergonomics design, proper anthropometric representation is crucial. As aforementioned, each individual has different anthropometric factors such as size and shape, which influence our affordance and ergonomics judgments. Therefore, in our system, users are embodied in digital humans that more closely represent their own anthropometric factors.
Generating Self-Digital Human
The user's digital human avatar is generated with an implementation of "Dhaiba" . Dhaiba is capable of generating detailed, customized human models based on each individual's measurements and an anthropometric dimensions database, accounting for the generalized user population and enabling agile prototyping. In our system, we specify the user's height and weight scale, and Dhaiba constructs a generalized DHM from the anthropometric dimension database.
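The mapping from a handful of user measurements to a full set of body dimensions can be sketched as a regression against a statistical database. The segment names and coefficients below are hypothetical placeholders for illustration only, not Dhaiba's actual model (which also uses weight and produces a full mesh):

```python
# Hypothetical per-segment regression coefficients (intercept, slope per cm
# of stature), standing in for a statistical anthropometric database.
SEGMENT_MODEL = {
    "upper_arm": (2.0, 0.17),  # length_cm = a + b * stature_cm
    "forearm":   (1.5, 0.14),
    "thigh":     (4.0, 0.24),
    "shank":     (3.0, 0.22),
}

def estimate_segments(stature_cm):
    """Estimate body-segment lengths (cm) for a given stature."""
    return {name: a + b * stature_cm for name, (a, b) in SEGMENT_MODEL.items()}

self_dhm = estimate_segments(173.1)  # mean participant height in the study
kid_dhm = estimate_segments(110.0)   # a shorter target body
```

The same function generates both the self and a target body from different statures, which is what allows agile prototyping of diverse DHMs.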
A static DHM is only part of the embodiment; we also need user agency to strengthen the sense of embodiment. To achieve embodied visuomotor correlation between the DHM in the VE and the user's actual body movement, we employ full-body motion capture of the user's movement. The user wears a motion capture suit, and the captured marker positions are streamed into our software to animate the DHM.
The DHM is divided into two modules: the skin surface mesh generated from the anthropometric data discussed in the previous section, and the armature link module, which defines the skeletal joints of the digital human. Inverse kinematics computation from the captured markers updates the rotation of each digital human joint in synchrony with the user. The skin surface mesh is then computed as the linearly weighted sum of the joint movements according to the Skeletal Subspace Deformation algorithm , enabling visuomotor correlation as the user moves.
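The Skeletal Subspace Deformation step (linear blend skinning) can be sketched as follows: each deformed vertex is the skinning-weighted sum of that vertex transformed by every influencing joint. This is a minimal NumPy sketch, not the production implementation:

```python
import numpy as np

def skin_vertices(rest_verts, weights, joint_transforms):
    """Skeletal Subspace Deformation (linear blend skinning).

    rest_verts:       (n, 3) vertex positions in the rest pose
    weights:          (n, j) skinning weights, each row summing to 1
    joint_transforms: (j, 4, 4) world transform of each joint relative
                      to its rest pose
    """
    n = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((n, 1))])             # (n, 4)
    # Blend the joint transforms per vertex: sum_j w[i, j] * M[j]
    blended = np.einsum("ij,jkl->ikl", weights, joint_transforms)  # (n, 4, 4)
    deformed = np.einsum("ikl,il->ik", blended, homo)           # (n, 4)
    return deformed[:, :3]
```

With identity joint transforms the mesh stays in its rest pose; as the IK updates each joint's transform per frame, the skin follows, producing the visuomotor correlation described above.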
In the multi-embodiment interface, in addition to embodying the self-DHM as described in the previous sections, an extra DHM is superimposed on the user. The extra DHM is generated using the same method as the self-DHM but specified with different anthropometric data to simulate the multi-embodiment of two different bodies. In addition, the movement of the extra DHM is calculated differently from that of the self.
Inverse Kinematics of the Superimposed DHM
To calculate the posture of the superimposed DHM, we use a weighted inverse kinematics (IK) method, defining reaching end effectors on the superimposed DHM's hands and feet. The end effectors move together with the user's motion-captured hands and feet (three on each hand and foot). The feet end effectors have larger weights to keep the DHM from floating (Fig. 4 left; the taller DHM is the user's self). Additional joint constraints can be applied to the DHM to constrain the IK calculation if desired. For example, for the wheelchair DHM discussed later in the user study, the joints of the lower limbs are constrained (Fig. 4 right).
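One step of such a weighted IK solve can be sketched as damped least squares, with each end-effector task's rows scaled by its weight so that the feet tasks dominate. This is a simplified sketch under assumed parameters; the task weights and damping value are illustrative, not the system's actual settings:

```python
import numpy as np

def weighted_ik_step(q, jacobians, errors, task_weights, damping=1e-2):
    """One damped-least-squares step of weighted multi-task IK.

    q:            (n,) current joint angles
    jacobians:    list of (m_i, n) task Jacobians (hand and foot effectors)
    errors:       list of (m_i,) position errors (target - current)
    task_weights: per-task scalars; feet > hands keeps the DHM grounded
    """
    rows = [j.shape[0] for j in jacobians]
    J = np.vstack(jacobians)
    e = np.concatenate(errors)
    W = np.diag(np.repeat(task_weights, rows))  # scale each task's rows
    JW, eW = W @ J, W @ e
    n = J.shape[1]
    # Damped normal equations: (J^T W^2 J + lambda I) dq = J^T W^2 e
    dq = np.linalg.solve(JW.T @ JW + damping * np.eye(n), JW.T @ eW)
    return q + dq
```

When tasks conflict, the damping and the weight ratio decide the compromise: a heavily weighted foot error is tracked almost exactly, while a lightly weighted hand error may be only partially satisfied in that step.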
Real-Time Modification of Objects in VR
We opted for an embodied scaling and translation approach, where the user's embodied arm movements can modify the width and height of objects (Fig. 5) or translate them. This is triggered through the Oculus Remote controller, with the user's left-hand position tracked by the OptiTrack cameras.
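A minimal sketch of this embodied scaling: while the trigger is held, the tracked hand's displacement from the latched grab position is added to the object's dimension on the chosen axis. The function and field names here are illustrative assumptions, not the system's actual API:

```python
def scale_object(size, grab_pos, hand_pos, axis):
    """Add the tracked hand's displacement along `axis` ('x' for width,
    'y' for height, in meters) to the object's dimension on that axis,
    clamped to stay positive (5 cm floor, an illustrative choice)."""
    delta = hand_pos[axis] - grab_pos[axis]
    new_size = dict(size)
    new_size[axis] = max(0.05, size[axis] + delta)
    return new_size

# While the trigger is held, call each frame with the latched grab position:
shelf = {"x": 0.8, "y": 1.2}
shelf = scale_object(shelf, {"x": 0.0, "y": 1.0}, {"x": 0.0, "y": 0.8}, "y")
```

Lowering the hand by 20 cm here lowers the shelf height by the same amount, so the mapping stays directly embodied rather than mediated by a menu.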
We conducted a user study in two phases to validate the multi-embodiment interface. Our main goals were to study 1) how the multi-embodiment (ME) interface compares to single embodiment (SE) in assisting ergonomics design in VR and 2) whether exposure to VR multi-embodiment affects post-VR evaluation in the physical environment without augmentation. We observed the accuracy of the ergonomic judgments and the completion time in comparing ME and SE.
We recruited 8 male participants, aged between 22 and 31 years (AVG: 24.75, SD: 3.06), with heights between 163 and 184 centimeters (AVG: 173.1, SD: 7.0). All participants took part in both phases of the study, completing user study 1 first, followed by user study 2.
For the current studies, our focus in ergonomics design is on reachability and accessibility, which are greatly influenced by human anthropometry. To provide a common evaluation target for all participants, we defined two targets (Fig. 7). The first is a five-year-old, 110-cm-tall (population statistical mean) kid. The second is a 160-cm-tall wheelchair occupant, with a seated height of 135.6 cm, an eye level of 124.7 cm (within one SD ), and a wheelchair width of 84.9 cm.
The kid was evaluated for reachability, while the wheelchair occupant was evaluated for accessibility. The participants' task was to evaluate and, in real time, scale or move several pieces of furniture, with the goal of designing one dimension for each piece that is usable when both the participant's self and the target are taken into consideration, e.g., designing one refrigerator that is reachable by the kid but not so low as to strain the back of an adult user.
Study 1: VR Evaluation
Our first study was conducted in a fully virtual environment with motion-captured participants. Each participant completed 10 trials: 5 target scenarios (4 kid + 1 wheelchair) \(\times\) 2 conditions (ME/SE). Since the same five scenarios were used for both conditions in a within-subject design, we conducted the two conditions on two different days to counter some of the carryover effects.
In the ME condition, the user's self-DHM is motion-captured, and the target DHM (kid or wheelchair occupant) is superimposed with postures estimated by weighted IK. For the wheelchair occupant DHM, joint constraints were placed on the DHM's lower limbs to exclude them from the IK simulation, and the simulated wheelchair moves in accordance with the participant's captured mass center (pelvis), i.e., as the participant walks in the VE, the wheelchair moves in synchronization. In the SE condition, rather than multi-embodiment, the participant alternates between the self-DHM and the target kid DHM using the Oculus Remote controller, and the participant's motion capture directly controls the currently embodied DHM. For the SE wheelchair scenario, the participants were required to sit in and move around in a wheelchair to simulate the embodiment of the virtual wheelchair DHM. However, due to safety concerns raised by alternating between sitting as a wheelchair occupant and standing as the self while wearing an HMD, and because the minimal accessible interval for both self and wheelchair is simply the larger of the two, the participants were embodied as the wheelchair occupant the whole time, instead of alternating, for the accessibility evaluation.
For each trial, the participants were instructed to evaluate and design the height of the target furniture or the interval for accessibility (by scaling or moving them with embodied movement) so that it is easily reachable or accessible by both the self and the target body. Each trial was completed when the participants were satisfied with their design and signaled the experimenter, at which point we measured the completion time. To validate the accuracy of their design, the deviation of the designed height from an optimal design (detailed in a following section) was compared.
Study 2: Post-VR Physical Environment Evaluation
A potential drawback of SE in VR is that, as SE embodies the user in a different body in VR, when the user returns to the real environment, the difference between his own body and the embodied one, in relation to the environment, may override the experience gained in VR. Concretely, while a person may successfully make an ergonomics evaluation for a kid in VR using SE (having become shorter in the embodied kid body), he may have difficulty making the same evaluation for the kid in the physical environment, as he does not embody a kid's body in reality.
In comparison, one envisioned benefit of ME is that, since it maintains our original body across VR and the physical environment and augments it with an additional body referenced to the user's own, we can use this to our benefit to perceive the relation between our body and the target body. If this relation can be learned, then it may be possible that, as we move in the physical environment, we can imagine the superimposed body as learned in VR and thereby make reachability and accessibility judgments for others using just our own body (without any augmentation). This post-VR evaluation in the physical environment is the aim of the second user study.
Immediately after the VR user study (on each day), the participants conducted the evaluations for the same scenarios, but this time in the physical environment. As the physical furniture could not be altered easily, the participants were instructed to use their hands to signal their estimated optimal design, and the hands' 3D positions were recorded as the participant's preferred modification height or interval. The same objective measurements of completion time and design deviation were recorded.
A total of 20 trials was conducted per participant ((4 reachability + 1 accessibility) \(\times\) 2 (VR/post-VR) \(\times\) 2 (ME/SE)). We collected the completion time of each trial and the final altered dimension that the participants deemed appropriate for both the self and the target. Four VR trials (2 ME, 2 SE) were unretrievable due to corrupted data files, leaving a total of 76 VR trials and 80 post-VR trials.
VR Completion Time
We observed that the completion time in ME trials was significantly lower than in SE (Fig. 9). An ANOVA revealed a significant main effect of the test condition (ME/SE) on completion time (F(1, 78) = 8.79, p < 0.01). During our observation of the participants, we noticed that ME trials were completed noticeably faster owing to the participants' ability to see both their own reachability and the target's at the same time, which may have helped them reach their decided dimension faster. On the other hand, during SE trials, many participants seemed unable to make up their minds on the compromise optimal dimension between the self and the target. For example, participants repeatedly scaled the target furniture up and down as they transitioned between the self and the kid. During the interview after both ME/SE experiments, three participants noted that they definitely felt ME was effective in helping them reach a decision faster, although one participant noted that while ME might be faster, SE might yield better accuracy (which we examine in a following section).
Physical Environment Completion Time
To explore whether there is a difference in real-world application after exposure to different VR training methods, we conducted physical environment trials after the VR trials and recorded completion time. The real-environment trials were noticeably faster than the VR trials, with an average trial completion time of 19.7 s compared to 56 s in VR. This is foreseeable given factors such as participants' unfamiliarity with the VR experience, or that the dimensions could actually be altered in VR. However, although we observed that participants' evaluation approaches differed, e.g., crouching down in the real environment to simulate the kid's eye level after exposure to SE, or remaining standing after exposure to ME, the ANOVA showed no significant effect of condition on physical environment completion time (Fig. 9).
Although we found that ME was significantly faster than SE in the VR trials, we had concerns that it might come with lower accuracy relative to the optimal dimension, due to shorter decision time and the lack of actually taking the target's perspective. We therefore measured the participant-designed dimension of each trial and compared it to the optimal dimension.
First, we describe how we reached our definition of the optimal dimension used in the user study. For reachability, we define the optimal compromise dimension as the reaching height that requires the least combined whole-body joint torque for both the participant's self and the target kid. The reason is that minimizing joint torque in turn minimizes musculoskeletal biomechanical stress, which is ideal for bodily comfort .
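As a toy illustration of this compromise criterion, suppose each body's whole-body torque cost is replaced by a quadratic penalty around its own comfortable reach height (a hypothetical stand-in for the full musculoskeletal computation, which we do not reproduce here); the optimal height then minimizes the summed cost:

```python
import numpy as np

def torque_proxy(reach_h, comfy_h):
    """Hypothetical stand-in for the combined joint-torque computation:
    stress grows quadratically with deviation from the body's
    comfortable reach height (meters)."""
    return (reach_h - comfy_h) ** 2

def optimal_height(comfy_self, comfy_target, lo=0.3, hi=2.0):
    """Grid-search the reach height minimizing the summed cost of both bodies."""
    heights = np.linspace(lo, hi, 341)  # 5 mm grid
    cost = torque_proxy(heights, comfy_self) + torque_proxy(heights, comfy_target)
    return float(heights[np.argmin(cost)])

# e.g., an adult comfortable at 1.4 m and a kid at 0.8 m compromise near 1.1 m
```

With symmetric quadratic costs the compromise lands midway between the two comfortable heights; the actual joint-torque model is asymmetric (overhead reaching and deep crouching stress the body differently), so the real optimum need not be the midpoint.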
For accessibility, we define the optimal spatial width between the shelf and the table as the recommended pass-through width for wheelchairs . As our wheelchair is wider than the one in the guideline, our scaled optimal width is 94.9 cm.
Physical Environment Dimension
To measure whether the ME/SE conditions might influence post-VR evaluation in the physical environment, where in our everyday lives we have no augmentation, the physical environment dimension evaluations were conducted after the VR experiments for each condition. We observed again that the majority of participants' reachability evaluations were lower than optimal, regardless of condition (Fig. 11). However, we did not observe a significant main effect of either ME or SE on the real-world evaluation, similar to the VR study.
VR Physical Deviation
In the general feedback, three users expressed a definite inclination toward multi-embodiment. Positive feedback included preferring to see both the self and the target simultaneously without needing to transition; one participant mentioned that he felt he forgot the dimension for the other body every time he transitioned. In particular, the majority of participants preferred ME in the accessibility scenario, where they could simply walk around while the embodied occupant followed. On the other hand, three participants expressed a clear preference for SE, indicating that they enjoyed being transitioned into the kid's body with a different eye level, or that sitting in a wheelchair made them more confident in their accessibility judgments. One participant suggested a hybrid between ME and SE, as sometimes during ME he felt the target body occluded his view of his own body.
Physical Environment Evaluation Observations
One of the most intriguing observations we made during the two user studies is how the condition in VR, either ME or SE, influenced how participants approached the subsequent physical environment evaluation.
Three of the four participants who started with the SE condition, when asked to make the evaluations again in the physical environment for the kid target they had experienced in VR, immediately asked the experimenter for the height of the kid (which we declined to provide due to the nature of the experiment). All participants then began to crouch down, trying to simulate different heights and test for reachability. When asked about their strategy in the post-experiment interview, they noted that they were trying to recreate what the kid would see.
On the other hand, none of the four participants who started with the ME condition asked the experimenter for the height of the target kid. Three of them did not adjust their height levels during the entire real-world evaluation. One participant crouched down, but noted that he was testing the lower reachability rather than simulating the kid, as he had also crouched during the VR trials. The participants described that they made their judgments by trying to "imagine" where and how the kid would reach with respect to their own arm movements.
Body as Reference in Physical Environment
First, SE requires altering our perceptual information and action capabilities to simulate those of the target. However, this can be difficult for typical users simulating certain targets, especially when there is a discrepancy between the user's and the target's action capabilities, e.g., when embodying a physically disabled or slower-moving target. ME is unhindered by this limitation, as the user can simply move normally while the targets are simulated accurately by the computer.
Furthermore, we speculate from an action-perception affordance view  that the perceived affordance change in SE could be temporary, lasting only during the alteration; that is, when the user returns to the original body, the original action capability overrides what was experienced in SE, and the SE approach may therefore not transfer well to the real environment. As the action capabilities of the users are consistent in ME across the VE and the physical environment, users can use them as a reference to evaluate other targets' action capabilities.
Although we did not observe a significant difference in designed dimension between ME and SE in our experiment, we speculate this could stem from the brief exposure time (ME: 3 min 52 s, SE: 5 min 28 s) or from the within-subject procedure biasing the physical environment evaluations, as the experimental scenarios (furniture) were the same. Nevertheless, we did find that the deviation between the VR and real evaluations was significantly smaller in ME than in SE. Since we also found that ME performs as well as SE in VR (no significant difference), with increased exposure and accuracy training in ME in VR, we may begin to see a greater difference between ME and SE in physical environment ergonomics evaluation.
Limitation and Future Work
Our system introduced a multi-embodiment interface with the goal of assisting simultaneous evaluations of different bodies and enhancing post-VR awareness. A noticeable limitation of the current implementation is that the IK pulls the target body forward when the user reaches far, and as the target bodies used in the user studies were shorter than the participants, occlusion by the target body is prone to happen. Possible solutions include constraining the torso joint angles, or the center of mass, within a margin of the user's torso to reduce occlusion. Also, although IK may suffice for an initial exploration of simulating the superimposed body, a more ergonomically grounded approach, such as taking the range of motion or joint torques into consideration when simulating the body's motion, could be a promising next step for the multi-embodiment interface.
One interesting potential yet to be explored in the current study is superimposing various bodies that can be simulated precisely with the help of computing systems, which is a clear merit of the multi-embodiment interface. Imagine evaluating the muscle strength or muscle reaction time of different individuals, which would be difficult with single embodiment unless we utilized external exoskeletons to hinder our original muscles. Although our focus has been on anthropometry, multi-embodiment could also correctly simulate different muscle strengths, so that we could use our body's biomechanical factors as the reference for others.
Moreover, while the current study applied multi-embodiment in VR and extended it to post-VR evaluations, an AR approach of multi-embodying in the real environment could also be beneficial. The benefit of the AR approach is that users would not have to "remember" the VR experience for the real-world evaluation and could instead vividly see the different body's affordances.
Conclusion
The multi-embodiment interface is a system that superimposes extra DHMs on the person, where the superimposed DHM is moved by inverse kinematics calculation. The DHMs are generated from statistical population data to give a more accurate anthropometrical representation (which was critical given the focus of the current study) for anthropometrical ergonomics evaluations. The benefits of ME include (1) providing perceptual information that is more consistent between the virtual and the real, and therefore a better tendency to elicit awareness in the post-VR real world, and (2) utilizing computer simulations so that we can more easily embody targets that are otherwise difficult to embody.
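The inverse-kinematics simulation of the superimposed DHM can be illustrated with a damped-least-squares step on a planar two-link chain. This is a minimal stand-in for the paper's weighted full-body IK with hand and foot end effectors; the function name, link lengths, and damping value are assumptions for illustration only.

```python
import numpy as np

def dls_ik_step(theta, target, lengths, damping=0.05):
    """One damped-least-squares IK step for a planar 2-link chain
    (an illustrative stand-in for weighted full-body IK)."""
    t1, t2 = theta
    l1, l2 = lengths
    # Forward kinematics: end-effector position.
    p = np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                  l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])
    # Jacobian of the end-effector position w.r.t. the joint angles.
    J = np.array([[-l1 * np.sin(t1) - l2 * np.sin(t1 + t2), -l2 * np.sin(t1 + t2)],
                  [ l1 * np.cos(t1) + l2 * np.cos(t1 + t2),  l2 * np.cos(t1 + t2)]])
    e = target - p
    # Damped least squares: dtheta = J^T (J J^T + lambda^2 I)^-1 e.
    dtheta = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), e)
    return theta + dtheta
```

Iterating this step drives the end effector toward a reachable target; in the full system, per-joint weights on such a solver would distribute motion across the DHM's body while the hands and feet track the user's tracked end effectors.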
We conducted user studies to investigate whether the multi-embodiment interface could enable users to conduct reachability and accessibility evaluations in VR, as well as extending them to the physical environment, in comparison with the more conventional method of single embodiment. We observed that multi-embodiment was significantly faster than single embodiment, with no significant difference in the resulting evaluations. However, we observed no significant difference between multi-embodiment and single embodiment in the post-VR evaluation in the user study. Nonetheless, we observed a significant main effect in the correlation between the VR evaluation and the post-VR evaluation in multi-embodiment, which shows its potential over single embodiment by using our own body as the reference in both VR and the physical environment. We also observed the interesting phenomenon of participants changing their methods of physical environment evaluation based on their VR experience. Future directions of this research include multi-embodying diversified DHMs, as well as investigating its potential in AR scenarios.
Compliance with Ethical Standards
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.