1 Introduction

Robot navigation is one of the most important functions for human-symbiotic autonomous mobile robots that are expected to provide various services such as logistics [1], housework for elderly or handicapped people [2, 3], personal assistance in an office [4], and attending to a person in a museum or airport lobby [5]. Such mobile robots require the ability to move safely and with reasonable efficiency in arbitrary human-coexisting environments, which include relatively open environments, e.g., a city square (Fig. 1a); dynamically fluid environments, e.g., a station concourse (Fig. 1b); high-density stationary environments, e.g., an eating area at a conference [6] (Fig. 1c); and quick and rapid path-change environments, e.g., a supermarket [7] (Fig. 1d).

Fig. 1

Human-coexisting environments. a Relatively open environment: city square. b Dynamically fluid environment: station concourse. c High-density stationary environment: eating area at a conference. d Quick and rapid path-change environment: supermarket

The general purpose of robot navigation is to find a collision-free path from a start to a goal (a subgoal in the context of global path planning) for a robot in a workspace containing multiple humans and obstacles. To achieve this, many researchers have studied path-planning methods, whose mathematical approaches include the dynamic window approach [8], the A-star algorithm [9], and rapidly-exploring random trees (RRT) [10] for more efficient and robust path planning. Many revised methods have been proposed to address issues such as reducing calculation time [11] and making path planning more robust [12]. Researchers have also tried to build accurate human-motion models using large amounts of data [13] and path-estimation models based on long short-term memory (LSTM) [14]. Currently, many researchers study human–robot interaction focusing on robot navigation. To avoid injuring humans, most studies rely solely on a control-strategy approach in which the robot completely detours or stops when it risks colliding with humans [15].

However, in many real-world situations, a path to a goal without any physical contact with humans and/or obstacles is sometimes difficult to find, particularly in human-crowded spaces. In this case, the robot either makes no forward progress or takes extreme evasive action to avoid collisions [16]. These situations happen when every path is expected to be obstructed by humans due to high human density and/or massive uncertainty in human-trajectory estimation or observed information. This is called the freezing problem [16]. This problem cannot fundamentally be solved by increasing the accuracy of human-motion models. However, the robot could find a way if it were allowed to manipulate blocking objects. Thus, many studies have focused on robot navigation among movable obstacles to solve the above freezing problem [17]. The robot pushes objects such as a table, chair, or sofa and clears a reasonable path, considering planning time, length of the transit and/or transfer paths, number of manipulated objects, and total number of displacements of all objects [18,19,20]. This can be regarded as a movement strategy with object manipulation, but those studies focus only on furniture, not on humans as dynamic and psychological agents.

To solve the freezing problem, the robot must consider not only objects but also humans as a kind of obstacle. We know, however, that humans are not objects, so we need to fully consider both a person's physical safety and his/her psychological comfort. The research field that deals with robot navigation among humans is called human-aware navigation [21]. In human-aware navigation, most studies have attempted to minimize annoyance and stress, which makes the interaction more comfortable for humans. Others strive to make robots behave more naturally within their abilities or in accordance with cultural norms. Terms like 'comfort' and 'natural' are used loosely in the literature, so to clarify word meanings, we use the following definitions [22].

  • Comfort The absence of annoyance and stress for humans in interaction with robots.

  • Naturalness The similarity between robots and humans in low-level behavior patterns.

  • Sociality The adherence to explicit high-level cultural conventions.

Another aspect of human-aware navigation is cooperation with humans, such as joint collision avoidance. The robot and humans mutually adjust their trajectories to make room for navigation [23]. Consequently, we have found that two technical advancements are required. One is to treat humans as movable obstacles with intentions and to safely manipulate them. The other is to consider human psychology. To cooperate safely with humans, the robots must be able to understand humans' behaviors and the surrounding environment. As stated above, the conventional approach of finding a collision-free path will not enable the robot to coordinate with humans even when it tries to pass through a narrow corridor, and this will cause the freezing problem [24,25,26,27]. To solve this problem, the robot needs the ability to cooperate with humans. Considering that humans may not always notice the robot, it needs to be able to convey its motion intention via methods such as pointing to a path, making sounds, and interacting physically [28].

We have thus focused on robot navigation among humans as movable and/or reactive obstacles with intentions. We investigated human behavior toward robot-initiated contact by a robot's arm [29] and the social acceptance of such contact [30]. In one study [29], we explored an unconventional idea wherein a robot tries to navigate more efficiently by having an obstructing human move away by means of contact. First, basic human-reaction experiments were conducted, and the robot could successfully induce a human to move in a desired direction. We then proposed a motion-planning method considering inducement by contact and verified it through real-world experiments. The results showed that the proposed method could be used for safer and more efficient navigation in a relatively static crowded environment. In another study [30], we investigated humans' subjective responses towards robot-initiated touch during navigation. We conducted a two-condition (warning vs. no warning) between-subjects experiment with 44 people in which a mobile robot physically contacted an unaware and obstructing human to clear a way towards its goal. The results showed that a verbal warning prior to contact yielded more favorable responses. We also found that the humans did not find the contact uncomfortable and were not opposed to robot-initiated contact if deemed necessary.

We refer to a robot action that encourages a human to change his/her cognitive (e.g., awareness of the robot), physical (e.g., standing position), and psychological (e.g., comfort) states as 'inducement.' Inducement has various modalities, such as body movement (appealing to the visual sense), speech (appealing to the auditory sense), and touch/contact (appealing to the haptic/kinesthetic sense), as listed in Table 1. Inducement also has different strengths, such as weak notification of robot intention by body movement and strong notification of robot intention by physical interaction. These inducements should be selected depending on the situation of the environment, human, and robot. Moreover, the robot must not only induce humans but also perceive inducement from humans. Thus, the next step is to develop a method that dynamically selects an inducement suited to the situation, considering its modality and strength. Some researchers have recently investigated machine-learning-based navigation for rough-terrain mobility [31], deep reinforcement learning for map-free navigation [32], and distributed multi-agent navigation [33]. However, current robots cannot fully understand contextual and semantic information. We thus focus on constructing a fundamental framework of interactive navigation by deriving key factors for the situation-adaptive multimodal inducement selector. In this study, we target nine situations in which the robot passes through a corridor where two humans stand, as a fundamental scenario in which both the robot and humans should navigate interactively, though we must ultimately evaluate the proposed system in more diverse situations. Through the experiments, we investigate whether the proposed system solves the freezing problem, provides a safe and efficient trajectory, and improves humans' psychological reactions to robots. The evidence is limited to the robot planner and hardware design we used as well as to certain scenes, contexts, and participants, but we obtained favorable outcomes for the purpose of this study.

Table 1 Fundamental inducement methods that vary with modality, influence, and strength

The rest of the paper is structured as follows: Section 2 details the related work in robot navigation. Section 3 describes the parameters for interactive navigation. Section 4 develops scene categorization as pre-processing, and Sect. 5 embodies the situation-adaptive inducement selector. Section 6 then explains our experimental conditions, and Sect. 7 presents the results. Section 8 discusses the proposed inducement framework, and Sect. 9 summarizes our findings and discusses our future work.

2 Related and Required Work

Here, we clarify the contribution of our approach. Even though there have been considerable studies on human-aware navigation, to the best of our knowledge, none has targeted interactive navigation with situation-adaptive multimodal inducement. Further, no studies have considered using touch/contact as a means of inducing an obstructing human to move.

2.1 Robot-Initiated Touch/Contact in Robotics Field

Many applications of robotic systems require robots to physically contact humans, especially in healthcare, where robots are being used for skin care [34], surgery [35], and patient care [36]. Employing robots to take care of the elderly is one of the major goals in robotics [37]. Studies within this field have looked into various psychological implications of robot-initiated touch. In [38], the effects of robot touch were investigated in a nurse-patient interaction scene, and the effects of the 'warning type' given prior to contact were assessed. These studies, however, focus on aware participants who deliberately seek particular services; they involve situations where robot touch itself is the main means to achieve the desired objective. Emotional responses to robot touch have been explored for social robots expected to perform as social agents. These studies have investigated the effect of a robot's touch on people's motivation [39] and trust during human–robot interaction [40]. They involve participants who are aware of the robot's intention to touch. Another area where robot contact is studied is contact safety. Naturally, these studies have little to do with humans' subjective responses to contact but rather focus on the dangers and mitigating measures of human–robot contact or collision [41].

Even though many studies have investigated different aspects (psychological effects and safety) of robot touch, these studies belong to completely different categories of either ‘affective contact’ or ‘contact safety.’ As stated above, our situation involves an unaware participant who is engrossed in his/her own activity and is obstructing a robot that is trying to move towards its destination. There are no visual cues as to when and where the physical contact will occur. The lack of study into this topic has inspired us to do our own investigation into the cognitive, physical, and psychological response towards robot-initiated touch during navigation.

2.2 Inducement for Conveying Intent in Robot Navigation

Communicating and understanding intention relies on the combination of various components within kinesics, such as body posture and facial expressions [42]. There are studies on conveying directional intent by using whole-body motions [43] as well as gaze [44]. Explicit audiovisual communication methods to aid robot navigation among humans have been proposed because audiovisual means can elicit attention from nearby agents even when they are focusing on something else. Many studies have analyzed the use and effects of explicit intent communication in human–robot interactions. These studies involve directly interacting agents whose interaction forms the core of the task.

In studies on the effects of intent communication in robot navigation, various auxiliary means of preliminary announcement and indication to convey directional intention have been proposed, including lamps, light rays, and projection [45, 46]. Szafir et al. [47] implemented light-emitting diode (LED) based indicators in a quadcopter to test four different designs for intent communication. May et al. [28] utilized 'implicit joint attention using gaze' and 'turn indicators' by adopting the semantics of a car's turning indicators. Watanabe et al. [48] implemented a light-projection system in an autonomous wheelchair that shows the future trajectory. Chadalavada et al. [49] proposed using an LED projector to communicate the internal state of automatic guided vehicles. We have also presented the results of shoulder-based light and display signals [50, 51].

Even though both verbal and non-verbal intent communication methods have been proposed in various robotic fields, there is no inducement selection method that can dynamically select inducements with a modality and strength suitable for different situations. The lack of such a method could lead to freezing problems, unnaturalness, inefficient robot movement, and human discomfort.

2.3 Required Work: Situation-Adaptive Inducement

Most existing research focuses on collision-avoidance-based passive navigation. Some researchers have studied intention-conveyance methods such as light signals, display indication, and voice communication. However, these methods have not been actively and explicitly used for robot navigation. Moreover, conventional mobile robots have at most one modality/method to convey their intention. In contrast, our mobile robot using interactive navigation can select suitable inducements with different modalities and strengths so that it cooperatively navigates in human-coexisting environments comfortably, naturally, and socially. The substantive role of inducement is to encourage a human to change his/her cognitive, physical, and psychological states, so the robot should utilize an inducement method with a strength and modality suitable for the situation. Furthermore, the above-mentioned conventional studies limit their analyses to a simple passing-by scenario. However, as represented in Fig. 1, typical daily environments involve people interacting in a myriad of passing/crossing scenarios. Hence, intent-communication mechanisms must be tested across different settings to analyze their true effectiveness. In summary, we try to show that interactive navigation with multimodal inducement can solve the freezing problem, provide a safe and efficient trajectory, and improve humans' psychological reactions to robots through pass-by scenarios in a corridor.

3 Parameters of Interactive Navigation

We classify the parameters of interactive navigation, such as inputs, outputs, and processes, and consider the various inducement methods that form the core of our approach.

3.1 Inputs: Human and Environmental Information

The everyday environment for interactive navigation has many parameters. Here, to formulate interactive navigation, we derive six basic items to represent an arbitrary work space, as listed in Table 2. These parameters, even though not perfectly exhaustive and comprehensive, help to describe any given environment.

Table 2 Fundamental items for defining scenarios in human-aware robot navigation that include parameters, content, and explanation
  (a) Space attribute This is roughly classified into a free space, e.g., a city square, or a constrained space, e.g., an office corridor. This affects basic parameters of global path planning, e.g., selection of a path with lower human density at higher velocity.

  (b) Available width This is the maximum width of space in a structure. This affects the available navigation strategies: a narrower space provides fewer choices of path planning and inducement.

  (c) Number of humans This is classified into 0, 1, or 2 or more. This affects the complexity of the navigation strategy: a higher number of people requires higher-order interactive navigation.

  (d) Human attribute Age, gender, occupation, etc. are important for social navigation, but they are not easy to obtain from current sensors. This study focuses on making a basic framework able to adopt the most basic parameters obtained by distance sensors. We adopt a human's position, posture, velocity, and awareness of the robot. This affects local path planning.

  (e) Reactive change This indicates whether a human's state was changed by robot inducement. If the robot perceives a reactive change, it needs to quickly execute reactive re-planning.

  (f) Relationship There are relationships between humans, e.g., friends. This affects path selection considering social adequateness, which means that a robot should not pass between humans if they are conversing.

3.2 Outputs: Trajectory and Inducement

Inducement is intended to encourage the human to sense and act so that the robot can navigate comfortably, naturally, and socially by conveying its intention suitably for the situation. By referring to observations of human behaviors and conventional navigation studies [29, 30], we derived six inducement methods considering typical behaviors of a mobile robot with an arm. Light and display signals [50, 51] and the use of gaze and facial expressions [42, 44] can be regarded as visual inducements. In this study, we adopt humanlike inducements in consideration of sociality. As stated above, interactive navigation must be bidirectional: if a human uses gaze inducement, the robot must infer the intention from observation. With current technologies, however, detecting gaze using sensors installed on mobile robots and estimating the intention (e.g., moving direction) are not easy, so we omitted those inducements in this study. Table 1 lists the purpose of each inducement, its modality, its influence on the human state, and its strength.

  (1) Path indication via movement This expresses a space through which to pass by using the robot's movement direction. This appeals to humans' visual sense and influences their cognitive state with low strength.

  (2) Arm contraction This conveys the intention that the robot wants to safely pass near a human. This appeals to humans' visual sense and influences their cognitive and psychological states with low strength.

  (3) Deceleration This conveys the intention that the robot wants to safely pass near a human. This appeals to humans' visual sense and influences their cognitive and psychological states with low strength.

  (4) Verbal interaction (speech) This notifies a human of the robot and/or conveys thanks if the human makes way for the robot. This appeals to humans' auditory sense and influences their cognitive and psychological states with medium strength.

  (5) Passive touch This mitigates the impact of an unexpected collision with a human by absorbing the force with the robot arm. It can also let humans know of the robot's existence so that they avoid it voluntarily. This appeals to humans' haptic sense and influences their cognitive and physical states with medium strength.

  (6) Touch to notify This induces a human to voluntarily move to make a path for the robot by touching the human's shoulder in the direction that widens the space. This appeals to humans' haptic sense and influences their cognitive and physical states with high strength.

3.3 Process: Situation-Adaptive Inducement Selection

According to Sects. 3.1 and 3.2, the trajectory and inducement method (output) must be selected on the basis of human and environment data (input), considering comfort, naturalness, and sociality. The whole interactive navigation framework is shown in Fig. 2. It adopts distance and space width as key parameters for deciding the inducement and trajectory (the reason is explained in Sect. 4). If the robot detects an obstacle, it checks whether the obstacle is a human. If it is a human, the robot computes whether it can pass through (i.e., whether there is enough space) or not. On the basis of the result and the human information listed in Table 2, the robot re-plans its trajectory and inducement by referring to the situation-adaptive inducement model. If the robot succeeds in passing through by executing an inducement, it heads toward its goal. If the human continues to block the robot's path in spite of the inducement, the robot finds another route. Each function is explained in the following sections.

Fig. 2

Interactive human-aware navigation framework with situation-adaptive multimodal inducement. Input information is human and environment information and output is trajectory and inducement method. The system includes human and environment measurement, path selection, and trajectory and inducement selection modules. On the basis of the system output, robots dynamically interact with humans
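The decision flow in Fig. 2 can be summarized as a short sensing-and-replanning loop. The following is a minimal sketch of that loop, assuming the sensing and actuation modules of Sects. 4 and 5 are available as plain functions; all names are illustrative placeholders, not the authors' implementation.

```python
from typing import Callable, Optional

def navigation_step(
    detect_obstacle: Callable[[], Optional[dict]],  # LRF-based sensing (Sect. 4)
    is_human: Callable[[dict], bool],
    replan: Callable[[dict], dict],    # trajectory + inducement selection (Sect. 5)
    execute: Callable[[dict], bool],   # returns True if the robot passed through
    find_other_route: Callable[[], None],
) -> None:
    """One cycle of the interactive navigation framework (Fig. 2)."""
    obstacle = detect_obstacle()
    if obstacle is None or not is_human(obstacle):
        return                # ordinary path planning handles non-human obstacles
    plan = replan(obstacle)   # uses the Table 2 inputs and the inducement model
    if not execute(plan):     # human still blocks the path despite the inducement
        find_other_route()
```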

4 Pre-processing: Scene Classification

There are many ways to measure human and environmental information, such as a stereo camera, a depth camera, a laser range finder (LRF), and the global positioning system (GPS). Moreover, there are many ways to understand scenes, such as simultaneous localization and mapping (SLAM) using Petri nets [52] and SLAM considering relocation performance [53]. As stated in Sect. 1, this study aims to propose a preliminary framework of interactive navigation and focuses only on distance and width information as key parameters. The proposed system thus utilizes point-cloud data from an LRF (UTM-30LX) to measure and identify a scenario as listed in Table 2.

4.1 Human and Environmental Information

It is essential to accurately detect human attributes for any robot navigation task. There are various methods for human detection that depend upon the robot's sensors. One way is to detect leg-like shapes from the laser data [54, 55]. Fusing the data from both a camera and a laser generally yields more precise results [56, 57]. Figure 3a shows the system flow of human and space detection. The system is a simple implementation based on the following assumption: the orientation of the body is the same as the gaze direction. First, the LRF scans the environment, and the system filters out clusters that suggest human-like objects. We treat 5–15 successive points in the cloud as a human in our experimental setting, as shown in Fig. 3b. In addition, we fit ellipses to the point clusters and derive their major and minor axes. The human posture is estimated from the angle of those axes. The detection system has no function to distinguish the front and back of a human because we provide such information beforehand in the experiments. Finally, on the basis of the above-mentioned assumption and the effective field of vision of humans (± 100°), the system determines whether the human is aware of the robot. We can also identify the existence of human movement. If there are two humans facing each other, determined from their relative positions and postures, the system assumes that they have a social relationship (a pair) for simplicity, though such estimation is quite difficult in real cases. For the experiments, we used a pre-built map of the environment that includes only walls. On the basis of the above data, the system calculates the distances among humans and walls, as shown in Fig. 3c.

Fig. 3

Environmental sensing and classification based on point clouds obtained from a laser range finder. a System flow, b human detection, and c space recognition
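As a concrete illustration of the detection step in Fig. 3, the sketch below clusters LRF points, fits an ellipse via principal component analysis, and applies the ± 100° field-of-vision test. The cluster-size bounds follow the text; the PCA-based ellipse fit and the helper names are our assumptions, not the authors' code.

```python
import numpy as np

FOV_DEG = 100.0  # effective half field of vision for the awareness test (Sect. 4.1)

def detect_human(cluster_xy):
    """Estimate position and body orientation from an LRF point cluster.

    cluster_xy: (N, 2) array of successive scan points; 5-15 points are
    treated as human-like in the authors' setting. Returns (center, gaze
    angle) or None. Front/back disambiguation is assumed to be given
    externally, as in the paper's experiments.
    """
    pts = np.asarray(cluster_xy, dtype=float)
    if not 5 <= len(pts) <= 15:
        return None
    center = pts.mean(axis=0)
    # Principal axes of the cluster approximate the fitted ellipse.
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - center).T))
    shoulder = eigvecs[:, np.argmax(eigvals)]                 # major axis = shoulder line
    gaze = np.arctan2(shoulder[1], shoulder[0]) + np.pi / 2   # body faces normal to it
    return center, gaze

def is_aware_of_robot(gaze, human_pos, robot_pos):
    """The human is assumed aware if the robot lies within +/-100 deg of gaze."""
    d = np.asarray(robot_pos, float) - np.asarray(human_pos, float)
    bearing = np.arctan2(d[1], d[0])
    diff = (bearing - gaze + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return abs(np.degrees(diff)) <= FOV_DEG
```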

4.2 Space Categorization

The interactive navigation system then categorizes the distances between humans as well as between a human and a wall into four types (wide, sufficient, narrow, and too narrow) as passing-space candidates, as shown in Fig. 4, in accordance with the necessity of selecting different inducements. Basically, the space can be divided into can-pass or cannot-pass, since passing through is the purpose. Can-pass can be divided into attention-unnecessary or attention-necessary toward surrounding humans for safety. Cannot-pass can be divided into can-pass by twisting the body or completely cannot-pass. These four cases correspond to the above four types.

Fig. 4

Definition of the four types of space category S, according to the relationships among the robot, humans, and environment

Here, we analyze the space width \( w \) for passing between two objects. According to human statistics, the average maximum width of a human \( w_{Hmax} \) is about 500 mm, which is the width between both shoulders. Comfortable passing between humans requires a marginal distance, called personal space, whose coefficient \( \alpha_{P} \) is 1.3 [58]. When passing through a narrow space, humans typically twist their body in the lateral direction so as to reduce the body width. The average minimum width of a human \( w_{Hmin} \) is about 350 mm, which is the thickness between the front and back. For safe passing, in this study, we adopt a safety margin to surroundings \( m_{S} \) of 50 mm. We apply this to our robot by using its maximum width \( w_{Rmax} \) (950 mm) and minimum width \( w_{Rmin} \) (850 mm). Thus, we can define the wide-sufficient threshold as \( T_{w - s} \) = \( w_{Rmax} \times \alpha_{P} \) and the narrow-too narrow threshold as \( T_{n - t} \) = \( w_{Rmin} + m_{S} \). On the basis of the width \( w \) and thresholds \( T \), a space \( S \) can be divided into the following four categories (Algorithm 1); a minimal code sketch is given after the list.

Algorithm 1 Space categorization based on width \( w \)
  • Wide \( S_{w} \) The space through which the robot passes safely without attention to surroundings.

  • Sufficient \( S_{s} \) The space through which the robot passes safely by moving at the center.

  • Narrow \( S_{n} \) The space through which the robot passes only by twisting its body.

  • Too narrow \( S_{t} \) The space through which the robot cannot pass at all.
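The categorization follows directly from the thresholds above. In this sketch, the wide-sufficient and narrow-too narrow boundaries are the values from the text; the sufficient-narrow boundary is not stated explicitly, so \( w_{Rmax} + m_{S} \) is our assumption.

```python
W_R_MAX, W_R_MIN = 950, 850   # robot maximum / minimum width [mm]
ALPHA_P = 1.3                 # personal-space coefficient [58]
M_S = 50                      # safety margin m_S [mm]

T_WIDE_SUFF = W_R_MAX * ALPHA_P   # T_{w-s} = 1235 mm
T_SUFF_NARROW = W_R_MAX + M_S     # assumed boundary (1000 mm), not given in the text
T_NARROW_TOO = W_R_MIN + M_S      # T_{n-t} = 900 mm

def space_category(w: float) -> str:
    """Map a passing width w [mm] to one of the four space categories S."""
    if w >= T_WIDE_SUFF:
        return "wide"         # S_w: pass without attention to surroundings
    if w >= T_SUFF_NARROW:
        return "sufficient"   # S_s: pass safely by moving at the center
    if w >= T_NARROW_TOO:
        return "narrow"       # S_n: pass only by twisting the body
    return "too_narrow"       # S_t: cannot pass
```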

4.3 Region Categorization

The interactive navigation system then categorizes the distance between the robot and the human nearer to it when passing into six types (no relation, approach, get-close, beside, get-away, and separate), as shown in Fig. 5, in accordance with the necessity of selecting different inducements. Basically, the regions should be categorized on the basis of the effective range of each inducement. Thus, the region can be divided into no relation; approach/separate, where far inducement (i.e., path indication and deceleration) is available; get-close/get-away, where near inducement (i.e., arm contraction, deceleration, speech, and active touch) is available; and beside, where proximal inducement (i.e., passive contact) is available.

Fig. 5

Definition of the six types of region category R, according to the relationships among the robot, humans, and environment

Here, we analyze the distance between the robot and the human nearer to it when passing: \( l \) before passing and \( l^{\prime } \) after passing. The robot should finish changing its path within a certain distance \( l_{c} \), which is set to \( w_{Hmax} \) (500 mm) considering the psychological personal space and the uncertainty of human movement [58]; this region is called the get-close region \( R_{c} \). In relation to this, we set a path-change-start distance \( l_{a} \), which is set to 1000 mm from \( R_{c} \), by considering the effective range of path indication. We can thus define \( l_{c} < l < l_{c} + l_{a} \) as the approach region \( R_{a} \). The region where the robot passes beside a human is defined as the beside region \( R_{b} \). Behaviors in \( R_{b} \) are also important for human safety and comfort. The regions after passing are defined as the get-away region \( R_{w} \) and separate region \( R_{s} \), symmetrical to \( R_{c} \) and \( R_{a} \) about \( R_{b} \); \( R_{s} \) is the inverse process of \( R_{a} \). On the basis of the distance and thresholds, the region \( R \) is divided into the following six categories (Algorithm 2); a minimal code sketch is given after the list.

Algorithm 2 Region categorization based on distances \( l \) and \( l^{\prime } \)
  • No relation \( R_{n} \) The region where the robot is not affected by humans.

  • Approach \( R_{a} \) The region where the robot changes its path considering human comfort and the effect of inducement.

  • Get-close \( R_{c} \) The region where the robot passes near the human along a straight path at a constant speed before coming beside the human, so as not to discomfort him/her.

  • Beside \( R_{b} \) The region where the robot passes beside the human along a straight path at a constant speed.

  • Get-away \( R_{w} \) The region where the robot passes near the human along a straight path at a constant speed after being beside the human, the same as \( R_{c} \).

  • Separate \( R_{s} \) The region where the robot returns to its original straight path from the coordination path.
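Algorithm 2 reduces to a pair of threshold tests on the distance to the human. The sketch below is a minimal version, assuming the before/beside/after phase is determined externally from the robot pose relative to the human; the thresholds are the values given above.

```python
L_C = 500    # get-close distance l_c = w_Hmax [mm]
L_A = 1000   # path-change-start distance l_a [mm]

def region_category(distance: float, phase: str) -> str:
    """Map distance [mm] and passing phase to a region category R."""
    if phase == "beside":
        return "beside"            # R_b
    if phase == "before":          # distance is l
        if distance <= L_C:
            return "get_close"     # R_c
        if distance <= L_C + L_A:
            return "approach"      # R_a
        return "no_relation"       # R_n
    # phase == "after": distance is l', symmetric to the pre-passing regions
    if distance <= L_C:
        return "get_away"          # R_w
    if distance <= L_C + L_A:
        return "separate"          # R_s
    return "no_relation"           # R_n
```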

5 Processing: Inducement and Trajectory

In this section, we embody the trajectory and inducement selection method, which is based on both the space category \( S \) and region category \( R \).

5.1 Trajectory

The trajectory can be determined from \( S \) and \( R \). The desired path from the current position to the goal is first calculated on the basis of a simple waypoint navigation method [59]. In reality, humans sometimes hesitate to decide their own paths, move slightly and dynamically even when standing, and negotiate navigation space with each other [60]. To make navigation more robust and natural, the path planner must consider such human dynamics. As stated above, this study focuses on proposing an interactive navigation framework with situation-adaptive multimodal inducement, so we adopted simple waypoint navigation. For path generation, we can define the initial point \( P_{i} \) and goal \( P_{g} \), and the robot heads from \( P_{i} \) to \( P_{g} \). When the robot needs to change its path, another three waypoints (\( P_{c} \), \( P_{b} \), \( P_{f} \)) are geometrically required. As stated later, we specify that the robot passes by the human parallel to the wall in the neighborhood region for the human's comfort, so another two points (\( P_{s} \) and \( P_{e} \)) are required. We thus define seven waypoints, as shown in Fig. 6. The waypoints are discretely set on the basis of \( S \) and \( R \). As stated in Sect. 4, the robot detects the existence of a human, human attributes, and wall positions by using point clouds from the LRF and then calculates \( S \). The robot orientation is kept forward during movement, so each waypoint includes only the \( x \) and \( y \) coordinates. First, beside-s \( P_{bs} \) and beside-e \( P_{be} \) are defined from the human information, as stated in Sect. 4.1. The passing width \( w \) and depth \( d \) are also calculated from the human and environmental positions. \( P_{bs} \) is the point where the robot enters \( R_{b} \), and \( P_{be} \) is the point where the robot exits \( R_{b} \); they are set at the boundaries of \( R_{c} \)–\( R_{b} \) and \( R_{b} \)–\( R_{w} \), respectively. The seven waypoints are defined as follows (Algorithm 3).

Algorithm 3 Waypoint definition
Fig. 6

Fundamental path planning based on the waypoint navigation method, which includes seven waypoints from start to goal

  • Initial \( P_{i} \) and goal \( P_{g} \) \( P_{i} \) is the current robot position, which can be measured by processing point clouds obtained from the LRF. \( P_{g} \) is the destination point that the robot must reach, which is pre-determined on the basis of local path planning. We assume here that the \( y \)-coordinates of \( P_{i} \) and \( P_{g} \) are the same.

  • Change \( P_{c} \) and finish \( P_{f} \) \( P_{c} \) is the point where the robot starts to change its path. \( P_{f} \) is the point where the robot starts to run along its original path again. \( P_{c} \) and \( P_{f} \) are set at the boundaries of \( R_{n} \)–\( R_{a} \) and \( R_{s} \)–\( R_{n} \), respectively.

  • Start \( P_{s} \) and end \( P_{e} \) \( P_{s} \) is the point where the robot changes direction so as to pass by the human parallel to the wall. \( P_{e} \) is the point where the robot starts to change direction so as to return to its original path. \( P_{s} \) and \( P_{e} \) are set at the boundaries of \( R_{a} \)–\( R_{c} \) and \( R_{w} \)–\( R_{s} \), respectively. The system sets the center of the selected space as the passing point. These points are set for comfortable and moderate passing in the neighboring region.

  • Beside \( P_{b} \) \( P_{b} \) is the point where the robot passes at the center of the beside region \( R_{b} \), which is necessary for stable calculation.

The system defines the above waypoints and generates a trajectory that smoothly connects them on the basis of static objects such as walls and dynamic objects such as humans by continuously scanning the environment. If the system detects a human obstructing the robot or a change in his/her position, the robot immediately re-plans its trajectory and inducement method in real time.
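Given the region thresholds of Sect. 4.3, the seven waypoints reduce to simple offsets along the corridor. The following is a hedged geometric sketch, assuming the robot travels along +x, the human stands at x_h with a beside region of depth d centered on him/her, and y_pass is the center of the selected passing space; the exact boundary geometry is our reading of Fig. 6, not the authors' code.

```python
L_C, L_A = 500, 1000  # region thresholds from Sect. 4.3 [mm]

def waypoints(x_start, x_goal, y_line, x_h, d, y_pass):
    """Seven waypoints of Fig. 6 as (label, x, y) tuples along a corridor."""
    x_bs = x_h - d / 2.0              # boundary R_c-R_b (enter beside region)
    x_be = x_h + d / 2.0              # boundary R_b-R_w (exit beside region)
    return [
        ("P_i", x_start, y_line),             # initial position
        ("P_c", x_bs - L_C - L_A, y_line),    # start path change (R_n-R_a)
        ("P_s", x_bs - L_C, y_pass),          # run parallel to the wall (R_a-R_c)
        ("P_b", x_h, y_pass),                 # center of R_b
        ("P_e", x_be + L_C, y_pass),          # start returning (R_w-R_s)
        ("P_f", x_be + L_C + L_A, y_line),    # back on the original path (R_s-R_n)
        ("P_g", x_goal, y_line),              # goal
    ]
```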

5.2 Inducement

On the basis of the identified situation, the interactive navigation system dynamically determines a suitable inducement method from among visual, auditory, and haptic inducements.

5.2.1 Visual Inducement

Visual inducement includes path indication via body movement, arm contraction, and deceleration, as shown in Fig. 7a. Path indication is the most basic non-verbal communication and the weakest inducement; it conveys the space through which the robot wants to pass. Arm contraction and deceleration are quite important for moving safely by clarifying the robot's intention and avoiding negative feelings through psychological inducement. Basically, visual inducement is natural in ordinary situations.

Fig. 7

Various inducement methods: a visual: path indication, b auditory: verbal interaction, and c haptic: passive and active contact

5.2.2 Auditory Inducement

Auditory inducement includes voice interaction, as shown in Fig. 7b. When the robot passes near a human, it says 'Excuse me' as psychological and cognitive inducement to make the human notice it. When the space is too narrow, the robot uses a stronger auditory inducement, such as 'Please let me pass,' to convey its intention and change the human's physical state. Basically, when a space for passing through cannot be created by visual inducement and/or when psychological and cognitive factors must be strengthened, using auditory inducement is natural. When the human makes way for the robot, the robot says "Thank you."

5.2.3 Haptic Inducement

Haptic inducement includes active touch for notification and passive touch for safety. In our previous work [29], humans moved approximately along the direction of the contact force, and the back and shoulder were suitable contact points. This inducement is specifically intended for stationary humans who are not aware of the presence or intention of the robot, as shown in Fig. 7c. To initiate contact inducement, the interactive navigation system needs to ensure that the following two criteria are fulfilled.

  • Contact-point reachability The robot needs to set itself in a suitable position so that it can make proper and safe contact with its arm. We consider making contact on a human's back or upper arm. The reachability can be calculated on the basis of the robot configuration.

  • Safety The contact-force limit was set to 50 N on the basis of our previous work [29]. We also need to consider the available space for the human to move along the direction of the contact force; pushing a human who is already near a wall risks making him/her collide with it.

When the space is \( S_{t} \), the system selects a contact method on the basis of the human's position and posture. If the robot finds an appropriate position, the system checks for possible collisions of the human with other humans and obstacles due to his/her reactive movement.
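The two criteria can be combined into a single pre-contact check. The sketch below is illustrative: the 50-N limit is the value reported in [29], while the clearance requirement of roughly one body width and the helper names are our assumptions.

```python
MAX_CONTACT_FORCE_N = 50.0   # contact-force limit from [29]

def contact_is_safe(planned_force_n, human_pos, push_dir, free_distance,
                    required_clearance=500.0):
    """Check the force limit and the room for the human to move.

    free_distance(pos, direction) -> free space [mm] toward the nearest
    wall/obstacle, from the pre-built map; required_clearance (~ one body
    width w_Hmax) is an assumed value.
    """
    if planned_force_n > MAX_CONTACT_FORCE_N:
        return False  # exceeds the safe contact force
    # Pushing a human who is already near a wall risks a wall collision.
    return free_distance(human_pos, push_dir) >= required_clearance
```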

5.3 Total System

In summary, the interactive navigation system finds and selects the maximum-width space for the robot to pass through out of all spaces between existing entities. Table 3 lists the relationships among \( S \), \( R \), speed, and inducement. Algorithm 4 shows how the interactive navigation system determines the trajectory (waypoint \( p \) and speed \( v \)) and inducement. Basically, the speed decreases as the distance and space decrease, and the inducement is selected on the basis of the effective range of each inducement (explained in Sect. 4.3) and used in order of low, medium, and high strength; a compact code sketch of this mapping is given after the list below.

Algorithm 4 Trajectory and inducement selection based on \( S \) and \( R \)
Table 3 Inducement and trajectory selection based on space and region category
  • Wide The robot passes through at its normal speed (1.0 m/s) using the trajectory calculated by the path-planning function. Only path indication and a small deceleration (to 0.8 m/s) are executed.

  • Sufficient The robot passes through at a decreased speed (0.5 m/s) while contracting its arm. If the robot passes in front of a human, it says "Excuse me" as a courtesy.

  • Narrow The robot passes at a significantly slower speed (0.3 m/s) while saying, "I will pass." Here, the robot maintains an arm posture for passive contact to prepare for an unexpected collision.

  • Too narrow The robot decreases its speed (0.5 m/s) and says, "Excuse me, please let me pass." If the human does not respond, the robot stops behind the human and touches him/her for notification. After confirming that the human has made way, the robot says, "Thank you", and passes through.
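As referenced above, the Table 3 mapping from space category to speed and inducements can be expressed as a small lookup table. This is a condensed sketch using the speeds quoted above; the inducement labels are shorthand for the methods of Sect. 3.2.

```python
# space category -> (passing speed [m/s], ordered inducements)
POLICY = {
    "wide":       (1.0, ["path_indication"]),   # decelerate to 0.8 m/s near the human
    "sufficient": (0.5, ["arm_contraction", "say:Excuse me"]),
    "narrow":     (0.3, ["say:I will pass", "passive_touch_posture"]),
    "too_narrow": (0.5, ["say:Please let me pass", "touch_to_notify",
                         "say:Thank you"]),
}

def select_behavior(space_cat: str):
    """Return (speed, inducement list) for a space category, as in Algorithm 4."""
    return POLICY[space_cat]
```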

6 Experimental Conditions

We implemented the proposed interactive navigation system with a situation-adaptive inducement selector on a mobile robot with a human-contactable arm. In this section, we describe the experimental conditions.

6.1 Robot Specification

For the experiments, we developed a human-collaborative omni-wheeled mobile base, as shown in Fig. 8. Even though it is a platform designed for general robot navigation studies, it is primarily intended for studying robot navigation using multimodal (visual, auditory, and haptic) inducement. It is an average human-sized (height and width) robot. The body does not have any degrees of freedom (DOFs), but the arm has two DOFs, allowing it to perform certain arm gestures and make contact when necessary. The robot can speak via a Microsoft API using a speaker. It also has an LRF, force, angle, and ultrasonic sensors, and force-sensitive resistors (FSRs).

Fig. 8

Mobile robot platform with human-contactable arm

6.2 Conditions and Situations

6.2.1 Concept

The environmental conditions are classified into nine basic scenarios in an attempt to form a possibly comprehensive set that embraces essential aspects of any given environmental scenario. The specific parameters are set as follows:

  (a) Space attribute 2.4-m-wide corridor (general width).

  (b) Available width Variable.

  (c) Number of humans Two.

  (d) Human attribute Variable.

  (e) Reactive change Variable.

  (f) Relationship Conversing.

The template of the situations is shown in Figs. 9 and 10 (upper left). We created nine situations by changing parameters (b), (d), and (e), as shown in Fig. 10 and listed in Table 4. There are two humans in the corridor: one (H1) is a subject and the other (H2) is an experimenter. The subjects were briefed that the robot coexists in the corridor. During the experiments, H2 always talks with H1 (e.g., about yesterday's dinner) as a distraction. In situations 1–4, the humans have different positions and postures (and hence different maximum spaces and positions). These situations are designed for evaluating basic comfort, naturalness, and sociality, so H1 and H2 do not respond voluntarily. In situations 5–9, the humans have different positions and postures and change them during the experiments. These situations are designed for evaluating interactive trajectory planning. H2 thus moves to give way when the robot reaches the first cross mark (Fig. 10) and vocally induces the behavior of H1, as shown in Fig. 10e–i. After the inducement by H2 finishes, H1 is asked to freely react to the behavior of the robot. In situations 1, 4, and 9, the subjects cannot see the robot. To prevent subjects from noticing the approaching robot by its moving sounds, we play a recording of city noise from a speaker. In situation 4, to prevent H1 from hearing the auditory request from the robot, louder surrounding noise is provided.

Fig. 9

Example of experimental scenario. Robot tries to pass along a corridor where two humans are standing

Fig. 10

Nine passing scenarios (situations 1–9). In all situations, two people are in a 2.4-m-wide corridor. Red and orange lines show trajectories for the proposed and conventional systems, respectively. 1 is the subject (H1) and 2 is the experimenter (H2). Details are explained in the text and Table 4. (Color figure online)

Table 4 Environmental conditions. The situation column gives the space categories for H2–H1 and H1–wall (situations 5–9 change their space categories during a trial). The trajectory column gives the space selected by the conventional and proposed systems. The evaluation column gives the three main evaluation points for each situation

6.2.2 Situations and Robot Behaviors

To evaluate the proposed interactive navigation system, we developed a fundamental path planner that makes the difference from the proposed system clear; it acts as follows: (1) always avoid and/or stop (inducement is prohibited; mainly for the freezing problem), (2) always pass through the maximum-width space (mainly for psychological response), and (3) never change the path from the initially planned path (mainly for social acceptance). The trajectories and inducements for the conventional and proposed systems are listed in Table 4.

  • Situation 1 (pass behind human) The robot goes through the widest space, H1–wall (\( S_{w} \)). The robot behaves with weak consideration. This situation evaluates inducement on the basis of naturalness and sociality.

  • Situation 2 (pass between humans) The robot passes through H1–H2 (\( S_{s} \)). The robot behaves with medium consideration. We evaluate inducement on the basis of naturalness and sociality when the robot passes between humans.

  • Situation 3 (detour considering sociality) The robot passes through H1–wall (\( S_{s} \)) although H1–H2 (\( S_{s} \)) is wider. We evaluate the effectiveness of a detour considering sociality in spite of its reducing movement efficiency.

  • Situation 4 (making a path by touch) The robot touches H1 to make H1–wall (\( S_{t} \)) wider for passing. This situation evaluates robot-initiated touch in spite of the possibility of worsening H1's psychological state.

  • Situation 5 (dynamic adaptation) The robot starts to pass through H1–wall (\( S_{n} \)). On the way, H1 gives way (\( S_{s} \)). We evaluate interactive navigation for establishing communication and efficient movement.

  • Situation 6 (dynamic adaptation) The robot starts to pass through H1–H2 (\( S_{s} \)). On the way, H1 gives way (\( S_{w} \)). This situation evaluates interactive navigation, the same as situation 5.

  • Situation 7 (dynamic adaptation) The robot starts to pass through H1–H2 (\( S_{n} \)). On the way, H1 gives way by widening H1–wall (\( S_{s} \)). This situation evaluates interactive navigation, the same as situation 5.

  • Situation 8 (dynamic adaptation) The robot starts to pass through H1–wall (\( S_{n} \)). On the way, H1 gives way by widening H1–H2 (\( S_{s} \)). This situation evaluates interactive navigation, the same as situation 5.

  • Situation 9 (acceptance of passive touch) The robot passes through the widest space, H1–wall (\( S_{w} \)). H1 moves back while the robot is passing (\( S_{n} \)) and unintentionally collides with it. The robot mitigates the collision force by damping control of the arm. This situation evaluates the acceptance of robot contact during navigation.

6.3 Evaluation Methods

Evaluation points differ among the situations, as listed in the rightmost column of Table 4. The evaluations are divided into objective and subjective (psychological reaction) ones. Psychological responses differ drastically among individuals. It would be better to adopt a wide variety of subjects, but to reduce the influence of such differences as much as possible, we chose 11 subjects, third-year male students (age: 20–21 years old) majoring in mechanical engineering at Waseda University, who were knowledgeable about robotics. To reduce the order effect, the experimental order was randomized for each subject.

6.3.1 Objective Evaluation

For all situations, we analyze the trajectories of the robot and H1 for the conventional and proposed systems. These trajectories were recorded using a three-dimensional motion-capture system (Raptor-E Digital RealTime System) [61]. In particular, for situations 4 and 9, in which the freezing problem occurs, we use the success rate \( SR \), defined as the ratio of the number of successful passes to the total number of trials. For situations 4–9 with the proposed system, the robot can pass along an efficient path without stopping since the human makes way for the robot due to the robot's inducement. Thus, we defined the movement efficiency score \( L \) on the basis of the hesitation signal [62], which is given by

$$ L = 1 - \int_{T_{S} = 0}^{T_{G}} \frac{\left| \vec{v}_{t} - \vec{v}_{t - 1} \right|}{\left| \vec{v}_{t - 1} \right|} \, dt, $$
(1)

where \( \vec{v}_{t} \) and \( \vec{v}_{t - 1} \) are the robot movement vectors at times t and t − 1, respectively, and \( T_{S} \) and \( T_{G} \) denote the start and goal times, respectively. A higher value means higher efficiency.
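For a sampled trajectory, Eq. (1) can be evaluated as a discrete sum. The following is a minimal sketch of that computation under our own discretization; the guard against zero-velocity samples is our addition for numerical robustness, not part of the paper's definition.

```python
import numpy as np

def movement_efficiency(velocities, dt):
    """Discrete version of Eq. (1): L = 1 - sum_t |v_t - v_{t-1}| / |v_{t-1}| * dt.

    velocities: (T, 2) array of robot velocity vectors sampled every dt
    seconds between T_S and T_G. Higher L means fewer speed and direction
    changes, i.e., less hesitation.
    """
    v = np.asarray(velocities, dtype=float)
    prev, curr = v[:-1], v[1:]
    norm_prev = np.linalg.norm(prev, axis=1)
    change = np.linalg.norm(curr - prev, axis=1)
    # Avoid division by zero when the robot is momentarily stopped (our guard).
    ratio = np.divide(change, norm_prev,
                      out=np.zeros_like(change), where=norm_prev > 1e-9)
    return 1.0 - np.sum(ratio) * dt
```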

6.3.2 Subjective Evaluation: Human Psychology

The inducement should affect not only physical and cognitive states but also psychological states. For example, in situation 3, the efficiency of the robot should decrease because it detours for sociality. For all situations, we thus analyze human psychology. We recorded the subjects' feelings after the robot passed on the basis of their questionnaire answers. The subjects selected one feeling on a 7-point scale (− 3, − 2, − 1, 0, + 1, + 2, + 3), categorized into positive (+ 1, + 2, + 3), neutral (0), and negative (− 1, − 2, − 3). Moreover, for evaluating their impressions of the robot behavior, the subjects were asked to rate the robot as 'natural', 'humanlike', 'friendly', 'altruistic', and 'safe' on 7-point scales (− 3, − 2, − 1, 0, + 1, + 2, + 3), as done in previous studies [63,64,65].

7 Results and Analysis

In this section, we analyze the experimental results in human–robot passing scenarios by focusing on multimodal inducement from the robot. Figure 11 shows snapshots of the movement of the humans and robot for typical situations.

Fig. 11

Snapshots of movement of humans and robot (left: conventional system; right: proposed system). a Situation 3: detour considering sociality, b situation 4: making a path by touch, and c situation 8: dynamic adaptation

7.1 Objective Evaluation

7.1.1 Unfreezing by Haptic Interaction (Situations 4 and 9)

Figure 12 shows the trajectories of the robot and H1 for the conventional and proposed systems. Figure 13a shows the success rate \( SR \). In situation 4, the robot tried to pass behind H1, where the widest space was, but the space was too narrow for the robot to pass through. The robot using the conventional system stopped until H1 noticed it and made way, as Fig. 11b (left) shows. In fact, no subjects noticed the robot during the experiment, so the robot could not pass (SR = 0), as Fig. 12d (orange line) shows. In contrast, the robot using the proposed system stopped behind H1 and touched H1 with its arm, as Fig. 11b (right) shows. Due to the touch, all subjects noticed the robot and made space for it to pass through (SR = 1.0), as Fig. 12d (red line) shows. A t-test revealed a significant difference between the two systems for \( L \) (t(18) = 35.1, p < 0.001), as Fig. 13b shows. These results show that touch is effective for solving the freezing problem in situations where the robot has difficulty passing through. The psychological response improved slightly even though the robot voluntarily touched a human (explained in Sect. 7.2). This is because the subjects thought that they had obstructed the robot, so they felt apologetic toward it. Even strong and explicit inducement can be acceptable when utilized in a suitable situation.

Fig. 12

Trajectories of the subject and robot for the proposed and conventional systems during navigation in the nine situations

Fig. 13

Objective evaluation, which includes a success rate and b movement efficiency

In situation 9, with the proposed system, the human stopped moving backward due to the passive contact, and the robot conveyed its intention by auditory inducement while ensuring safety. This passive contact caused the subject to think that he had obstructed the robot by unintentionally contacting it, so the subject made way for the robot (SR = 1.0). On the other hand, with the conventional system, the human stopped going backwards due to the collision with the robot, but some subjects did not give way because the robot did not convey its intention explicitly. As a result, the robot often became stuck (SR = 0.36). A Chi-squared test showed that \( SR \) for the proposed system was significantly higher (χ2(1) = 17.4, p < 0.001) than that for the conventional system.

7.1.2 Dynamic Adaptations (Situations 5, 6, 7, and 8)

Figure 13b shows the movement efficiency score L. In situation 8, the robot detected that the subjects had changed their positions, which changed the widest space from behind H1 (\( S_{n} \)) to between the humans (\( S_{w} \)). The robot using the proposed system modified its path to proceed between the humans, as shown in Figs. 11c (right) and 12h (L = 0.63). In contrast, the robot using the conventional system could not adjust its path in accordance with the change in the subject's position, as shown in Figs. 11c (left) and 12h (L = 0.42). A t-test revealed a significant difference between the conventional and proposed systems (t(12) = 4.71, p < 0.001). For the conventional system, H1 moved back to his original position because the robot did not change its path. In this situation, the robot could select a more efficient trajectory by dynamic adaptation. Similarly, in situations 5–7, the robot using the proposed system could adjust its trajectories in accordance with the subjects' movements. However, the degree of improvement in L depends on the situation setting: in situations 5 and 6, L improved slightly, but in situation 7, L decreased. These situations are designed for evaluating human psychological responses to dynamic adaptation, so the feelings and impressions left by the systems are analyzed in Sect. 7.2.

7.1.3 Basic Behaviors (Situations 1, 2, and 3)

The snapshots for situation 3 are shown in Fig. 11a, and the trajectories are shown in Fig. 12c. The proposed system identified the social context (a pair) from the body orientations of the obstructing humans. Even though the widest path was identified as the one between the humans, the selected path was an amicable detoured one, as shown in Figs. 11a (right) and 12c. The conventional system simply produced a trajectory that travels between the humans, as shown in Figs. 11a (left) and 12c. In situations 1 and 2, the robot could select the maximum-width space and determine its speed depending on the width. In sum, the proposed system could determine the trajectory and inducement in accordance with the situations in the corridor. Since these situations are designed for evaluating human psychological responses, the feelings and impressions left by the system are analyzed in Sect. 7.2.

7.2 Subjective Evaluation

7.2.1 Feelings of Humans (Situations 1–9)

The feelings of the subjects in all nine situations and their averages are shown in Fig. 14. From the average score in Fig. 14 (upper leftmost), we found that the robot using the proposed system, which could use inducements suited to different situations, generated better psychological reactions than the robot using the conventional system. An analysis of variance (ANOVA) indicated significant differences among the three categories in both systems (F(2, 24) = 3.40, p < 0.01). Moreover, Tukey's multiple-comparison test revealed significant differences between neutral and negative for the proposed system (p < 0.05) and between positive and negative for the conventional system (p < 0.05).

Fig. 14

Comprehensive evaluation of human feelings: positive, negative, and neutral. The proposed system generated more positive feelings and fewer negative feelings than the conventional system for all situations (+ p < 0.1; * p < 0.05; ** p < 0.01)

In situation 3, the robot using the conventional system passed between the humans, so it generated much more negative feelings than the robot using the proposed system, which detoured behind H1 for sociality, as seen in Fig. 14c. Many subjects did not notice that the robot using the proposed system passed by them, so negative feelings were drastically reduced and positive feelings increased slightly. A Chi-squared test indicated significant differences between the two systems (χ2(2) = 7.01, p < 0.05), and residual analysis revealed statistical differences in the neutral (p < 0.05) and negative (p < 0.05) categories between the two systems. In situation 4, the robot actively touched H1, which was expected to discomfort him. However, the results showed that the robot using the proposed system scored slightly better than the robot using the conventional system, which just stopped and waited for a path to be made. This is because the subjects thought that they had obstructed the robot's path, so they felt apologetic toward the robot. The tendency was the same in situation 9. In situation 2, the conventional system did not decrease the velocity, but the proposed system provided inducements such as deceleration, speech, and arm contraction. Therefore, the robot using the proposed system had much higher positive and lower negative scores than the robot using the conventional system. A Chi-squared test indicated significant differences between the two systems (χ2(2) = 13.1, p < 0.01), and residual analysis revealed statistical differences in the positive (p < 0.05) and negative (p < 0.01) categories between the two systems. The responses in situations 5–8 had the same tendency: the proposed system decreased negative feelings and increased positive feelings. In situation 1, the subjects did not notice the robot, so there was little difference between the proposed and conventional systems. A Chi-squared test indicated significant differences between the two systems for situations 6–8, and residual analysis revealed statistical differences in the positive and negative categories between the two systems, respectively (each value is shown in Fig. 14).

7.2.2 Impressions of Robot Behavior (Situations 1–9)

We found that interactive navigation with multimodal inducement provided better feelings than conventional navigation. For deeper analysis to evaluate naturalness, sociality, and comfort, we quantified the impressions of the robot behavior, and Fig. 15 shows the results.

Fig. 15

Comprehensive evaluation of human impressions of the robot as natural, humanlike, friendly, altruistic, and safe. The proposed system generated more positive and fewer negative impressions than the conventional system for all situations, and impressions differ among situations (+ p < 0.1; * p < 0.05; ** p < 0.01)

In situation 3, the subjects felt that the robot using the conventional system was dangerous since it passed between the humans. On the other hand, they felt that the robot that passed behind them behaved naturally. A Chi-squared test indicated significant differences for the negative results (−) in the conventional system (χ2(4) = 29.2, p < 0.01) and the positive results (+) in the proposed system (χ2(4) = 24.0, p < 0.01). Ryan's procedure for nominal significance levels revealed statistical differences between friendly and safe (−) for the conventional system (p < 0.1) and between natural and the other items (+) for the proposed system (p < 0.1). In situation 4, the robot using the conventional system stopped, so it left almost no impression. Although the robot using the proposed system touched the subjects, they responded that it was humanlike and safe. A few subjects answered that the robot was not friendly, but this seems to be due to individual differences. The tendency was the same in situation 9. These results showed that robot touch during navigation can be acceptable in a suitable situation. The robot using the proposed system was rated as friendlier and safer than the robot using the conventional system in situation 5, friendlier and more altruistic in situation 6, friendlier and much more natural in situation 7, and friendlier and safer in situation 8. In situations 5–8, where the robot behaved interactively, the subjects felt that the robot using the proposed system was friendly and humanlike. In situation 2, the deceleration made subjects rate the robot as natural and friendly. In situation 1, the subjects did not notice the robot, so there was little difference between the proposed and conventional systems. Basically, this means that humans perceive a robot that does not obstruct them as natural. A Chi-squared test indicated significant differences for the positive results (+) in the conventional system (χ2(4) = 14.8, p < 0.01) and the positive results (+) in the proposed system (χ2(4) = 43.3, p < 0.01). Ryan's procedure for nominal significance levels revealed statistical differences for natural and safe (+) for the proposed system (p < 0.01).

7.2.3 Summary: Feelings and Impressions

Table 5 summarizes the positive and negative psychological responses to the robot using the proposed system, relative to those using the conventional system, as increased, decreased, or almost the same. The proposed interactive navigation system increased positive feelings and impressions and decreased negative ones. For example, making a path by touching a human (situation 4) decreased negative feelings and increased positive impressions. Detouring out of consideration for sociality (situation 3) decreased negative feelings and impressions and increased positive impressions. Dynamic adaptation (situations 5–8) also improved feelings and impressions.

Table 5 Human psychological responses to robot using proposed system compared with those to robot using conventional system

In summary, we found from Sects. 7.2.1–7.2.3 that deceleration, speech, and body contraction improved impressions of naturalness and friendliness. Social path planning, i.e., not passing between humans, improved naturalness. Re-planning in accordance with humans' reactions made the robot appear altruistic. It is notable that neither active nor passive touch degraded feelings or impressions.

8 Discussion and Required Future Work

By combining inducements such as speech and arm contraction with actions that risk negative psychological responses, such as deliberate contact, the robot could improve human psychological responses. For example, the robot says "thank you" to a human who makes way for it, which improves the robot's efficiency. We found that adequate inducement could encourage humans to change their cognitive, physical, and psychological states. In this study, we adopted a simple waypoint navigation method, in which the waypoint position and velocity were determined on the basis of a geometric model with reference to personal space. We found that the proposed interactive navigation system could generate a reasonable trajectory despite this simplicity. In the future, we need to investigate a more adaptive trajectory planner that can deal with different situations, cultures, and so on.
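As an illustration only (not the paper's planner; the personal-space radius r_ps, the clearance margin, and the speed rule are our assumptions), the following sketch shows how a waypoint and an approach speed might be derived geometrically from a circular personal space:

```python
# A minimal geometric sketch (our illustration, not the paper's planner) of
# waypoint and speed selection with reference to a circular personal space.
# The radius r_ps, the clearance margin, and the speed rule are assumptions.
import numpy as np

def detour_waypoint(robot, human, goal, r_ps=1.2, margin=0.3):
    """Place a waypoint beside the human, outside the personal space."""
    to_goal = (goal - robot) / np.linalg.norm(goal - robot)
    normal = np.array([-to_goal[1], to_goal[0]])  # perpendicular to the path
    side = 1.0 if np.dot(human - robot, normal) >= 0 else -1.0
    # Offset to the side of the human opposite the direct path.
    return human - side * (r_ps + margin) * normal

def approach_speed(robot, human, v_max=1.0, r_ps=1.2):
    """Scale the commanded speed down as the robot nears the personal space."""
    return v_max * min(1.0, np.linalg.norm(human - robot) / (2 * r_ps))

robot, human, goal = np.array([0.0, 0.0]), np.array([2.0, 0.2]), np.array([4.0, 0.0])
print(detour_waypoint(robot, human, goal))   # e.g. [ 2.  -1.3]
print(approach_speed(robot, human))          # speed reduced near the human
```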

To truly realize interactive navigation, the relationships among the human states to be changed (cognitive, physical, and psychological), the inducement modality (visual, auditory, and haptic), and the inducement strength must be carefully designed. We modeled them on the basis of observations of human–human relationships, since this study is a preliminary study of an interactive navigation framework with situation-adaptive multimodal inducement. Note that these relationships should change depending on advanced environmental factors, such as the presence of other robots and the movement of crowds, and on higher-level context, such as a quiet museum, a very noisy station platform, or a busy hospital. A more dynamic and theoretical definition and a learning function based on the robot's experience will be addressed in future work. Moreover, in this study, the experimental subjects were limited to third-year university students (male, aged 20–21 years) majoring in mechanical engineering to reduce the influence of individual differences as much as possible. Even so, the experimental results revealed meaningful tendencies. In the future, we need to test the system with a wider variety of subjects, advanced environmental factors, and contextual scenarios.
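To make this design space concrete, the following sketch (entirely hypothetical names and rules, not the authors' selector) encodes a mapping from the target human state and a simple context flag to an inducement modality and strength, with escalation on repeated attempts:

```python
# An entirely hypothetical sketch (names and rules are ours, not the authors')
# of encoding the relationship among the target human state, the inducement
# modality, and its strength, with escalation and a simple context rule.
from dataclasses import dataclass

@dataclass
class Inducement:
    modality: str   # "visual" | "auditory" | "haptic"
    strength: int   # 1 (weak) .. 3 (strong)

# Target state to change -> ordered escalation of candidate inducements.
ESCALATION = {
    "cognitive":     [Inducement("visual", 1), Inducement("auditory", 2)],
    "physical":      [Inducement("auditory", 2), Inducement("haptic", 3)],
    "psychological": [Inducement("visual", 1), Inducement("auditory", 1)],
}

def select_inducement(target_state: str, attempt: int, noisy_env: bool) -> Inducement:
    """Pick an inducement, escalating on repeated attempts; skip audio in noise."""
    options = [ind for ind in ESCALATION[target_state]
               if not (noisy_env and ind.modality == "auditory")]
    return options[min(attempt, len(options) - 1)]

# On a noisy station platform, speech is skipped and touch is used directly.
print(select_inducement("physical", attempt=0, noisy_env=True))
```

A learned, context-dependent version of such a table is precisely what the future work described above would replace this hand-crafted rule set with.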

9 Conclusion

In this research, we proposed an interactive navigation system in which the robot uses a situation-adaptive inducement selector that selects inducements on the basis of systematically categorized environmental and social contexts. Our system thus embodies the importance of context-based behavior. For testing the system, we identified crucial parameters that characterize any general navigation scenario. On the basis of these parameters, we designed a total of nine scenarios for testing our proposed system. We compared the performance of our system with that of a conventional system across all the scenarios. The experimental results showed that the proposed system provided an alternative strategy for tackling the robot freezing problem and produced safer and more efficient trajectories than the conventional system. Moreover, the robot using our proposed system was rated as more natural, humanlike, friendly, altruistic, and safe than the robot using the conventional system. Even though the evidence was limited to the robot planner and hardware design we used, as well as to certain scenes, contexts, and participants, we believe that the proposed system provides a fundamental framework for human-aware interactive navigation that uses multimodal inducements depending on the situation while considering comfort, naturalness, and sociality.

In the future, we need to develop a more adaptive trajectory planner. To determine an adequate inducement method, we need to obtain richer human and environmental information, such as age, gender, and facial expression, by applying scene- and human-understanding technologies. Moreover, to clarify the relationship between situations and inducements, a more dynamic and theoretical definition and a learning function based on the robot's experience will be addressed. To establish more robust and natural navigation, we will incorporate human dynamics models, such as negotiation and hesitation, into the path planner. Furthermore, we will conduct investigations with a wider variety of subjects and contextual scenarios.