Experimental Brain Research, Volume 195, Issue 3, pp 429–444

Measurement of instantaneous perceived self-motion using continuous pointing

  • Joshua H. Siegle
  • Jennifer L. Campos
  • Betty J. Mohler
  • Jack M. Loomis
  • Heinrich H. Bülthoff
Open Access
Research Article


Abstract

In order to optimally characterize full-body self-motion perception during passive translations, changes in perceived location, velocity, and acceleration must be quantified in real time and with high spatial resolution. Past methods have failed to effectively measure these critical variables. Here, we introduce continuous pointing as a novel method with several advantages over previous methods. Participants point continuously to the mentally updated location of a previously viewed target during passive, full-body movement. High-precision motion-capture data of arm angle provide a measure of a participant’s perceived location and, in turn, perceived velocity at every moment during a motion trajectory. In two experiments, linear movements were presented in the absence of vision by passively translating participants with a robotic wheelchair or an anthropomorphic robotic arm (MPI Motion Simulator). The movement profiles included constant-velocity trajectories, two successive movement intervals separated by a brief pause, and reversed-motion trajectories. Results indicate a steady decay in perceived velocity during constant-velocity travel and an attenuated response to mid-trial accelerations.


Keywords: Inertial cues · Self-motion perception · Passive transport · Continuous pointing · Spatial updating


Introduction

Accurate perception of one’s own motion through space is often required for the successful completion of goal-directed locomotor actions. Information specifying full-body self-motion comes from a variety of sources, including visual, auditory, proprioceptive, and vestibular cues. There is still much to be understood about how such information yields accurate and continuously updated knowledge of one’s location in space (spatial updating). In order to achieve a rich characterization of these processes, methods need to be developed in which perceived speed and location can be quantified in real time and in real-world units. We introduce a method in which recorded arm movements during a continuous pointing task are used to infer such values. Although this method is applicable to a variety of research questions, we use it here to study self-motion perception during passive transport, where the primary sources of information are inertial (acceleration-based). Such perception depends primarily on the detection of acceleration by the otolith organs in the inner ear, but may also involve other inertial mechanisms (e.g., the kidneys: Mittelstaedt 1996), somatosensory information such as pressure on the skin during acceleration, and vibrations, which can also serve as a direct indicator of traveling speed (Yong et al. 2007).

Three common experimental procedures used in previous studies of passive self-motion perception include collapsing the data from each trial into a single judgment, activating self-motion-perception systems during a separate response phase, and measuring perceived self-motion in arbitrary units. As will be demonstrated, our continuous-pointing method avoids many of the limitations inherent in these procedures, thereby increasing the richness and reliability of the information that can be captured.

Many previous studies have used discrete post-hoc judgments of perceived self-motion, most likely because the data are easier to obtain and the results easier to quantify. Examples of these judgments include: indicating the direction of movement (Benson et al. 1986), specifying the point at which a previously viewed target is passed (Harris et al. 2000; Lappe et al. 2007; Mittelstaedt and Glasauer 1991), making relative comparisons between two self-motion intervals as a percentage of one another (Sun et al. 2004a), pointing or saccading to a start location or a remembered target (Glasauer and Brandt 2007; Klier et al. 2007), and performing a two-interval forced-choice comparison (MacNeilage et al. 2007). Reducing each trial to a single judgment, however, results in a severe loss of information regarding how self-motion is perceived during the course of the movement itself.

In order to collect more detailed information about self-motion perception on a given trial, other investigators have favored post-hoc judgments in which a previously experienced trajectory is actively replicated (Berthoz et al. 1995; Glasauer et al. 2007; Grasso et al. 1999; Israël et al. 1993, 1997, 2005; Loomis et al. 1993; Mittelstaedt and Mittelstaedt 2001; Sun et al. 2004b) or a participant returns to a specified location following a series of rotations and translations (Loomis et al. 1993; Nico et al. 2002; Seemungal et al. 2007; Sholl 1989). Not only is this response in the same “modality” as the initial trajectory (meaning there is no conversion from, say, an experienced movement to a verbal judgment of displacement in meters), but the entirety of the response trajectory can be monitored, providing insight into how the initial trajectory was perceived. The main disadvantage with these measures, however, is that any errors in self-motion perception experienced during the initial trajectory may also be present in the response, thereby partially or fully canceling out any perceptual errors and potentially leading to the erroneous conclusion that self-motion has been perceived accurately. In addition, the process of having to remember the initially experienced trajectory adds noise and possibly biases to the judgments.

Other experimenters have instructed participants to continuously judge displacement or velocity while they are moving. Examples of such online judgments include tasks requiring participants to manipulate a joystick so that its displacement is proportional to instantaneous perceived velocity (Yong et al. 2007; Seidman 2008). Although such methods provide continuous information, the measurements are in arbitrary units, which makes it difficult to compare actual velocity (in m/s) with perceived velocity. Other online measures, such as eye tracking (Israël and Berthoz 1989; de Graaf et al. 1994; Angelaki 2004), can record gaze direction in real-world units, but bear a questionable relationship to perceptual experience because of the reflexive nature of the vestibulo-ocular reflex.

Here we introduce continuous pointing as a novel method to evaluate passive self-motion perception that avoids many of these disadvantages. This method requires participants to use their index finger to point continuously to the updated location of a previously viewed target while moving in the absence of vision. Based on the extent to which the arm is rotated around a vertical axis, we infer participants’ perceived target-relative location at each point during the trial using a simple equation (Fig. 1). The equation takes as its input the participant’s arm angle and outputs an estimate of perceived location. The resulting time series is then differentiated to yield an estimate of instantaneous perceived velocity in units of meters per second.
Fig. 1

General experimental technique. Participants begin each trial at one of several possible target-relative starting locations. After viewing the target, they move in the absence of vision along the specified travel path. While moving, participants point continuously to the mentally updated location of the target. Based on arm angle and the known value of ΔY, we compute ΔX, the perceived distance from the target, throughout each trial. As shown above, there may be a discrepancy between a participant’s perceived and actual location, indicating a misperception of self-motion

Not only does this method provide the same information afforded by the aforementioned post-hoc judgments (namely, the perceived distance traveled at the end of the trial), but it also yields information about a participant’s perceived velocity at every moment during a particular movement interval. Further, as will be demonstrated, continuous pointing enables perceived velocity to be measured in real-world units, allowing for a direct comparison between actual and perceived velocity. It is also possible to overlay the response patterns from two different trials by the same participant, thus making it easy to identify changes in self-motion perception that occur over the course of an experiment. In addition, simulating deviations from ideal pointing behavior reveals characteristic patterns in recovered perceived velocity. The presence of these patterns in the recorded time series may make certain perceptual errors easy to identify. For example, an ongoing debate centers on whether inertial information can be integrated accurately in the absence of other cues to self-motion (Berthoz et al. 1995; Seidman 2008). We can model the effects that leaky integration or other misperceptions have on the perceived velocity time series in order to make inferences about the specific errors made by the participants in our experiment.
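The leaky-integration idea mentioned above can be made concrete with a simple simulation. The sketch below (in Python; the time constant `tau`, the sampling rate, and the trajectory values are illustrative assumptions, not fitted parameters) shows how a leaky integral of acceleration produces exactly the kind of decay during constant-velocity travel that a perfect integrator would not:

```python
import numpy as np

def leaky_integration(acceleration, dt, tau):
    """Perceived velocity as a leaky integral of acceleration.

    tau (seconds) is a hypothetical decay time constant; a perfect
    integrator corresponds to tau -> infinity.
    """
    v = np.zeros_like(acceleration)
    for i in range(1, len(acceleration)):
        # Euler step: integrate acceleration while the stored velocity
        # estimate decays toward zero with time constant tau.
        v[i] = v[i - 1] + (acceleration[i] - v[i - 1] / tau) * dt
    return v

# Constant-velocity trajectory: 1 s of acceleration, then cruising.
dt = 0.0625
t = np.arange(0.0, 10.0, dt)
accel = np.where(t < 1.0, 1.2, 0.0)            # m/s^2
v_perceived = leaky_integration(accel, dt, tau=4.0)
# Perceived velocity peaks near the end of the acceleration phase and
# then decays during cruise, even though actual velocity is constant.
```

With `tau` set to infinity the simulated observer would hold the cruise velocity indefinitely; the leak makes perceived velocity drift toward zero whenever acceleration is absent, which is the signature this method can detect.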

The purpose of the current set of studies was to introduce and explore the advantages of using continuous pointing to measure the perception of passive whole-body translations. This extends the work of Campos et al. (2009) by applying continuous pointing to evaluate human passive self-motion perception in the absence of vision. Results from previous literature investigating the ability of human observers to accurately perceive self-motion with inertial information alone do not always agree. While studies that employ actively replicated trajectories as the primary response have suggested that the perception of traveled distance is accurate when only inertial cues are available (Israël et al. 1993; Berthoz et al. 1995), studies using continuous online judgments have demonstrated high-pass characteristics of the response to otolithic information (Telford et al. 1997; Seidman et al. 1998; Paige and Seidman 1999; Seidman 2008). This discrepancy may be due to the specific characteristics of inertial self-motion perception that are revealed using different methodologies. Because the continuous-pointing method will allow us to access participants’ subjective location in meters at every point during the trial, this method offers a more detailed assessment of how passive translations are perceived, possibly reconciling previous results.

Variations of continuous pointing have been used in the past to measure perceived egocentric distance (Loomis et al. 1992) and perceived rotations (Ivanenko et al. 1997; Siegler et al. 1999), but never before as a direct measure of perceived translation. Studying perceived passive translational velocity with continuous pointing is an entirely new application for this type of response. If this measure truly reflects participants’ experiences, continuous pointing has the potential to lead to a much richer understanding of the relationship between the sensory cues used during self-motion perception and what precisely is being perceived.

The continuous-pointing method requires the assumption that the target location is “spatially updated” while the participant moves, whether or not the perception of this movement is accurate. Loomis and his colleagues (Loomis et al. 2002; Loomis and Philbeck 2008) have reasoned that, when a viewed target ceases to be visible, its perceived location is temporarily represented by a “spatial image.” When a person moves relative to the environment, the egocentric coordinates of the spatial image are updated, allowing the person to know the target’s present direction relative to himself or herself. A conversion of the pointing data into the person’s perceived location is based on three additional assumptions: (1) visual perception of the initial target location is accurate, as would be expected given the full-cue viewing conditions of the current experiments, (2) the direction vector of perceived self-motion is initially aligned with that of the physical motion and exhibits little or no veer, and (3) neither spatial updating of the target direction nor the consequent pointing of the arm introduces any additional systematic error beyond that associated with perceived self-motion. These three assumptions imply that, at any moment, the spatial image is coincident with the target, regardless of the perceived self-position (even if inaccurate).

There is abundant evidence supporting assumption 1, much of it coming from research by the fourth author, who has used spatial updating in conjunction with normal walking to measure visually perceived distance (for a review, see Loomis and Philbeck 2008). When egocentric distance cues are plentiful, people are able to perform a variety of “visually directed actions” with little systematic error. Visually directed actions include actions such as walking, pointing, and throwing to a target that are initiated after an initial preview of the target and then carried out in the absence of further perceptual input about the target’s location. For example, blind walking toward targets has been found to be accurate when averaged over participants, even when the initial targets are 20–25 m distant (see Loomis and Philbeck 2008 for a summary of these data). Systematic walking errors are also quite small under full-cue viewing, even when participants move toward the targets along multi-segment indirect paths (Loomis et al. 1998; Philbeck et al. 1997). More pertinent to the present study are the results of one experiment (Loomis et al. 1992, Experiment 3) in which participants initially viewed a target up to 5.7 m distant and then walked blindly along various straight paths while attempting to continuously point at the target. The correct pointing directions ranged from 18° to 115° (relative to straight ahead), and the actual pointing directions, averaged over participants, exhibited errors that were all less than 9° and generally much smaller. This slight systematic error is consistent with the accurate perception of the distance and direction of the initially viewed target, as well as with accurate perceived self-motion, accurate spatial updating of the spatial image, and accurate pointing of the arm. Recent research by our group (Campos et al. 2009) has obtained the same result with pointing during blind walking.
Importantly, when egocentric distance cues are reduced, visually directed action indicates the same systematic errors in perceived distance that other measures of perceived distance indicate (Hutchison and Loomis 2006; Loomis et al. 1998; Philbeck and Loomis 1997; Philbeck et al. 1997; Sahm et al. 2005; Wu et al. 2004).

With regard to assumption 2 (no perceived veer during passive transport over a straight path), we are aware of no direct evidence for or against perceived veer. When participants attempt to walk straight paths without vision, they veer idiosyncratically, with considerable variation on different occasions (Kallie et al. 2007). Kallie et al. (2007) concluded from three experiments and modeling of their data that physical veer while attempting to walk straight results from motor noise. Based on their work, we see no reason to expect that people perceive themselves to be veering when actually walking straight. Because passive transport does not even involve motor noise, there is no reason to expect that people perceive veer when passively transported over a straight path.

The assumption that spatial updating introduces no systematic error (part of assumption 3) is supported by some of the same research supporting assumption 1. Several studies have shown that when participants walk blindly to a target that has been seen or heard prior to walking, they move to the same terminal location whether walking straight to the target or along a multi-segment indirect path (Loomis et al. 1998, 2002; Philbeck et al. 1997). This congruence of the terminal locations for different paths holds whether the initial target location is perceived correctly or incorrectly. When there are perceptual errors, the terminal locations differ from the target location in a manner indicating initial errors in perceived distance. This result signifies that, whether the initial target position is perceived correctly or incorrectly, perceived self-motion during walking and spatial updating are performed without systematic error. In the current study, passive transport produces errors in perceived self-motion, but we assume that the spatial image is correctly updated relative to the perceived self-location at any given moment. As for the other part of assumption 3, regarding pointing, we present as evidence the accuracy of pointing observed during blind walking in the two aforementioned studies (Loomis et al. 1992; Campos et al. 2009) and the results of another study on pointing before and after blind walking (Fukusima et al. 1997).

In Experiment 1 of this study, a remote-controlled robotic wheelchair transported participants within an open, fully tracked room. We used trajectories with different velocity profiles and target-relative starting locations to test the robustness of the method. The most basic trajectories included only continuous forward motion, the movement type used in most self-motion perception studies, which would allow us to compare our results to those of the previous literature. We also included trajectories with pauses and movement reversals, patterns that would allow us to investigate perceptual errors during more complex motion profiles, something for which we believe our method is ideally suited.

In Experiment 2, participants sat at the end of a large robotic arm (MPI Motion Simulator) which translated them back and forth while suspended approximately 3 m off the ground. Here, we used a new transport apparatus, new trajectories, and more controlled conditions to further explore specific features of self-motion perception observed in the first study. The trajectories were similar to those used in the first experiment, but scaled to fit the constraints of the MPI Motion Simulator. Overall, the results clearly reveal distinct errors in self-motion perception that could not have been as easily detected using other methods.

Experiment 1

Materials and methods


Participants

Five males and one female participated in Experiment 1 (mean age: 25.3 ± 3.1 years, age range: 22–30 years). The five male participants were right-handed and the female was left-handed according to self-report. All participants had self-reported normal or corrected-to-normal vision and self-reported normal vestibular functioning. Participants were recruited from the Max Planck Institute Subject Database and were naïve with respect to the purpose of the study. This research was performed in accordance with the ethical standards specified by the 1964 Declaration of Helsinki. Participants received compensation of 12€ following completion of the 1.5-h experimental session.

Stimuli and apparatus

Passive transport vehicles
Experiment 1 took place in the Cyberneum Tracking Lab, a 12 × 15-m free walking space (see: http://www.cyberneum.de/). Participants were transported by an Autonomous Modular Vehicle (BlueBotics, Lausanne, Switzerland) modified with an ergonomic seat (Recaro, Kirchheim unter Teck, Germany) to create a robotic wheelchair (Fig. 2a). The experimenters had exclusive wireless control over the behavior of the wheelchair and participants had access to an emergency stop button near their left hand. An experimenter walked beside the wheelchair for the duration of the experiment to ensure that the participant maintained a safe distance from all walls. Participants wore sound-isolating earbuds which played recorded wind sounds to mask ambient environmental noise and the noise from the wheelchair. The natural vibrations of the wheelchair could still be felt by the participants.
Fig. 2

Experiment 1 methods. a A participant riding in the robotic wheelchair from Experiment 1. b The pointing device used in Experiment 1. c Velocity profiles for all trajectories used in Experiment 1. Maximum speeds for Experiment 1 trajectories were 1.2 or 1.65 m/s. Top Constant-velocity trajectories. Middle Two-interval trajectories. Bottom Reverse-motion trajectories

Capture systems

Participants wore a helmet and held a custom-built pointing device in their right hand, both of which were fitted with reflective markers (Fig. 2b). Sixteen infrared cameras (Vicon MX13, Vicon Motion Systems, Oxford, UK) monitored the locations of these reflective markers at a rate of 484 Hz. Software (veLib, Tübingen, Germany) recorded the locations and orientations of the helmet and the pointer approximately 16 times per second. The index finger was secured to the pointing device with medical tape to ensure that pointing direction was consistent throughout the experiment and to reduce hand fatigue.


Participants pointed toward a white Styrofoam ball, 16 cm in diameter, mounted on a stand that was adjusted to the seated shoulder height of each individual participant.


Procedure

At the start of each trial, the wheelchair moved participants to a starting location in one corner of the Tracking Lab. Participants initiated the trial by pointing with their right hand at the target, which was located in the center of the room. Once participants’ hands were steady, an experimenter pressed a button which initiated motion capture. The tracking system collected 2 s of data, at which point the wheelchair automatically began to move along one of several predefined trajectories (each described below). Participants received instructions to close their eyes as soon as the movement was initiated and to point continuously thereafter to the location of the target. A second experimenter in the Tracking Lab ensured that participants’ eyes were closed throughout the duration of the trial. Once the movement trajectory had been delivered, a second button press by the experimenter moved the wheelchair to the next starting location. This portion of the trial always involved some combination of rotational and translational movements, which signaled to participants that they could lower their pointing arm and also caused a mild disorientation. Participants were asked not to open their eyes, however, until the next starting location was reached; thus they received no feedback about the accuracy of their pointing behavior. There were four possible target-relative starting locations, either 3 or 4 m behind the target and with a 1.3 or 3.6 m lateral displacement (Fig. 1). After the completion of the experiment, participants filled out a short questionnaire. They were asked to describe the number of unique motion profiles they experienced and the shape of each profile.

Baseline accuracy

Baseline accuracy of the method was computed based on the data from a previous study in which 12 participants completed an identical task while being pushed in a conventional wheelchair with their eyes open (Campos et al. 2009). The trajectories experienced by these participants were similar to the constant-velocity trajectories described below, and had a peak velocity of approximately 1.3 m/s.

Practice trials

Participants completed a total of 36 practice trials, which also yielded data for a separate study. The first four trials consisted of walking with eyes open without pointing. During the final sixteen practice trials, participants pointed while walking around the room with their eyes closed. At the end of each trial, participants were allowed to look at their final arm position to receive feedback about their pointing performance. We wanted to be sure that participants had sufficient practice pointing with their eyes closed before the test trials on the wheelchair began.


Trajectories

There were three basic types of velocity profiles that participants experienced while in the wheelchair (see Fig. 2c): (1) constant-velocity trajectories, in which a brief acceleration phase was followed by a period of constant-velocity travel and a deceleration phase, (2) two-interval trajectories, which consisted of two consecutive movement intervals with a short stop in between (within a single trial), and (3) reverse-motion trajectories, in which the direction of travel reversed twice during a single trial (forward, then backward, then forward again). Each basic type was presented at two velocity levels: (1) peak velocity of 1.0 m/s or (2) peak velocity of 1.6 m/s. The trajectories varied in length between 5 and 6 m and passed within either 1.3 or 3.6 m of the target at the point of nearest passage. When different target-relative start locations are taken into account, there were a total of 24 unique trajectories (3 types × 2 speed levels × 4 target-relative start locations). Each combination was repeated 4 times, for a total of 96 trials per participant. Trajectory types were delivered in a pseudorandom order, which ensured that participants were unable to predict what the next trajectory would be.
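The factorial design described above can be sketched as a trial schedule (Python; the condition labels are illustrative, and the actual pseudorandom schedule used in the experiment is not specified beyond ruling out predictability):

```python
import itertools
import random

# 3 profile types x 2 peak speeds x 4 target-relative start locations.
profiles = ["constant_velocity", "two_interval", "reverse_motion"]
peak_speeds = [1.0, 1.6]                             # m/s
starts = [(3, 1.3), (3, 3.6), (4, 1.3), (4, 3.6)]    # (m behind, m lateral)

unique_trajectories = list(itertools.product(profiles, peak_speeds, starts))

trials = unique_trajectories * 4        # 4 repetitions -> 96 trials
random.shuffle(trials)                  # pseudorandom presentation order
```

The full crossing yields the 24 unique trajectories reported above, and four repetitions give the 96 trials per participant.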

Data analysis

All analyses were performed off-line in Matlab (MathWorks, Inc., Natick, MA). To account for differences in a participant’s arm orientation and the orientation of the pointer, the signed angular pointing errors in azimuth and elevation were computed over the first 0.3 s of each trial, while the participant’s eyes were open. These errors were then subtracted as a constant from the remainder of the data for that trial. The mean signed elevation error was 19°, as the tip of the pointer was angled slightly upward in relation to the declination of participants’ arms. The mean signed azimuth error was 3°, indicating that participants tended to begin by pointing slightly ahead of the target.
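The constant-offset correction can be sketched as follows (Python rather than the authors' Matlab; the function name, the synthetic trial, and the zero target azimuth are ours, chosen only to illustrate the subtraction):

```python
import numpy as np

def remove_constant_offset(azimuth_deg, t, window_s=0.3, target_deg=0.0):
    """Subtract the mean pointing error measured during the initial
    eyes-open window, treating it as a constant offset for the trial."""
    offset = azimuth_deg[t < window_s].mean() - target_deg
    return azimuth_deg - offset

# Synthetic trial: the arm holds a 3 deg azimuth error while the eyes
# are open, then sweeps as the (simulated) chair moves.
t = np.arange(0.0, 2.0, 0.1)
azimuth = np.where(t < 0.3, 3.0, 3.0 + 10.0 * (t - 0.3))
corrected = remove_constant_offset(azimuth, t)
```

After correction, the trace starts at the target direction and the 3° offset is removed from every subsequent sample.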

For each trial, each sampled pointing angle was converted to an estimate of perceived location. As shown in Fig. 1, the perceived X coordinate (relative to the left-most starting point) is given by the equation Xperceived = Xtarget – ΔX. Here ΔX = ΔY tan θ, where θ equals the recorded azimuth of the arm (i.e., rotation around a vertical axis) and ΔY is the distance between the target and the travel path. In the depicted example, ΔX is positive, but ΔX turns negative after perceived passage of the target. This conversion relies on the four assumptions presented earlier. Thus, the pointing measurements resulted in a sequence of estimated perceived locations during each physical trajectory. Differentiating perceived location with respect to time yielded a measure of continuous perceived velocity. Velocity data were low-pass filtered (first-order Butterworth filter, cutoff frequency of 1 Hz) to reduce noise. Most of the subsequent analyses were performed on the measured perceived velocity.
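The conversion and differentiation steps can be sketched as follows (Python in place of the authors' Matlab; a first-order exponential smoother stands in for the paper's first-order Butterworth filter, and all names and trajectory values are illustrative). With veridical pointing during 1 m/s travel toward a target 4 m ahead and 1.3 m to the side, the recovered location reproduces the actual path:

```python
import numpy as np

def perceived_kinematics(theta_deg, t, delta_y, x_target, cutoff_hz=1.0):
    """Convert recorded arm azimuth into perceived location and velocity.

    Implements X_perceived = X_target - dY * tan(theta).  A first-order
    exponential smoother approximates the paper's first-order
    Butterworth low-pass filter.
    """
    theta = np.deg2rad(theta_deg)
    x_perceived = x_target - delta_y * np.tan(theta)
    v = np.gradient(x_perceived, t)            # perceived velocity, m/s
    # First-order low-pass at cutoff_hz (exponential smoothing).
    dt = t[1] - t[0]
    alpha = dt / (dt + 1.0 / (2.0 * np.pi * cutoff_hz))
    v_smooth = np.empty_like(v)
    v_smooth[0] = v[0]
    for i in range(1, len(v)):
        v_smooth[i] = v_smooth[i - 1] + alpha * (v[i] - v_smooth[i - 1])
    return x_perceived, v_smooth

# Veridical pointing during 1 m/s travel: target 4 m ahead, 1.3 m lateral.
t = np.arange(0.0, 3.0, 0.0625)                # ~16 samples per second
x_actual = 1.0 * t
theta_deg = np.rad2deg(np.arctan((4.0 - x_actual) / 1.3))
x_p, v_p = perceived_kinematics(theta_deg, t, delta_y=1.3, x_target=4.0)
```

When pointing is veridical, the recovered location equals the actual location and the recovered velocity is a constant 1 m/s; deviations from this identity are what the method interprets as misperceived self-motion.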

Trials in which tracking data were missing for more than 10% of the trial duration (or for more than half of the window used to establish the initial pointing error) were excluded from the analysis.


Results

Baseline accuracy

Before analyzing pointing accuracy in the absence of vision, it was necessary to establish how accurately the method was able to capture “perceived velocity” when participants pointed with their eyes open. With eyes open, of course, participants can point quite accurately toward the target even if they perceive self-motion inaccurately or not at all. To quantify this baseline accuracy, we subtracted actual velocity on individual trials (recorded from head-tracking data) from the computed “perceived velocity” (obtained according to the method described above). The average of all trials is shown in Fig. 3a. When pointing with eyes open while seated in a wheelchair, the velocity recovered by analyzing arm movements did not deviate from actual velocity by more than 0.1 m/s, except at the very beginning of the trial, when participants tended to move their arms faster than appropriate.
Fig. 3

Experiment 1 results. a Baseline accuracy for Experiment 1 obtained during eyes-open pointing. Baseline accuracy was computed for individual trials by subtracting actual velocity (as recorded from head-tracking data) from perceived velocity when pointing with eyes-open. The average of all 364 trials is plotted in black, along with between-subject standard deviation (gray region). b Underperception of distance traveled at the end of the trial when pointing in the absence of vision. Perceived location (based on arm angle at end of trial) subtracted from actual location at end of trial, average of all subjects for a given trajectory type. Light gray bars represent 5 m trials and dark gray bars represent 6 m trials. Plotted with 95% confidence intervals (two-tailed t-test). c Evidence of start-point dependence of the method. Average measured perceived velocity at 1.5 s into 8 unique constant-velocity trajectories in Experiment 1: 5 m or 6 m paths with target distance of 1.3 m (1) or 3.6 m (2) and speed level of 1.2 m/s (gray bars) or 1.65 m/s (black bars). In df, dotted black line represents actual velocity, solid black line represents the average for all subjects when pointing in the absence of vision, and the thin gray lines represent individual subject averages. d Individual participant averages for constant-velocity trajectories. Average data from 1.65 m/s and 6 m trajectories are shown. e Individual participant averages for two-interval trajectories. f Individual participant averages for a reverse-motion trajectory
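The baseline computation amounts to an element-wise subtraction followed by averaging across trials. A minimal sketch with synthetic data (the trial count, noise level, and velocity ramp are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical eyes-open trials resampled onto a common time base:
# each row is one trial's velocity trace.
actual = np.tile(np.linspace(0.0, 1.2, 50), (10, 1))       # head tracking
perceived = actual + rng.normal(0.0, 0.05, actual.shape)   # from pointing

error = perceived - actual           # baseline error per trial
mean_error = error.mean(axis=0)      # average across trials (cf. Fig. 3a)
sd_error = error.std(axis=0)         # spread across trials (gray region)
```

Averaging the per-trial error traces gives the mean baseline curve, and the across-trial standard deviation gives the shaded uncertainty band around it.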

Constant-velocity trajectories

A prominent pattern in participants’ pointing behavior during constant-velocity trajectories was the underperception of distance traveled at the end of the constant-velocity trials. By analyzing participants’ arm angle at the end of the trial, we compared their final perceived location to their actual location. For all participants and almost all trials, the total perceived distance traveled was less than the actual distance (Fig. 3b). The underperception of distance traveled is significant for all distance/speed combinations except 4 m, 1.0 m/s peak velocity trials (3 m, 1.0 m/s peak velocity trials: p = 0.0022; 3 m, 1.6 m/s peak velocity trials: p = 0.0022; 4 m, 1.0 m/s peak velocity trials: p = 0.3636; 4 m, 1.6 m/s peak velocity trials: p = 0.0476; Wilcoxon rank sum test on participant means). The distribution of perceived traveled distances is not Gaussian, which is reflected in the asymmetrical confidence intervals.

By taking advantage of the rich information provided by this data set, the basis for this underperception becomes immediately apparent. When we examine perceived velocity averaged over subjects, over the entire course of a constant-velocity trial, we see a steady decrease in perceived velocity during periods of constant-velocity travel (Fig. 3d). For all of the constant-velocity trajectories (varying peak speed and varying location), the average perceived velocity at the beginning of any constant-velocity interval was significantly higher than at the end (Table 1).

Two-interval trajectories

Examining more complex trajectories provides even greater insight into the capabilities of the inertial system to effectively perceive and update changes during passive movements. In the two-interval trajectories, participants’ subjective responses to the second velocity profile were diminished compared to their responses to the first profile. The maximum perceived velocity during the second interval is a smaller proportion of the maximum actual velocity (Fig. 3e). We used proportional velocity because the maximum actual velocity of the second interval was less than that of the first for certain trajectories. For all trajectories, perceived velocity as a proportion of actual velocity decreased from an average of 78% for the first velocity peak to 52% for the second velocity peak. This change is significant for both the 1.0 m/s and the 1.6 m/s peak-velocity trajectories (two-tailed paired t-test, t(5) = 3.41, p = 0.0189).

Reverse-motion trajectories

It is clear that three participants robustly perceived the movement reversal, whereas the other three participants’ recorded perceived velocities were near zero during this interval, despite the fact that their actual velocities were near –1 m/s (Fig. 3f).

Start-point and velocity dependence

If participants perceived identical trajectories in the same way, regardless of the target-relative starting position, we would expect the method to recover the same pattern of perceived velocity for each. There was, however, a small but significant difference between recovered perceived velocities on identical trajectories with different target-relative starting locations (Fig. 3c). In constant-velocity trials, a two-way (path length × target distance) repeated-measures ANOVA on perceived velocity 1.8 s into the trial revealed a significant main effect of path length (F(1,145) = 8.619, p = 0.0324), but no main effect of target distance (F(1,145) = 0.087, p = 0.6782). The mean values of perceived velocity for the different trajectories are shown in Fig. 3c. Overall, the different target-relative starting locations scaled the magnitude of perceived velocity, rather than changing the general response pattern.


In Experiment 1, we used spatial updating and continuous pointing to measure the perceived self-velocity of participants undergoing linear translations. As expected, the baseline measures with eyes open showed that pointing provides an accurate measure of the participant’s actual location along the path. The important results are obtained when the method is employed as participants are moved passively with eyes closed. Continuous pointing requires that participants use their perceived self-velocity to update the egocentric direction to the target and point accordingly. The method revealed that participants perceived a decay in self-velocity during constant-velocity travel, perceived lower velocity during the second of two identical velocity profiles, and often failed to perceive a reversal in velocity. We designed Experiment 2 with two goals in mind: (1) to test the robustness of the method using a new apparatus (MPI Motion Simulator) with a new pointing configuration, and (2) to see whether the same perceptual errors occur under more controlled conditions. Trajectories used in Experiment 2 were qualitatively similar to those used in Experiment 1, but included three speed levels instead of two. In addition, a pause of variable length was included in the middle of two-interval trials; varying the pause length may reveal the time course of the underperception of the second acceleration. By using the MPI Motion Simulator instead of a robotic wheelchair, we were able to mask vibrations and program movement trajectories more precisely.

Experiment 2



Twelve participants completed Experiment 2, one of whom was female (mean age: 25.5 ± 3.3, age range: 20–30 years). All participants in Experiment 2 were right-handed and had self-reported normal or corrected-to-normal vision and self-reported normal vestibular functioning. They were recruited from the Max Planck Institute Subject Database and were naïve with respect to the purpose of the study. This research was performed in accordance with the ethical standards specified by the 1964 Declaration of Helsinki. Participants received compensation of 12€ following completion of the 1.5 h experimental session.

Stimuli and apparatus

Passive transport vehicle
Participants were seated on the Max Planck Institute (MPI) Motion Simulator which consists of a large anthropomorphic robot arm (KUKA Robotics, Augsburg, Germany) customized for use in scientific research (Teufel et al. 2007; see Fig. 4a). Its six joints allow the MPI Motion Simulator to deliver motion trajectories with up to six degrees of freedom (X, Y, and Z translations and pitch, roll, and yaw rotations). During the experiment, participants wore noise-canceling headphones which played recorded wind sounds to mask the noise produced by the MPI Motion Simulator. The trajectories included random up-and-down motions of low amplitude (maximum power between 5–10 and 15–25 Hz, rms amplitude = 0.9 mm) to mask the vibrations of the apparatus. The experiment took place in complete darkness while experimenters monitored participants’ behavior via a live infrared video feed in a nearby control room.
Fig. 4

Experiment 2 methods. a A participant on the MPI Motion Simulator used in Experiment 2. b The pointing device used in Experiment 2. c The six target-relative start locations for Experiment 2. On each trial, subjects began at one of the squares and pointed toward the circle (the target). Arrows indicate the direction of travel. Although there were only two physical starting locations (1.25 m to the right or left of the central target), the presence of three targets created a total of six target-relative starting locations. For example, for a trial beginning on the right, the target-relative starting location depended on the target which was active on that particular trial. Targets were presented in a pseudorandom order, and only one target was active on a given trial. d Velocity profiles for all trajectories used in Experiment 2. Top Constant-velocity trajectories. Maximum speeds were 0.54, 0.64, or 0.74 m/s. Middle Two-interval trajectories. Maximum speeds were 0.34, 0.44, or 0.54 m/s. The pause prior to the second interval was either 0.05, 0.25, or 0.5 s. Bottom Reverse-motion trajectories. Maximum speeds were the same as in two-interval trajectories, except that the speed only varied during the period of motion reversal

Motion-capture systems

Three infrared cameras (OptiTrack, NaturalPoint, Inc., Corvallis, Oregon) were mounted on the robot arm above the participant’s right shoulder. The cameras tracked the location and orientation of reflective markers that were attached to an arm brace at a rate of 83 Hz (Fig. 4b). The precise joint angles of the MPI Motion Simulator, which specify the exact movement profile executed by the robot, were recorded at the same rate.


The targets in Experiment 2 were three circular LEDs, 5.5 cm in diameter, spaced 0.75 m apart and mounted 3.36 m off the ground in front of the participant. This was the approximate shoulder height of participants seated in the MPI Motion Simulator. These targets remained at the same height for all participants. The targets were viewed with the room fully illuminated, so as to allow accurate perception of the locations of the targets.


Trials began when the room was illuminated by an ambient light. Five seconds passed before one of the three target lights turned on. When ready, participants pointed to the illuminated target light with their right hand, then pressed a button with their left hand to initiate the movement. Both the target light and the ambient light were extinguished just prior to movement initiation. Participants were allowed to keep their eyes open if they chose to do so, as the room was completely dark. The MPI Motion Simulator then moved the participant along one of several pre-defined trajectories (each described below). A beep at the end of the trial prompted participants to lower their arm. The room was not re-illuminated until a short, back-and-forth “disorienting” trajectory was delivered to bring the participants to the next starting location. There were two physical starting locations, either 1.25 m to the right or 1.25 m to the left of the middle target. The two physical starting locations and three target locations resulted in a total of six target-relative starting locations, one of which was used pseudorandomly on each trial (Fig. 4c).

After the completion of the experiment, participants filled out a short questionnaire. They reported the number of unique trajectories they felt they experienced, the shape of those velocity profiles, the relative difficulty of the task, and whether they felt motion sickness at any point during the experiment.

Practice trials

The first four trials of the experiment were completed with the ambient light on, both to allow participants to practice and to establish a baseline pointing accuracy. Four additional practice trials were completed in darkness to allow participants to adjust to the pitch-black conditions.


As in Experiment 1, there were three basic types of velocity profiles that participants experienced (see Fig. 4d). The trajectories in Experiment 2 were slower and shorter than those in Experiment 1, but were more controlled (i.e., there was much less trial-to-trial variation for the same trajectory). The MPI Motion Simulator can monitor its own location much more accurately than the robotic wheelchair, allowing for more precise control over its position and velocity.

Peak velocities of the trajectories in Experiment 2 ranged between 0.34 and 0.74 m/s and all trajectories were 2.5 m long. Each type of velocity profile was experienced at three different peak velocities. Unlike in Experiment 1, a variable pause was inserted before the second acceleration of the two-interval trajectories as a way of testing the effect of pause duration on the perception of the second movement interval. Taking into account all the variations, there were a total of 90 unique trajectories. All constant-velocity trajectories (18 total) were repeated twice, yielding a total of 108 experimental trials. Trajectories were delivered in a pseudorandom order.
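The stated counts follow directly from the design factors. The decomposition below into trajectory types is our reading of the design (six target-relative starting locations, three speeds, and three pause lengths for two-interval trials), which the text does not spell out explicitly:

```python
# Check the trial counts from the design factors described above.
start_locations = 6       # 2 physical starts x 3 targets
speeds = 3                # three peak-velocity levels
pauses = 3                # pause lengths, two-interval trials only

constant_velocity = start_locations * speeds            # 18
two_interval = start_locations * speeds * pauses        # 54
reverse_motion = start_locations * speeds               # 18

unique = constant_velocity + two_interval + reverse_motion
total = unique + constant_velocity   # constant-velocity trajectories run twice
print(unique, total)  # 90 108
```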

Data analysis

All analyses were performed off-line in Matlab (MathWorks, Inc., Natick, MA), and were the same as in Experiment 1 (see above).


Perceptual errors were modeled in Matlab by simulating a participant moving with a specific velocity profile and inverting the equation for computing perceived location, yielding pointing angles along a path for a given target location (θ = arctan(ΔX/ΔY)). Here, ΔX and ΔY represent the distances from the subjective target location, which, unlike in our typical analysis, is not assumed to correspond to the actual target location. Perceived location and velocity were then computed using the forward equation (ΔX = ΔY tan θ), under the assumptions discussed in the introduction (a correctly perceived target location and a non-veering travel path). In this way, the effects on pointing behavior of errors other than misperceived self-motion can be simulated. For example, to simulate an error resulting exclusively from a misperceived target distance at trial onset, we place the target, say, 50% closer to the participant along the line of sight and compute changes in pointing angle toward this location as the participant moves at constant velocity along a straight path. The resulting angles are then used to compute perceived location under the assumption that the participant perceived the target in its true location. Using this type of analysis, separate simulations were performed to determine the specific consequences of a misperception of initial target location, a misperception of heading (travel direction), and a misperception of translational velocity.
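The inverse-then-forward procedure just described can be sketched as follows (not the authors' Matlab code; the geometry — travel along the x-axis, target at (x_t, y_t) — and all parameter values are illustrative):

```python
import numpy as np

def simulate_target_misperception(scale, x_t=3.0, y_t=1.25, v=1.0, dt=0.05, T=3.0):
    """Simulate pointing toward a target perceived `scale` times as far along the
    line of sight, then recover 'perceived' position assuming the TRUE target."""
    t = np.arange(0, T, dt)
    x = v * t                                      # actual positions
    # Inverse step: pointing angles toward the misperceived target
    theta = np.arctan2(scale * x_t - x, scale * y_t)
    # Forward step: perceived position under the true-target assumption
    x_rec = x_t - y_t * np.tan(theta)
    v_rec = np.gradient(x_rec, dt)
    return t, x_rec, v_rec

# A target perceived 50% closer yields a uniformly doubled recovered velocity,
# i.e., indistinguishable from a scaled misperception of self-velocity.
t, x_rec, v_rec = simulate_target_misperception(0.5)
```

Because the recovered velocity is simply rescaled, this simulation illustrates why a misperceived target distance is indistinguishable from a scaled velocity misperception, as discussed in the general discussion.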


Data removal

The responses of four participants indicated almost no detection of changes in velocity. Following the experiment, these participants reported being confused throughout the experiment. Because their data differed substantially from those of the other eight participants, their results were excluded from all analyses and figures except Fig. 5g. Their inability to complete the task is worth noting, however, as it highlights the individual differences observed in this task.
Fig. 5

Experiment 2 results. a Baseline accuracy for all participants in Experiment 2 obtained during eyes-open pointing. Baseline accuracy was computed for individual trials by subtracting actual velocity (as recorded from head-tracking data) from perceived velocity during pointing with vision (obtained as described in Fig. 4). The average of ten trials is plotted in black, along with between-subject standard deviation (gray region). b Underperception of distance traveled at the end of the trial. Perceived location (based on arm angle at end of trial) subtracted from actual location at end of trial, averaged over all subjects for a given trajectory type. Plotted with 95% confidence intervals (two-tailed t-test). In c–g, the dotted black line represents actual velocity, the solid black line represents the average for all subjects when pointing in the absence of vision, and the thin gray lines represent individual subject averages. c Individual participant averages for constant-velocity trajectories. d Individual participant averages for two-interval trajectories (signed mean). e Individual participant averages for two-interval trajectories (unsigned mean). f Individual participant averages for a reverse-motion trajectory (subject group 1). g Individual participant averages for a reverse-motion trajectory (subject group 2). h Evidence of start-point dependence of the method. Measured perceived velocity at 3.98 s into 18 unique constant-velocity trajectories in Experiment 2: beginning left (L) or right (R) of the target with the active target –0.75 m (1), 0 m (2), or +0.75 m (3) from the midpoint of the trajectory and a speed level of 0.54 m/s (light gray bars), 0.64 m/s (dark gray bars), or 0.74 m/s (black bars)

Baseline accuracy

All biases in baseline accuracy (actual velocity on eyes-open-pointing trials subtracted from perceived velocity) were greatly reduced in Experiment 2 compared to Experiment 1 (see Fig. 5a compared to Fig. 3a). This most likely was due to the more controlled movements produced by the MPI Motion Simulator. In Experiment 1, the mean unsigned difference between actual and perceived velocity for the first 3 s of all eyes-open trials was 0.08 ± 0.04 m/s. In Experiment 2, the same statistic for the first 20 s of reverse-motion eyes-open trials, for example, was 0.03 ± 0.03 m/s.

Constant-velocity trajectories: under-perception of distance traveled

As in Experiment 1, the distance traveled at the end of the constant-velocity trials was under-perceived for all subjects in almost all trials (Fig. 5b). There was also a steady decrease in perceived velocity during periods of constant-velocity travel, although this effect is not quite as prominent as in Experiment 1 (Fig. 5c). Nevertheless, average perceived velocity at the beginning of any constant-velocity interval was higher than at the end. The difference between these time points is significantly greater than zero for all trajectories in Experiment 2 (see Table 1). Significance remains when only trials beginning to the right of the target are considered. These are the trials in which there were the fewest biomechanical constraints on arm movements.
Table 1

Measured perceived velocity at the beginning and end of intervals of nonzero constant-velocity travel. Columns: maximum speed (m/s), starting locations, mean 1 (m/s), mean 2 (m/s), and difference (m/s); for each maximum speed, values are reported both for all starting locations and for starting locations to the right of the target. (Numeric table entries omitted.)

Starting locations to the right of the target are those in which biomechanical constraints on pointing behavior would have the least effect

p-values for a two-tailed t-test

Two-interval trajectories

For trajectories in which the second acceleration was identical to the first, the average perceived velocity is nearly zero during the second interval (Fig. 5d). Because some participants actually perceived a movement reversal during the second interval, we also analyzed the average unsigned perceived velocity. Here, proportional perceived velocity is still reduced in the second interval (Fig. 5e). For all trajectories, the average ratio of the absolute value of perceived velocity to the actual velocity at the first velocity peak is 112.54 ± 18.29%. The same ratio at the second velocity peak is 80.42 ± 23.66%. There was no effect of changing the length of the pause interval between the two intervals of an individual trial.

Reverse-motion trajectories

Eight participants responded robustly to the direction change in the reverse-motion trajectories (Fig. 5f). The four participants whose data were removed from the rest of the analysis had perceived velocities during the reverse-motion trajectories that appeared very different from those of the other participants (Fig. 5g). Rather than responding to changes in movement direction, the perceived velocity profiles of these four participants were roughly flat, even on individual trials. These participants also produced a flat response for all other trajectory types. Whether this was due to poor movement perception or a misunderstanding of the experimental instructions is not clear. Nevertheless, this result prompted the removal of these four participants from the rest of the analysis.

Start-point dependence

The effect of starting location was analyzed for the constant-velocity and reverse-motion trajectories. A two-way (starting location × target light) repeated-measures ANOVA revealed a significant interaction between starting location and target light (F(2,274) = 8.486, p = 0.0003 for constant-velocity trajectories at 3.98 s after initiation; F(2,128) = 9.461, p = 0.0002 for reverse-motion trajectories at 5.25 s after initiation). This reflects the fact that, for both types of velocity profiles, mean recovered perceived velocity increased from target light 1 (leftmost) to target light 3 (rightmost) when the starting location was on the left. For constant-velocity trajectories, these means are 0.44, 0.47, and 0.51 m/s, respectively; for reverse-motion trajectories, 0.38, 0.45, and 0.52 m/s. The opposite trend is seen when the starting location was on the right: for constant-velocity trajectories, the means for target lights 1, 2, and 3 are 0.58, 0.56, and 0.51 m/s; for reverse-motion trajectories, 0.57, 0.49, and 0.46 m/s. For four of the six starting locations, faster trajectories led to faster recovered perceived velocity. The mean values of perceived velocity for the different trajectories are shown in Fig. 5h. These results suggest that biomechanical constraints lead to predictable start-point dependence effects. Specifically, when a participant’s arm moves to the left, across the torso, the speed of movement is slower than when the arm moves to the right. As in Experiment 1, the overall shape of the perceived velocity profile was not altered by changing the starting location.


To test two of the assumptions presented in the introduction (that the initial perception of target distance is accurate and that the perceived direction of movement is parallel to the actual direction), we simulated how recovered perceived location and velocity would change if participants misperceived the original target distance (Fig. 6a) or perceived themselves moving along a vector that was not aligned with their actual motion (Fig. 6b), leading to a perceived final location closer to or farther from the target. A third possible source of error, which is the type of error we expected to identify using our method, is a leaky integration of acceleration, which results in incorrect estimates of self-velocity and, hence, incorrect estimates of distance traveled (Fig. 6c). To make a more direct comparison with our results, the effects of leaky integration were modeled for the actual trajectories from Experiment 2 (Fig. 7).
Fig. 6

Predicted perceived locations (top row) and velocities (bottom row) for a constant-velocity movement interval under three conditions. Values show what recovered perceived velocity would look like if participants made a given error. a Misperceived target location; values range from 50% distance (long dashes) to 150% (short dashes). b Offset in perceived final Y-location; values range from 1.5 m away from the target (long dashes) to 1.5 m towards the target (short dashes). c Exponential decay in perceived velocity; values for the constant b in the equation D = a·e^(−b·a), where a = actual distance traveled and D = measured perceived distance traveled, range from 10^−3 (long dashes) to 10^−1 (short dashes). Actual location and velocity are represented by thick black lines; theoretical measured location and velocity for different parameter values are represented by thin gray lines. Light dashed gray lines indicate the point of target passage

Fig. 7

Leaky integration model applied to actual trajectories from Experiment 2. a Constant-velocity trajectory. b Two-interval trajectory. c Reverse-motion trajectory. Black lines represent actual velocity, gray lines represent theoretical perceived velocity based on a simple leaky integrator model in which the perceived change in location is equal to A^−1(1 − e^(−Ax)), where A is a constant between 0 and 1 and x is the actual change in location (Lappe et al. 2007)
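Under this leaky-integrator form, both signature errors follow directly: perceived distance falls short of actual distance, and perceived velocity decays during constant-velocity travel. A brief sketch (the leak constant A and the travel speed are illustrative values, not fitted parameters):

```python
import numpy as np

# Leaky-integrator prediction for a constant-velocity segment, using the
# caption's form: perceived displacement D = (1/A) * (1 - exp(-A * x)).
A = 0.2                                # leak constant (illustrative, 0 < A < 1)
v = 0.54                               # assumed constant travel speed (m/s)
x = np.linspace(0.0, 2.5, 251)         # 2.5 m trajectories, as in Experiment 2

D = (1.0 - np.exp(-A * x)) / A         # perceived displacement (always < x for x > 0)
v_perceived = np.gradient(D, x[1] - x[0]) * v  # dD/dt = (dD/dx) * v, decays with x
```

For small A the prediction approaches veridical perception (D ≈ x); increasing A deepens both the distance shortfall and the velocity decay.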

General discussion

The richness, robustness, and quantitative analysis made possible by continuous pointing provide much deeper insight into the characteristics of perceived velocity than previous methods. As a method for measuring perceived self-motion, continuous pointing has several key advantages. First, pointing continuously to a previously viewed target is an intuitive task that does not require any training or calibration. Participants in our experiment immediately performed the task correctly following brief verbal instructions. Second, continuous pointing makes it possible to measure subjective location while the participant is moving, without any need for trajectory replication or post-hoc judgments. Third, and perhaps most importantly, our method provides a continuous estimate of perceived velocity in units of meters per second. This facilitates a direct comparison between the perceived trajectory and the actual trajectory taken by the participant.

Continuous pointing is therefore ideal for studying the perception of complex, passive movement trajectories because it provides dynamic information about perceived velocity without requiring participants to remember how they have moved. It is important to note, however, that a certain amount of systematic start-point dependence was observed, meaning that a participant’s arm angle at the beginning of the trial predictably affected their pointing behavior. Experiment 2 verified that the consistent perceptual errors made by participants in Experiment 1 were more general phenomena, rather than a finding that was specific to a particular experimental apparatus.

What is perhaps most exciting about this novel continuous-pointing method is the information that it conveys about the precise characteristics of perceived translations under circumstances in which perception is non-veridical. In the absence of vision, audition, or proprioception, perceiving translational movements depends upon the transduction of mainly inertial information along with other cues, such as somatosensory information, including vibrations or wind (Yong et al. 2007). Obtaining a precise internal estimate of changes in velocity or location requires that this inertial information be integrated accurately (Mayne 1974). Our continuous-pointing method reveals the types of errors that are being made by the system as a whole, narrowing down the space of possible models and furthering efforts to elucidate the necessary and sufficient conditions for veridical perception of passive translations based on inertial signals. We see the primary advantage of the method as its ability to quickly reveal discrepancies between perceived and actual translational motion during complex linear trajectories. Here, we will discuss some plausible sources of errors made by participants in this study, with special attention paid to the ease with which certain trends can be identified in continuous-pointing data.

By analyzing participants’ arm angles at the end of each constant-velocity trial, we demonstrate that the average distance traveled in each trajectory was underestimated (Figs. 3b, 5b). This result could be attributed to any number of errors, including a scaled underperception of velocity throughout the trial or a reduced sensitivity to the initial acceleration. The underestimation could also have been revealed through a post-hoc judgment; for example, we could have asked participants simply to indicate when they felt they had passed the target, rather than point to it continuously. Such methods and measures, however, would not allow us to effectively reduce the space of possible error models, and they would not allow for the interpretation of responses to anything other than the simplest trajectories. With continuous-pointing data, it is easy to home in on the sources of error by overlaying the measured perceived velocity and the actual velocity (Figs. 3d, 5c).

From Fig. 6a, we see that a scaled misperception of the target location leads to a pattern of pointing behavior that is indistinguishable from a scaled misperception of velocity throughout the entire trial. This does not agree with the actual results from constant-velocity trajectories, which tend to show a steady decrease in perceived velocity throughout the trial. It is possible that an initial misperception of target distance might lead to a slight shift in the perceived velocity time series, but based on the results of the conditions with richer self-motion cues, we are not concerned about this possibility. Furthermore, we can rule out the case in which participants perceive themselves to be moving along a path that deviates in the direction of the target, as this would result in an increase in recovered perceived velocity throughout the trial (Fig. 6b). Perceiving motion deviating away from the direction of the target leads to a less dramatic decrease in recovered perceived velocity, which appears similar to our observations; however, we rule out this possibility based on the assumptions stated in the introduction.

In both experiments, the average participant’s data indicates two general sources of error that lead to an overall underestimation of distance traveled: an underestimation of the duration of the acceleration phase and a decrease in perceived velocity during intervals of constant-velocity travel (Fig. 6c). The decrease in perceived velocity during constant-velocity travel could result from either a leaky integration of inertial information, high-pass filtering of velocity information, or Bayesian assumptions about stationarity in the absence of an otolith signal (Laurens and Droulez 2007). These models would likely make different predictions about the responses to more complex trajectories and could be differentiated through further applications of the continuous-pointing method.

Looking now at the two-interval trajectory, we see that, despite the fact that the second acceleration is well above threshold (Guedry 1974; Benson et al. 1986; Gianna et al. 1996), almost all participants in all conditions have an attenuated perception of the overall speed of the second movement interval (Figs. 3e, 5d, e). The reduced response to the second acceleration in a pair of forward accelerations is an effect that, to the best of our knowledge, has not been reported in the previous literature (perhaps due to the lack of a sensitive enough measure). This error could be due to a low-level process, such as leaky integration of the otolith signal. A simple leaky integrator model, in which the influence of past accelerations on the current velocity estimate decays over time, would predict a reduced response to the second acceleration. A model of leaky integration applied to actual trajectories from Experiment 2 yields a time series of perceived velocities that are qualitatively similar to the average time series seen in the experiment (Fig. 7). High inter- and intra-participant variability hindered fitting of model parameters to the actual data.
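One way to see why a leaky integrator attenuates the second peak: after deceleration, the leaked velocity estimate undershoots below zero, so the second (identical) acceleration starts from a deficit. A crude simulation illustrates this; the profile timings, the leak rate lam, and the assumed form dv̂/dt = a(t) − lam·v̂ are all illustrative, not fitted to the data:

```python
import numpy as np

# Crude leaky integration of acceleration: dv_hat/dt = a(t) - lam * v_hat.
# Profile: accelerate, cruise, decelerate, pause, then an identical second interval.
dt, lam = 0.001, 0.3
a = np.concatenate([
    np.full(500, 1.0),    # first acceleration (0.5 s at 1 m/s^2)
    np.zeros(2000),       # constant-velocity cruise (2 s)
    np.full(500, -1.0),   # deceleration back to rest
    np.zeros(500),        # pause between intervals
    np.full(500, 1.0),    # second, identical acceleration
    np.zeros(2000),
    np.full(500, -1.0),
])
v_hat = np.zeros(len(a))
for i in range(1, len(a)):
    v_hat[i] = v_hat[i - 1] + (a[i - 1] - lam * v_hat[i - 1]) * dt

peak1 = v_hat[:3000].max()    # perceived peak of the first interval
peak2 = v_hat[3000:].max()    # perceived peak of the second interval
print(round(peak1, 3), round(peak2, 3))
```

Although crude, the simulation reproduces the qualitative result: the second perceived peak is markedly lower than the first, despite identical accelerations.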

The reduced response to the second acceleration could relate to an increased uncertainty about movement direction that participants have before the second interval of two-interval trials. Specifically, participants are always able to anticipate the direction of movement prior to the first acceleration, and thus respond robustly as soon as an acceleration is felt. This was not true for the second movement and therefore uncertainty about when the second movement would begin, or what the direction of travel would be, may explain the proportionally decreased response to the second acceleration. This trend remains when we look at unsigned perceived velocity, which accounts for participants who perceived their speed veridically, but in the wrong movement direction. These two sources of error are, of course, not mutually exclusive, so the correct explanation may very well include both.

The continuous-pointing data also revealed high inter-participant variation in response to the movement reversal. In Experiment 1, three out of six participants did not appear to perceive themselves moving backwards during these trials. In Experiment 2, the same was true for four out of 12 participants. Although we do not know the cause of this error, the results highlight the ease with which the continuous-pointing method can be used to classify different behaviors. When looking at the data, the error made by these participants was immediately clear, allowing us to classify participants accordingly and to eliminate explanations that depended upon an inability to perceive the movement reversal. For example, a failure to perceive backwards movement leads to a longer perceived distance traveled, a result that could also be explained by an overperception of forward velocity. These trials also demonstrate another useful feature of the method: the ability to independently analyze a portion of a much longer trajectory. We were able to consider behavior during the movement-reversal interval alone because errors do not necessarily have a cumulative effect on the response, as they would when a discrete post-hoc judgment is used. Overall, the individual differences in inertial self-motion perception revealed by the current data should be explored more systematically. For instance, it would be interesting to explicitly test individual participants’ vestibular functioning and evaluate how it might affect performance on these types of tasks.

In order to make strong claims regarding the data obtained from continuous-pointing tasks, it must be assumed that there are no biomechanical constraints that affect performance in a way that confounds patterns of perceived self-motion with kinematics-related effects. Under ideal conditions, we would expect measured profiles of perceived velocity to be identical, regardless of the start location of a given trial. We do, however, observe some start-point dependence, such that participants’ arm angle at the beginning of the trial affected their pointing behavior from different start locations (Figs. 3c, 5h). Fortunately, these effects follow a predictable pattern, which can be accounted for in future experiments. Specifically, the relative recovered perceived velocity is directly related to the amount of space available for the arm to move freely before encountering an obstacle. In Experiment 1, this obstacle was the limit of the shoulder joint, which prevented the arm from pointing too far backwards. In Experiment 2, the main obstacle was the torso, which hinders movement of the pointing arm as it moves across the body to the left. It is therefore important that experimenters using this method acknowledge these constraints and account for them accordingly. One option, for example, would be to reduce the range of arm angles used. This would also limit the distance that participants can be transported in a single trial, but could be compensated for by moving the target farther away. A more distant target would increase the amount of noise in the data, but would also increase the maximum trajectory distance. Despite these effects, we believe that using multiple target-relative starting locations is worthwhile, as it prevents participants from falling into a routine movement pattern by forcing them to point in a unique way on each trial.

The next step in verifying the continuous-pointing method will be to calibrate it against other commonly used methods, such as indicating the point of target passage and online velocity judgments, to evaluate whether the same general trends are observed. We foresee continuous pointing becoming a valuable tool for investigating the particular characteristics of passive self-motion perception more extensively, as well as for exploring relevant issues in a variety of other research domains. Continuous pointing can be extended for use with two- and three-dimensional trajectories, with separate or combined horizontal and vertical motions. The MPI Motion Simulator has the unique capability of presenting three-dimensional trajectories while continuous-pointing responses are measured. The method can also be applied, to great advantage, to the study of linear vection, the perception of self-motion based solely on optic-flow information (see Lestienne et al. 1977, who used a postural-sway measure). It can also be used to extend the findings of Campos et al. (2009), which revealed unique characteristics of pointing behavior during actual self-motion that were absent during purely imagined self-motion through space.


Continuous pointing shows great promise as a new measure of perceived self-motion, particularly during complex passive translations. It recovers an estimate of participants’ perceived location and velocity with a high degree of spatial and temporal precision, and it has the potential to yield a much more detailed characterization of the dynamics of translational self-motion perception than was accessible through previous methods. By facilitating a direct comparison between actual and perceived velocities during complex, multi-interval trajectories, continuous pointing is an important addition to the currently available methods for studying perceived self-motion.


  1.

    When participants initially point at the visible target prior to moving, there are some systematic errors, partly reflecting the discrepancy between the direction of the pointer recorded by the motion-capture system and the participant’s intended pointing direction. In our analysis, we null these initial pointing errors and apply the same correction to all subsequent pointing responses during the trajectory. Our assumption is that changes in pointing during the trajectory are accurate, even though constant errors are revealed in the initial pointings.
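
The nulling procedure described in this footnote can be sketched as a constant-offset correction; the function name, baseline length, and values below are hypothetical, not the authors' code.

```python
import numpy as np

# Hypothetical sketch of the correction described above: the constant error
# measured while the target is still visible (before movement begins) is
# estimated from a baseline of pre-movement samples and subtracted from
# every subsequent pointing sample in the trial.

def null_initial_error(recorded_azimuth, true_initial_azimuth, n_baseline=50):
    """Subtract the pre-movement constant pointing error from the whole trial."""
    offset = recorded_azimuth[:n_baseline].mean() - true_initial_azimuth
    return recorded_azimuth - offset

# Demo: a 2-degree constant sensor/intent discrepancy is removed.
true_alpha = np.concatenate([np.full(50, 0.3),          # stationary baseline
                             np.linspace(0.3, 1.2, 150)])  # angles during travel
recorded = true_alpha + np.deg2rad(2.0)                  # constant recording error
corrected = null_initial_error(recorded, true_initial_azimuth=0.3)
print(np.allclose(corrected, true_alpha))  # True
```

Note that, consistent with the footnote's assumption, this correction removes only constant errors; any errors that vary over the trajectory would pass through unchanged.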



The authors would like to thank Michael Weyel for his help in programming the robotic wheelchair and automating data capture. We thank Hans-Günther Nusseck, Harald Teufel, Reinhard Feiler, Michael Kerger, and Michael Reimann for their invaluable help in carrying out the experiment on the MPI Motion Simulator. We extend our gratitude to John Butler and Bernhard Riecke for their advice on methods of analysis and to Horst Mittelstaedt for very helpful discussions on earlier versions of this manuscript. We also thank Simon Musall and Naima Laharnar for their help with carrying out Experiment 1.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Supplementary material

221_2009_1805_MOESM1_ESM.doc (498 kb)


  1. Angelaki DE (2004) Eyes on target: what neurons must do for the vestibuloocular reflex during linear motion. J Neurophysiol 92:20–35
  2. Benson AJ, Spencer MB, Stott JR (1986) Thresholds for the detection of the direction of whole-body, linear movement in the horizontal plane. Aviat Space Environ Med 57:1088–1096
  3. Berthoz A, Israël I, Georges-François P, Grasso R, Tsuzuku T (1995) Spatial memory of body linear displacement: what is being stored? Science 269:95–98
  4. Campos JL, Siegle JH, Mohler BJ, Bülthoff HH, Loomis JM (2009) Imagined self-motion differs from perceived self-motion: evidence from continuous pointing (submitted)
  5. de Graaf B, Bos JE, Wich S, Bles W (1994) Arthrokinetic and vestibular information enhance smooth ocular tracking during linear (self-)motion. Exp Brain Res 101:147–152
  6. Fukusima SS, Loomis JM, Da Silva JA (1997) Visual perception of egocentric distance as assessed by triangulation. J Exp Psychol Hum Percept Perform 23:86–100
  7. Gianna C, Heimbrand S, Gresty M (1996) Thresholds for detection of motion direction during passive lateral whole-body acceleration in normal subjects and patients with bilateral loss of labyrinthine function. Brain Res Bull 40:443–449
  8. Glasauer S, Brandt T (2007) Noncommutative updating of perceived self-orientation in three dimensions. J Neurophysiol 97:2958–2964
  9. Glasauer S, Schneider E, Grasso R, Ivanenko YP (2007) Space-time relativity in self-motion reproduction. J Neurophysiol 97:451–461
  10. Grasso R, Glasauer S, Georges-François P, Israël I (1999) Replication of passive whole-body linear displacements from inertial cues. Facts and mechanisms. Ann N Y Acad Sci 871:345–366
  11. Guedry FE (1974) Psychophysics of vestibular sensation. In: Kornhuber HH (ed) Handbook of sensory physiology, vol VI/2. Springer, Berlin, pp 3–154
  12. Harris LR, Jenkin M, Zikovitz DC (2000) Visual and non-visual cues in the perception of linear self-motion. Exp Brain Res 135:12–21
  13. Hutchison JJ, Loomis JM (2006) Does energy expenditure affect the perception of egocentric distance? A failure to replicate Experiment 1 of Proffitt, Stefanucci, Banton, Epstein. Span J Psychol 9:332–339
  14. Israël I, Berthoz A (1989) Contribution of the otoliths to the calculation of linear displacement. J Neurophysiol 62:247–263
  15. Israël I, Chapuis N, Glasauer S, Charade O, Berthoz A (1993) Estimation of passive horizontal linear whole-body displacement in humans. J Neurophysiol 70:1270–1273
  16. Israël I, Grasso R, Georges-François P, Tsuzuku T, Berthoz A (1997) Spatial memory and path integration studied by self-driven passive linear displacement. I. Basic properties. J Neurophysiol 77:3180–3192
  17. Israël I, Crockett M, Zupan L, Merfeld D (2005) Reproduction of ON-center and OFF-center self-rotations. Exp Brain Res 163:540–546
  18. Ivanenko Y, Grasso R, Israël I, Berthoz A (1997) Spatial orientation in humans: perception of angular whole-body displacements in two-dimensional trajectories. Exp Brain Res 117:419–427
  19. Kallie CS, Schrater PR, Legge GE (2007) Variability in stepping direction explains the veering behavior of blind walkers. J Exp Psychol Hum Percept Perform 33:183–200
  20. Klier EM, Angelaki DE, Hess BJM (2007) Human visuospatial updating after noncommutative rotations. J Neurophysiol 98:537–544
  21. Lappe M, Jenkin M, Harris LR (2007) Travel distance estimation from visual motion by leaky path integration. Exp Brain Res 180:35–48
  22. Laurens J, Droulez J (2007) Bayesian processing of vestibular information. Biol Cybern 96:389–404
  23. Lestienne F, Soechting J, Berthoz A (1977) Postural readjustments induced by linear motion of visual scenes. Exp Brain Res 28:363–384
  24. Loomis JM, Philbeck JW (2008) Measuring spatial perception with spatial updating and action. In: Klatzky RL, Behrmann M, MacWhinney B (eds) Embodiment, ego-space, and action. Taylor & Francis, New York, pp 1–43
  25. Loomis JM, Da Silva JA, Fujita N, Fukusima SS (1992) Visual space perception and visually directed action. J Exp Psychol Hum Percept Perform 18:906–921
  26. Loomis JM, Klatzky RL, Golledge RG, Cicinelli JG, Pellegrino JW, Fry PA (1993) Nonvisual navigation by blind and sighted: assessment of path integration ability. J Exp Psychol Gen 122:73–91
  27. Loomis JM, Klatzky RL, Philbeck JW, Golledge RG (1998) Assessing auditory distance perception using perceptually directed action. Percept Psychophys 60:966–980
  28. Loomis JM, Lippa Y, Klatzky RL, Golledge RG (2002) Spatial updating of locations specified by 3-D sound and spatial language. J Exp Psychol Learn Mem Cogn 28:335–345
  29. MacNeilage PR, Banks MS, Berger DR, Bülthoff HH (2007) A Bayesian model of the disambiguation of gravitoinertial force by visual cues. Exp Brain Res 179:263–290
  30. Mayne R (1974) A systems concept of the vestibular organs. In: Kornhuber HH (ed) Handbook of sensory physiology, vol VI/2. Springer, Berlin, pp 493–580
  31. Mittelstaedt H (1996) Somatic graviception. Biol Psychol 42:53–74
  32. Mittelstaedt M, Glasauer S (1991) Idiothetic navigation in gerbils and humans. Zool Jb Physiol 95:427–435
  33. Mittelstaedt ML, Mittelstaedt H (2001) Idiothetic navigation in humans: estimation of path length. Exp Brain Res 139:318–332
  34. Nico D, Israël I, Berthoz A (2002) Interaction of visual and idiothetic information in a path completion task. Exp Brain Res 146:379–382
  35. Paige GD, Seidman SH (1999) Characteristics of the VOR in response to linear acceleration. Ann N Y Acad Sci 871:123–135
  36. Philbeck JW, Loomis JM (1997) Comparison of two indicators of visually perceived egocentric distance under full-cue and reduced-cue conditions. J Exp Psychol Hum Percept Perform 23:72–85
  37. Philbeck JW, Loomis JM, Beall AC (1997) Visually perceived location is an invariant in the control of action. Percept Psychophys 59:601–612
  38. Sahm CS, Creem-Regehr SH, Thompson WB, Willemsen P (2005) Throwing versus walking as indicators of distance perception in similar real and virtual environments. ACM Trans Appl Percept 2:35–45
  39. Seemungal BM, Glasauer S, Gresty MA, Bronstein AM (2007) Vestibular perception and navigation in the congenitally blind. J Neurophysiol 97:4341–4356
  40. Seidman SH (2008) Translational motion perception and vestiboocular responses in the absence of non-inertial cues. Exp Brain Res 184:13–29
  41. Seidman SH, Telford L, Paige GD (1998) Tilt perception during dynamic linear acceleration. Exp Brain Res 119:307–314
  42. Sholl MJ (1989) The relation between horizontality and rod-and-frame and vestibular navigational performance. J Exp Psychol Learn Mem Cogn 15:110–125
  43. Siegler I, Reymond G, Leboucher P (1999) Role of otoliths in spatial orientation during passive travel in a curve. Ann N Y Acad Sci 871:447–450
  44. Sun H-J, Campos JL, Chan GSW (2004a) Multisensory integration in the estimation of relative path length. Exp Brain Res 154:246–254
  45. Sun H-J, Campos JL, Young M, Chan GSW, Ellard CG (2004b) The contributions of static visual cues, nonvisual cues, and optic flow in distance estimation. Perception 33:49–65
  46. Telford L, Seidman SH, Paige GD (1997) Dynamics of squirrel monkey linear vestibuloocular reflex and interactions with fixation distance. J Neurophysiol 78:1775–1790
  47. Teufel HJ, Nusseck H-G, Beykirch KA, Butler JS, Kerger M, Bülthoff HH (2007) MPI Motion Simulator: development and analysis of a novel motion simulator. In: Proceedings of the AIAA modeling and simulation technologies conference and exhibit. American Institute of Aeronautics and Astronautics, Reston, VA, pp 1–11
  48. Wu B, Ooi TL, He ZJ (2004) Perceiving distances accurately by a directional process of integrating ground information. Nature 428:73–77
  49. Yong NA, Paige GD, Seidman SH (2007) Multiple sensory cues underlying the perception of translation and path. J Neurophysiol 97:1100–1113

Copyright information

© The Author(s) 2009

Authors and Affiliations

  • Joshua H. Siegle (1, 4)
  • Jennifer L. Campos (1, 2)
  • Betty J. Mohler (1)
  • Jack M. Loomis (3)
  • Heinrich H. Bülthoff (1)

  1. Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
  2. iDAPT, Technology Research and Development, Toronto Rehabilitation Institute, Toronto, Canada
  3. Department of Psychology, University of California, Santa Barbara, USA
  4. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, USA
