Roll tilt self-motion direction discrimination training: First evidence for perceptual learning

Abstract

Perceptual learning, the ability to improve the sensitivity of sensory perception through training, has been shown to exist in all sensory systems but the vestibular system. A previous study found no improvement of passive self-motion thresholds in the dark after intense direction discrimination training of either yaw rotations (stimulating semicircular canals) or y-translation (stimulating otoliths). The goal of the present study was to investigate whether perceptual learning of self-motion in the dark would occur when there is a simultaneous otolith and semicircular canal input, as is the case with roll tilt motion stimuli. Blindfolded subjects (n = 10) trained on a direction discrimination task with 0.2-Hz roll tilt motion stimuli (9 h of training, 1,800 trials). Before and after training, motion thresholds were measured in the dark for the trained motion and for three transfer conditions. We found that sensitivity in the 0.2-Hz roll tilt condition increased (i.e., thresholds decreased) after training, but not in control subjects who received no training. This is the first demonstration of perceptual learning of passive self-motion direction discrimination in the dark. The results have potential therapeutic relevance as 0.2-Hz roll thresholds have been associated with poor performance on a clinical balance test that has been linked to more than a fivefold increase in falls.

Introduction

Perceptual learning leads to a stable improvement in sensory function through repeated exposure to stimuli (Fahle, 2005; Gold & Watanabe, 2010). While most improvements in perception happen during development (Atkinson, Braddick, & Moar, 1977; Gibson, 1969), perceptual learning is still possible throughout adulthood by means of extensive training and neuronal plasticity (Fahle & Poggio, 2002). Visual perceptual learning in particular has been studied in the context of rehabilitation in clinical conditions, aging, and education (Dosher & Lu, 2017). Moreover, improvements through training have been shown in the auditory system (Atienza, Cantero, & Dominguez-Marin, 2002; Moore, Amitay, & Hawkey, 2003), the olfactory system (Moreno et al., 2009; Wilson & Stevenson, 2003), taste perception (Owen & Machamer, 1979), and the somatosensory system (Pleger et al., 2003; Sathian & Zangaladze, 1998).

To date, however, no demonstrations of perceptual learning have been reported in passive self-motion perception relying primarily on the vestibular organs. Hartmann and colleagues found no perceptual learning of passive self-motion for yaw rotations (semicircular canal input) and leftward-rightward translations (y-translation; otolith input) in the dark (Hartmann, Furrer, Herzog, Merfeld, & Mast, 2013). Interestingly, perceptual learning of self-motion direction occurred when participants were exposed to a visual scene during both training and testing, thus combining visual and vestibular information during both training and testing. The difference in learning outcome between the two conditions was explained by the highly multisensory nature of spatial orientation and the importance of visual information for self-motion perception (Wolfe et al., 2018).

Indeed, it has been argued that multisensory stimuli are optimal for perceptual learning (Shams & Seitz, 2008). In their review, the authors argue that unisensory stimuli can alter only those brain structures involved in processing that specific type of stimulus. Multisensory stimuli, however, can alter not only the brain structures that process unisensory inputs, but also multisensory structures and the functional connectivity between unisensory structures. Thus, training with multisensory stimuli increases the probability and efficiency of perceptual learning. Interestingly, multisensory training improves perceptual learning even for test stimuli that are unisensory (Guo & Guo, 2005; Seitz, Kim, & Shams, 2006; Von Kriegstein & Giraud, 2006). These studies imply that a multisensory setting facilitates perceptual learning.

The goal of the present study was to investigate whether perceptual learning of passive self-motion without visual input is possible when the passive self-motion stimuli are composed of simultaneous otolith and semicircular canal information. The earlier Hartmann et al. (2013) study used motion stimuli that either activated the semicircular canals or the otoliths in isolation. A combined motion stimulus activating both the otoliths and the semicircular canals, such as roll tilt, can be considered a multisensory stimulus since it involves different vestibular sensory organs. It should be noted that self-motion perception thresholds need not depend exclusively on vestibular information. Somatosensory, proprioceptive, and visceral signals cannot be ruled out completely (Jian, Shintani, Emanuel, & Yates, 2002; Lim, Karmali, Nicoucar, & Merfeld, 2017; Mittelstaedt, 1992, 1996; Yates & Stocker, 1998). However, a comparison of self-motion thresholds in healthy subjects and bilateral vestibular patients suggests that the vestibular system plays the predominant role in self-motion perception for roll tilt (Valko, Lewis, Priesol, & Merfeld, 2012).

We investigated perceptual learning with 0.2-Hz roll tilt motion stimuli because they require the brain to combine otolith and semicircular information (Lewis, Priesol, Nicoucar, Lim, & Merfeld, 2011; Lim et al., 2017). Subjects were trained in a 0.2-Hz roll tilt direction discrimination task. To assess changes in self-motion perception, 0.2-Hz roll tilt thresholds were measured before and after training. In addition, we measured transfer of learning to a higher frequency (1 Hz) and different motion axes (pitch, y-translation). Most studies on perceptual learning report that learning is specific to the trained condition (Dosher & Lu, 2017; Parkosadze, Otto, Malania, Kezeli, & Herzog, 2008), suggesting that transfer effects are rather unlikely. However, previous findings of learning transfer from multisensory to unisensory conditions (Seitz et al., 2006) suggest possible transfer effects from roll tilt direction discrimination to y-translation direction discrimination.

Methods

Subjects

Thirty subjects took part in this study. Ten subjects (six female, four male, aged between 22 and 31 years) formed the training group, which received self-motion discrimination training as well as a pre- and post-test to measure self-motion perception thresholds. Another 20 subjects (12 female, eight male, aged between 21 and 38 years) were tested as control subjects (divided into two groups of n = 10 each; see Motion stimuli for details) who received no training but took part in the pre- and post-test threshold measurements with the same time interval in between. Subjects indicated no history of vestibular disorders. They were compensated with cash or course credits for participating in the experiment. All subjects gave informed consent prior to the study. The study was carried out in accordance with the Declaration of Helsinki and ethical approval was obtained from the Ethics Committee of the University of Bern.

Motion stimuli

The motion stimuli used for the training and the threshold measurements were applied using a six degrees of freedom motion platform (6DOF2000E, MOOG Inc., East Aurora, NY, USA). All stimuli consisted of single cycles of sinusoidal acceleration motion profiles (Grabherr, Nicoucar, Mast, & Merfeld, 2008) with a frequency of either 0.2 Hz or 1 Hz along different motion axes dependent on the condition. Participants were blindfolded and seated on a cushioned chair mounted on the motion platform. The head was fixated and participants were wearing headphones playing white noise to cover the sound from the motion platform.
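The single-cycle sinusoidal-acceleration profile described above can be made concrete with a short sketch. This is our own illustration, not the platform control code; the function name, sampling resolution, and parameterization by peak velocity are assumptions. Acceleration follows a(t) = A·sin(2πft) over one period, so velocity rises from zero, peaks mid-cycle, and returns to zero.

```python
import math

def roll_tilt_profile(v_peak, f, n=1000):
    """Single-cycle sinusoidal-acceleration motion profile (illustrative).

    a(t) = A * sin(2*pi*f*t) over one period T = 1/f. Integrating gives
    v(t) = A / (2*pi*f) * (1 - cos(2*pi*f*t)), which starts and ends at
    zero and peaks at t = T/2. A is chosen so that the peak equals v_peak.
    """
    A = v_peak * math.pi * f  # acceleration amplitude yielding the desired peak velocity
    T = 1.0 / f               # stimulus duration in seconds (5 s at 0.2 Hz, 1 s at 1 Hz)
    ts = [i * T / n for i in range(n + 1)]
    acc = [A * math.sin(2 * math.pi * f * t) for t in ts]
    vel = [A / (2 * math.pi * f) * (1 - math.cos(2 * math.pi * f * t)) for t in ts]
    return ts, acc, vel
```

For example, a 0.2-Hz roll stimulus with a 0.4 °/s peak velocity lasts 5 s and reaches its velocity peak at 2.5 s.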

During training the subjects of the training group experienced roll tilts about an earth horizontal axis with the center of rotation on the level of the motion platform, just below the hip. The frequency of the motion was 0.2 Hz. The peak velocity of the stimuli was determined individually for each subject based on the performance at the pretest measurement. We aimed at an accuracy of about 65% at the start of the training to maximize efficiency of the training (Hartmann et al., 2013).

At the pre- and post-test measurements, thresholds of self-motion perception for different motion conditions were measured for all participants. We were interested in the learning effect for the trained condition and in potential transfer to other motion axes and to a higher motion frequency. Thus, the pre- and post-test assessment of the experimental group also included roll 1 Hz (same motion axis, different frequency), pitch 0.2 Hz (different motion axis, semicircular canal and otolith combined, same frequency, center of rotation at the level of the motion platform), and y-translation 0.2 Hz (unisensory condition, otolith only, same frequency) threshold measurements. All 20 control subjects completed the pre- and post-test assessment in the roll 0.2-Hz and roll 1-Hz conditions. Additionally, we measured pitch 0.2-Hz and y-translation 0.2-Hz thresholds in one half (n = 10) of the control subjects. We tested the other half of the control subjects (n = 10) with pitch 1 Hz to include another type of motion with a higher frequency. We deliberately did not test all control conditions in the same participants, in order to keep the duration of the threshold measurement sessions manageable.

At the pre- and post-test measurements, we used seven different motion intensities for each direction (left/right for roll tilt and y-translation, forward/backward for pitch), resulting in 14 different stimuli. The peak velocities were 1.5 °/s, 0.85 °/s, 0.65 °/s, 0.4 °/s, 0.15 °/s, 0.1 °/s, and 0.05 °/s for the roll and pitch stimuli and 0.06 m/s, 0.055 m/s, 0.05 m/s, 0.045 m/s, 0.04 m/s, 0.035 m/s, and 0.03 m/s for the y-translation stimuli. Each stimulus was presented ten times during this measurement, resulting in a total of 140 trials per motion condition. The peak velocities of each stimulus were chosen based on pilot testing in order to measure the whole spectrum of performance while accounting for interindividual differences between subjects. For the y-translation, the highest velocity was not chosen based on performance, but because it was the maximal possible velocity due to displacement limitations of the motion platform.

Procedure

Pretest

The first appointment served the purpose of measuring the psychometric functions for all tested motion axes and frequencies. For the training group, performance in the pretest measurement of the roll 0.2-Hz condition additionally determined the peak velocity of the training stimuli. Subjects completed all four motion conditions in this session and the order of the conditions was counterbalanced across subjects. Prior to each motion condition, we administered 24 practice trials consisting mostly of suprathreshold stimuli to allow for familiarization with the task and motion condition. Then, during the actual threshold measurement, subjects performed the motion direction discrimination task for the respective motion condition. A sound indicated the onset of the motion. Subjects responded by button press to indicate whether they were rotated (or translated) to the left or right (or backward or forward). Each motion condition took between 20 and 40 min depending on the motion frequency and response speed. Including breaks between motion conditions, the pretest took around 3 h.

Training

The training group received roll 0.2-Hz motion direction discrimination training starting one day after the pretest. The training was comparable to the threshold measurement of the roll 0.2-Hz condition in the pretest, with the difference that only one motion intensity was used. This motion intensity was chosen for each subject individually based on their fitted psychometric function in the pretest. We chose the peak velocity such that the performance accuracy (i.e., percent correct) in the first training sessions would be about 65%. If accuracy was below 55% or above 85% in the first three training blocks, we adapted the difficulty of the task. The training was administered over 6 days with a 2-day break over the weekend after either the third or the fourth training day. Each day, subjects completed three blocks of the direction discrimination training with 100 trials each. Thus, the training consisted of 300 trials (90 min) per day, which was a compromise between the minimal number of trials required for perceptual learning to take place (160–400; Aberg, Tartaglia, & Herzog, 2009) and a duration that was a limited burden on subjects. Over the course of the 6 training days, subjects trained for approximately 9 h and completed 1,800 trials. In order to maximize learning efficiency, participants received feedback in the form of a short tone when they made a mistake (De Niear, Noel, & Wallace, 2017; Fahle & Edelman, 1993; Goldhacker, Rosengarth, Plank, & Greenlee, 2014).
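Given a fitted psychometric function, the peak velocity that targets ~65% accuracy can be obtained by inverting the function. The sketch below illustrates this, assuming a bias-free probit psychometric function P(correct) = Φ(slope × velocity); the function name is our own, and the actual study fitted psychometric functions hierarchically with brms in R.

```python
from statistics import NormalDist

def training_velocity(slope, target_accuracy=0.65):
    """Peak velocity at which expected accuracy equals target_accuracy,
    assuming a bias-free probit psychometric function
    P(correct) = Phi(slope * velocity)."""
    return NormalDist().inv_cdf(target_accuracy) / slope
```

A subject with a pretest threshold of 0.36 °/s (slope = 1/0.36) would, under this simplified model, train at roughly 0.14 °/s; this is well below the 84% threshold velocity, which is what makes the training stimuli deliberately hard.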

Post-test

The post-test served to assess training effects in the roll 0.2-Hz condition as well as in the transfer conditions. For the training as well as the control group, the post-test session took place on the ninth day after the pretest session (on the first day after the last training session for the training group). The time of measurement was kept the same as in the pretest session. For all motion conditions, subjects again received the 24 practice trials before the measurement started. For each subject, the order of the motion conditions was the same as in the pretest.

Data analysis

Responses were analyzed using Bayesian Hierarchical Generalized Linear Models that estimate fixed effects and varying effects for each subject (partial pooling models). This has clear advantages over the more traditional method of fitting data to each subject individually (no-pooling models) and then further analyzing the estimated parameters. On a conceptual level, partial pooling models assume that there is a distribution of perceptual thresholds in the population (i.e., fixed or group-level effects), and all subjects are random draws from this distribution (i.e., varying effects). This allows for estimation of group-level means and additional varying effects for individual subjects. In no-pooling models, subjects are treated as completely independent because no assumption is made about an underlying population. For parameter estimation, partial pooling models lead to more reliable parameter estimates (Katahira, 2016). Firstly, partial pooling models induce shrinkage of the parameter estimates for the subjects (varying effects) towards the group means, which reduces overfitting of the data (Ellis, Klaus, & Mast, 2017; Gelman & Hill, 2007; Katahira, 2016). Another important advantage of partial pooling models is that uncertainty concerning the parameters of each subject is considered when estimating group-level effects. This is achieved by weighting the data of individual subjects according to the uncertainty that is associated with it. When analyzing a point estimate of the parameters for each subject individually, uncertainty is not taken into account. In that case, each estimate is weighted equally and independent of its uncertainty. Lastly, data from all subjects are used to estimate varying effects for each subject individually, thus increasing the reliability of estimates by using all possible information (Gelman & Hill, 2007). 
A drawback of using Bayesian Hierarchical Generalized Linear Models is the increased complexity and computational power needed for data analysis. However, modern software packages such as brms and RStan have made the application of such models more convenient (Bürkner, 2017; Stan Development Team, 2018).

For the pre/post comparison, responses were analyzed using a Bayesian Hierarchical Generalized Linear Model with a probit link function. The probability of a rightward (or forward for pitch) response was predicted by peak velocity (positive = rightward/forward, negative = leftward/backward), time (pre vs. post), and group (training vs. control; note that the two control groups were not separated for data analysis). Dummy coding was used for categorical variables, with pre and training being the reference categories (see Table 1 for a description of model parameters). This allowed for estimation of psychometric functions before and after the training (or waiting period) for both the training and the control group. Varying intercepts and varying slopes were implemented for all variables (velocity, time) except group, which is a between-subjects variable (Barr, Levy, Scheepers, & Tily, 2013). We define perceptual learning as an increase in the slope of the psychometric function, which reflects sensitivity (Wichmann & Hill, 2001). For comparability with other studies on self-motion perception, threshold values are also reported. The threshold is the inverse of the slope parameter of the psychometric function and represents the velocity at which a subject reaches an accuracy of 84% if there is no bias (Merfeld, 2011).
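The slope–threshold relationship used here can be sketched in a few lines. This is an illustration of the standard probit convention (Merfeld, 2011), not the paper's brms model; the function names are our own. Because Φ(1) ≈ 0.84 for the standard normal CDF, the inverse of the slope is exactly the velocity at which a bias-free observer is ~84% correct.

```python
from statistics import NormalDist

def threshold_from_slope(slope):
    """Threshold as the inverse of the probit slope: the velocity at
    which a bias-free subject responds correctly ~84% of the time."""
    return 1.0 / slope

def accuracy_at(velocity, slope, bias=0.0):
    """Expected proportion of rightward responses to a rightward stimulus
    under the probit model P(right | v) = Phi(slope * v + bias)."""
    return NormalDist().cdf(slope * velocity + bias)
```

For any slope, `accuracy_at(threshold_from_slope(s), s)` evaluates to Φ(1) ≈ 0.84, which is why a steeper slope (larger s) directly corresponds to a lower threshold.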

Table 1 Description of model parameters in the pre/post comparison

For the training data, we predicted the probability of a rightward answer by the stimulus direction (right or left). If we adapted the stimulus intensity within the first three blocks (see above), these blocks were excluded for analysis. We used effect coding for the variable stimulus direction (right = 0.5, left = −0.5). This allows for a convenient interpretation of model parameters in terms of signal detection parameters. The negative intercept can be understood as the decision criterion. The parameter for the stimulus direction can be readily interpreted as d’, a standard signal detection sensitivity index (Knoblauch & Maloney, 2012). Again, maximal varying effects structure (varying intercept and varying slopes for direction and block) that was justified by the design was implemented in this analysis (Barr et al., 2013).
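The mapping from response proportions to d’ and the decision criterion can be sketched as follows. This is our own illustration of the textbook signal detection computation (Knoblauch & Maloney, 2012), not the hierarchical model used in the paper; here a "hit" is a rightward response to a rightward stimulus and a "false alarm" a rightward response to a leftward stimulus.

```python
from statistics import NormalDist

def signal_detection(hit_rate, fa_rate):
    """d' and criterion c from hit and false-alarm rates.

    d' = z(H) - z(F) indexes sensitivity; c = -(z(H) + z(F)) / 2
    indexes response bias (c = 0 means no left/right preference).
    """
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2.0
    return d_prime, criterion
```

An unbiased observer at 80% hits and 20% false alarms has c = 0 and d’ ≈ 1.68; equal hit and false-alarm rates give d’ = 0, i.e., chance performance.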

Models for both the pre/post comparison and the data recorded during the training were estimated using brms (Bürkner, 2017) and rstan (Stan Development Team, 2018). Weakly informative priors were used for model estimation. For population-level parameters a normal prior (mean = 0, SD = 100) was used. For all other parameters default priors provided by brms were used; specifically, for subject-level variability a half Student-t distribution was used (df = 3, mean = 0, spread = 10). Parameter estimates were obtained using Markov chain Monte Carlo (MCMC) sampling with four independent chains of 1,000 warm-up samples and 1,000 samples drawn from the posterior distribution, which were saved for statistical inference. To make sure that the samples of the chains converged to the same posterior distribution, chains were visually inspected and R-hat statistics were computed. All R-hats were below 1.02, suggesting that the chains had converged to the same posterior distribution (Gelman et al., 2013). Parameter estimates representing additive effects were evaluated using the 95% credible interval (95% CrI) based on the posterior distribution. If the 95% CrI of a parameter estimate did not include 0, this was interpreted as strong evidence for an effect (Kruschke, 2013; Nicenboim & Vasishth, 2016). A maximum-likelihood approach for the same statistical models with lme4 (Bates, Mächler, Bolker, & Walker, 2015) for parameter estimation led to the same conclusions as the Bayesian analysis reported in this paper. All data, models, and code for model estimation are freely accessible on the Open Science Framework (OSF; https://osf.io/dhtq8/).
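The convergence check can be illustrated with a basic (non-split) version of the R-hat statistic; brms/rstan report a more refined split-R-hat (Gelman et al., 2013), so this sketch, with our own function name, only conveys the idea: compare between-chain and within-chain variance of a parameter's samples.

```python
from statistics import mean, variance

def r_hat(chains):
    """Potential scale reduction factor (non-split R-hat) for a list
    of equal-length MCMC chains of one parameter. Values near 1
    indicate the chains are sampling the same posterior distribution."""
    n = len(chains[0])
    chain_means = [mean(c) for c in chains]
    B = n * variance(chain_means)            # between-chain variance
    W = mean(variance(c) for c in chains)    # mean within-chain variance
    var_plus = (n - 1) / n * W + B / n       # pooled posterior variance estimate
    return (var_plus / W) ** 0.5
```

Chains exploring the same region yield R-hat close to 1, whereas chains stuck in different regions inflate the between-chain variance and drive R-hat well above the 1.02 cutoff used in the paper.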

Results

Pre/post comparison

In the discussion of the results for each motion condition we focus mainly on the parameters b_post*velocity and b_post*control*velocity, as these parameters reflect perceptual learning. However, if there is a relevant finding (e.g., concerning the bias) it will also be discussed. The parameter b_velocity was positive in all motion conditions used in the experiment. This is not surprising, as it suggests that in the reference category subjects were able to perform the task and that their discrimination ability improved with increasing stimulus level (i.e., their psychometric function showed a positive slope).

Roll 0.2 Hz

A full account of all parameter estimates for the roll 0.2 Hz motion condition can be found in Table 2 and is illustrated in Fig. 1a. There was an increase in the slope of the psychometric function (i.e., a decrease in threshold) when comparing the pre- and post-test condition in the training group (b_post*velocity = 1.37, 95% CrI [0.61; 2.18]). The negative three-way interaction of velocity, time of measurement, and group indicates that this increase in sensitivity was not present in the control group (b_post*control*velocity = -1.26, 95% CrI [-2.18; -0.35]). Indeed, a supplementary analysis with the control group as reference confirms that there was no improvement between pre- and post-test in the control group (b_post*velocity = 0.22, 95% CrI [-0.40; 0.66]; see Table S1 in the Supplementary Materials for a detailed account of parameter estimates of the model with the control group as reference). Training improved sensitivity in 0.2-Hz roll tilt discrimination. This is evidence for perceptual learning of self-motion discrimination in the dark. In the training group, the average threshold across subjects was reduced by 33%, from 0.36 °/s (range: 0.27–0.65 °/s) before training to 0.24 °/s (range: 0.17–0.40 °/s) after training. Moreover, each individual subject showed a reduction in threshold between the two measurements (see Fig. 2a). In the control group, the threshold was 0.35 °/s (range: 0.20–0.82 °/s) at the first measurement and 0.33 °/s (range: 0.17–3.80 °/s) at the second measurement.

Table 2 Model summary for the roll 0.2 Hz pre/post comparison
Fig. 1
figure1

Visualization of fitted psychometric functions estimated with the hierarchical model. a Proportion of right responses as a function of angular velocity in the roll 0.2-Hz condition. There is an increase in slope of the psychometric function (i.e., increased discriminability) between the two measurements in the training group (left panel), but not in the control group (right panel). b Proportion of right responses as a function of angular velocity in the roll 1-Hz condition. The slope of the psychometric function in the post-test is increased compared to the pretest for the training group (left panel) and for the control group (right panel). c Proportion of forward responses as a function of angular velocity in the pitch 0.2-Hz condition. There is neither an increase in slope for the training group (left panel) nor for the control group (right panel). d Proportion of right responses as a function of velocity in the y-translation 0.2-Hz condition. There is neither an increase in slope for the training group (left panel) nor for the control group (right panel)

Fig. 2
figure2

Perceptual thresholds for all subjects in the roll 0.2-Hz (a) and roll 1-Hz (b) motion conditions. Data points represent varying effects of logthresholds for each subject, which were estimated in the hierarchical generalized linear model. Each color represents a single subject before and after the training or the waiting period. The training group is visualized in the left panels, and the control group in the right panels. Larger gray circles represent population estimates of logthresholds. Thresholds were log transformed for better scaling of the visualization

Additionally, the model also suggests an overall bias toward the response right for the reference group (b_intercept = 0.23, 95% CrI [0.09; 0.36]). A lack of any other effects concerning the response tendencies suggests that there is a preference for rightward responses in all motion conditions. This is reflected in the slight leftward shift of all psychometric functions in Fig. 1a.

Roll 1 Hz

All parameter estimates for the roll 1-Hz condition are summarized in Table 3, and Fig. 1b shows the psychometric functions. We found a similar increase in slope of the psychometric function as in the trained motion condition (roll 0.2 Hz) when comparing the pre- and post-test condition in the training group (b_post*velocity = 1.05, 95% CrI [0.24; 1.93]). We found no three-way interaction between velocity, time of measurement, and group, suggesting that the change between the first and the second measurement was the same for the control group (b_post*control*velocity = -0.26, 95% CrI [-1.36; 0.77]). This implies that there was also an increase in slope for the control group in the roll 1-Hz motion condition. This is supported by a supplementary analysis with the control group as reference category, which shows that the slope of the psychometric function of the control group increased between pre- and post-test (b_post*velocity = 0.78, 95% CrI [0.09; 1.47]; see Table S2 in the Supplementary Materials for a detailed account of parameters of the model with the control group as reference). The mean thresholds before and after training were 0.42 °/s (range: 0.26–1.39 °/s) and 0.29 °/s (range: 0.17–0.84 °/s), respectively, in the training group, a reduction of 31%. In the control group, the threshold was 0.33 °/s (range: 0.18–3.30 °/s) at the first measurement and 0.26 °/s (range: 0.16–0.74 °/s) at the second measurement; thus, it was reduced by 21% even without training. As in the roll 0.2-Hz condition, each subject of the training group showed a reduced threshold after training (Fig. 2b). In the control group, thresholds were reduced in all but one subject.

Table 3 Model summary for the roll 1 Hz pre/post comparison

Pitch 0.2 Hz

All parameter estimates for the pitch 0.2-Hz condition can be found in Table 4 and psychometric functions are visualized in Fig. 1c. We found no increase in slope of the psychometric function for the pitch 0.2-Hz condition between pre- and post-test in the training group (b_post*velocity = 0.20, 95% CrI [-0.36; 0.74]). There was also no three-way interaction between velocity, time of measurement and group (b_post*control*velocity = -0.13, 95% CrI [-0.87; 0.60]). Neither the experimental group nor the control group improved between the pre- and post-test measurement. In terms of thresholds, the training group had a mean threshold of 0.47 °/s (range: 0.27–0.89 °/s) before training and 0.43 °/s (range: 0.23–0.85 °/s) after training. In the control group, the threshold was 0.61 °/s (range: 0.23–1.86 °/s) at the first measurement and 0.59 °/s (range: 0.21–1.80 °/s) at the second measurement.

Table 4 Model summary for the pitch 0.2 Hz pre/post comparison

The parameter b_post suggests that there was a bias favoring forward responses in the reference group after the training session that was not present at the first measurement (b_post = 0.28, 95% CrI [0.05; 0.51]). This suggests that there was a shift in the decision criterion caused by the training.

Y-translation 0.2 Hz

Parameter estimates in the y-translation 0.2-Hz condition are summarized in Table 5 and visualized in Fig. 1d. The slope of the psychometric function in the y-translation 0.2-Hz condition between pre- and post-test in the training group did not increase (b_post*velocity = 0.32, 95% CrI [-5.12; 6.01]). We found no three-way interaction between velocity, time of measurement, and group (b_post*control*velocity = -0.13, 95% CrI [-0.87; 0.59]), thus there was no improvement in the control group either. In the training group, the mean threshold was 0.13 m/s (range: 0.08–0.46 m/s) before training and 0.12 m/s (range: 0.03–15.07 m/s) after the training. In the control group, the threshold was 0.08 m/s (range: 0.02–0.52 m/s) at the first measurement and 0.08 m/s (range: 0.01–0.41 m/s) at the second measurement.

Table 5 Model summary for the Y-translation 0.2 Hz pre/post comparison

Pitch 1 Hz

One subject had to be excluded from this condition due to a mistake in data recording. In the subjects tested with pitch 1 Hz, we found no increase in sensitivity between the first and second measurement (b_post*velocity = -0.04, 95% CrI [-1.46; 1.37]; see Table 6 for a summary of results). The threshold for this group was 0.36 °/s (range: 0.22–1.27 °/s) at the first measurement and 0.37 °/s (range: 0.19–1.43 °/s) at the second measurement.

Table 6 Model summary for the pitch 1 Hz pre/post comparison

Training effect

The effects of training varied between individuals. The estimated d’ values for each subject (varying effects) during training show that some subjects improved over time. Five of the ten subjects show a positive slope of d’ over the course of the 18 blocks; of the remaining five, one shows a zero slope and four show a negative slope. See Table 7 and Fig. 3 for a detailed summary of the varying effects between subjects. Two of the subjects with a negative slope were asked to repeat the training with a slightly higher velocity, to test whether the training stimuli had been too difficult for learning to become evident. Indeed, visual inspection indicates a positive learning curve with the easier stimuli (see blue dots in Fig. 3).

Table 7 Summary of effects of the slope of d’ as a function of the block for each subject (varying effects)
Fig. 3
figure3

Visualization of d’ over the training blocks for each subject individually (varying effects). Black dots indicate the model prediction of d’ for each block with 95% CrI. Red dots are d’ values calculated on the basis of the proportions of hits and false alarms for each subject and block. The blue dots (subjects 3 and 6) represent d’ estimated on the basis of the proportions of hits and false alarms for the second time these subjects completed the training. These data were not included in the fitted model and only serve to illustrate the hypothesized explanation that the stimuli were too difficult in the training sessions. Missing data points in the first three blocks indicate that the motion intensity was changed for that subject and that the data recorded before the change were not included in the analysis.

On the population level, d’ in the first block of training was credibly higher than 0 (b_direction = 0.94, 95% CrI [0.51; 1.38]; sd_direction = 0.64, 95% CrI [0.35; 1.18]). A d’ of 0 corresponds to chance performance in a discrimination task; thus, participants were above chance in discriminating leftward from rightward rotations in the first block. The additive effect of the block on d’, i.e., the slope of d’ over the blocks, did not differ from 0 on the population level (b_direction*block = 0.01, 95% CrI [-0.03; 0.06]; sd_direction*block = 0.06, 95% CrI [0.04; 0.11]). This suggests that d’ did not increase as a function of block number and that performance on the training task stayed the same over the course of the 18 training blocks. A visualization of d’ as a function of the block for the population is shown in Fig. 4.

Fig. 4
figure4

Model fit of d’ as a function of the block. Black dots are mean population estimates of d’, with bars indicating the 95% CrI. Red dots are d’ values calculated on the basis of the proportions of hits and false alarms for each subject and block

Analysis of the response tendency did not reveal any substantial biases on the population level. The intercept of the modelled training data did not differ from 0, suggesting that participants did not have a bias for either leftward or rightward responses in the first block of training (b_Intercept = 0.06, 95% CrI [-0.12; 0.25]; sd_Intercept = 0.28, 95% CrI [0.17; 0.48]). The additive effect of the block number on the intercept, i.e., the slope of the bias over blocks, did not differ from 0 (b_block = 0.00, 95% CrI [-0.01; 0.02]; sd_block = 0.03, 95% CrI [0.02; 0.05]). This suggests that the bias did not change over the course of the training.

Discussion

Subjects were trained on a 0.2-Hz roll tilt self-motion direction discrimination task in the dark. Self-motion perception thresholds for different motion frequencies and axes were assessed before and after training. After 6 days of training (9 h, 1,800 trials), perceptual thresholds in the 0.2-Hz roll direction discrimination task were reduced by 33%, indicating better roll direction discrimination after the training.

This is – to our knowledge – the first demonstration of perceptual learning of self-motion perception in the dark. In a previous study, Hartmann et al. (2013) used yaw rotations about an earth vertical axis (semicircular canal input only) and y-translations (otolith input only) and found no evidence for perceptual learning in the dark. Here, we used 0.2-Hz roll tilts (combined otolith and semicircular canal input) about an earth horizontal axis and we found learning. In roll tilt perception, the dynamic signal from the semicircular canals is integrated with gravitational cues from the otoliths (Lim et al., 2017). We conclude that the integration of these signals is most likely responsible for perceptual learning. This is in line with two recent studies on learning of dynamic balancing (Vimal, DiZio, & Lackner, 2017; Vimal, Lackner, & DiZio, 2018). In these studies, subjects were able to learn a dynamic balancing task in an upright roll rotation task or in a supine yaw rotation task. Removing the gravitational cues in the supine roll rotation task and the upright yaw rotation task impaired overall performance and learning of the balancing task.
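The statistically optimal integration of canal and otolith signals referenced above (Lim et al., 2017) predicts that the combined estimate is less noisy than either cue alone. A minimal sketch of this reliability-weighted combination rule, with hypothetical single-cue noise levels:

```python
import math

def combined_sigma(sigma_a: float, sigma_b: float) -> float:
    """Noise (SD) of the statistically optimal combination of two
    independent cues with Gaussian noise sigma_a and sigma_b."""
    return math.sqrt((sigma_a**2 * sigma_b**2) / (sigma_a**2 + sigma_b**2))

# Hypothetical noise levels (arbitrary units) for the canal and otolith cues:
canal, otolith = 1.0, 2.0
print(combined_sigma(canal, otolith))  # ≈ 0.894, below either cue alone
```

The combined noise is always below that of the more reliable cue, which is why threshold measurements during roll tilt can be more sensitive than either unisensory condition.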

We did not find any transfer of perceptual learning to the 0.2-Hz pitch or the 0.2-Hz y-translation condition. This lack of transfer is in line with most perceptual learning studies and with the previous study on perceptual learning of self-motion perception (Dosher & Lu, 2017; Hartmann et al., 2013). A transfer to the unisensory y-translation condition could have been expected based on studies of multisensory perceptual learning (Shams & Seitz, 2008). However, it should be noted that the otolith input during y-translation is purely due to linear translation and thus is not directly comparable to the gravitational cues present during roll tilt.

The perceptual improvement appears to be specific to the roll plane. However, we did observe increased performance after 0.2-Hz roll tilt training in the roll tilt 1-Hz condition. This may not represent a transfer effect because we found a similar increase in sensitivity in the 1-Hz condition for the control group that received no training. Fast learning has been reported in visual tasks (Fahle, Edelman, & Poggio, 1995; Poggio, Fahle, & Edelman, 1992) and may account for the improvement in the 1-Hz condition without any training. Control subjects in our study completed a total of 280 trials, which is in line with the number of trials used in studies on fast perceptual learning. However, more research is needed to replicate and further investigate the unexpected finding obtained with 1-Hz roll motion.

Given that we found increased performance after training in the roll 0.2-Hz condition on the population level, it may seem surprising that we did not find a corresponding increase in d’ during the training itself. When looking at d’ as a function of training block, we found that d’ remained unchanged during training on the population level. Looking at the effects for each subject separately, half the subjects showed a positive association between block and d’, as would be expected when perceptual learning occurs. In the subjects who showed a negative association, d’ was relatively close to zero during most blocks. This implies that the chosen velocity was too difficult for these subjects and performance was close to chance, which impairs learning and makes measurements noisier. Thus, even if there was learning, it might not be visible in d’. Simulation studies show that even in the absence of human variation, threshold estimates vary by about 10-20% depending on the number of simulated trials (Chaudhuri & Merfeld, 2013). Indeed, repeated threshold measurements in humans show variations in thresholds consistent with these simulations (Clark, Galvan-Garza, Bermudez Rey, Yi, & Merfeld, 2015; Clark & Merfeld, 2016). Given these findings, it is likely that we underestimated the thresholds of these subjects and, thus, that the motion intensity during training was too difficult for learning to be evident during training (Goldhacker et al., 2014). Indeed, when two subjects repeated the training at a higher velocity, d’ appeared to increase over time (blue dots in Fig. 3). It is important to point out that the training did lead to overall improved sensitivity as assessed by the pre- and post-test findings, suggesting that there was a training effect despite the observed absence of changes in d’ during training.
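The variability of threshold estimates at finite trial counts, as in the simulations cited above, can be illustrated with a minimal sketch: a simulated observer whose probability of responding "rightward" follows a cumulative Gaussian in stimulus velocity, with the threshold recovered by maximum likelihood. All parameters (threshold, velocity levels, trial counts) are hypothetical and this is not the authors' adaptive procedure:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(1)

true_sigma = 1.0  # hypothetical direction discrimination threshold (deg/s)
velocities = np.array([-2.0, -1.0, -0.5, -0.2, 0.2, 0.5, 1.0, 2.0])
n_per_level = 25  # trials per velocity, 200 trials total

# One-interval direction task: P("rightward") = Phi(v / sigma)
p_right = norm.cdf(velocities / true_sigma)
n_right = rng.binomial(n_per_level, p_right)

def neg_log_lik(sigma):
    p = np.clip(norm.cdf(velocities / sigma), 1e-6, 1 - 1e-6)
    return -np.sum(n_right * np.log(p) + (n_per_level - n_right) * np.log(1 - p))

fit = minimize_scalar(neg_log_lik, bounds=(0.05, 10.0), method="bounded")
print(f"estimated threshold: {fit.x:.2f} (true: {true_sigma})")
```

Rerunning this with different random seeds spreads the estimates around the true value; a threshold underestimated in the pre-test would make the fixed training velocity harder than intended, consistent with the explanation above.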

Perceptual learning of self-motion perception can be of importance for rehabilitation in the context of vestibular disease and for the prevention of falls in the context of the reduced vestibular function associated with aging (Agrawal, Carey, Della Santina, Schubert, & Minor, 2009; Agrawal et al., 2012; Allen, Ribeiro, Arshad, & Seemungal, 2017). Elevated self-motion perception thresholds, thought to be caused by age-related decline in vestibular function, have been demonstrated repeatedly (Agrawal, Bremova, Kremmyda, Strupp, & MacNeilage, 2013; Bermúdez Rey et al., 2016; Bremova et al., 2016; Iwasaki & Yamasoba, 2015; Kingma, 2005; Roditi & Crane, 2012). A recent study showed that decreased balance test performance is associated with higher age and increased self-motion perception thresholds, especially in a roll 0.2-Hz condition (Karmali, Bermúdez Rey, Clark, Wang, & Merfeld, 2017). A re-analysis of these data showed that 50% of the age-related balance decline found in this data set was mediated by the aforementioned increase in 0.2-Hz roll tilt thresholds (Beylergil, Karmali, Wang, Bermúdez Rey, & Merfeld, 2019). Thus, reducing roll tilt 0.2-Hz direction discrimination thresholds may eventually prove to be a useful intervention to improve balance and reduce falls in elderly people. Future studies are needed to test whether the decrease in roll self-motion perceptual thresholds due to training reported herein can improve balance test performance, as the correlation reported in the literature suggests. Worse performance in balance tests is associated with higher morbidity, partly due to the risk of falling (Bermúdez Rey et al., 2016). Thus, we suggest that self-motion perception training should be further investigated with respect to its potential therapeutic value.
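The "proportion mediated" statistic from the re-analysis cited above can be illustrated with a minimal difference-in-coefficients sketch on simulated data. All coefficients, sample sizes, and noise levels below are hypothetical and chosen only to make the logic concrete; this is not the published analysis:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Simulated causal chain (coefficients hypothetical): age raises roll tilt
# thresholds, which in turn worsen balance; age also affects balance directly.
age = rng.normal(0.0, 10.0, n)
threshold = 0.5 * age + rng.normal(0.0, 2.0, n)
balance = 1.0 * threshold + 0.5 * age + rng.normal(0.0, 2.0, n)

def ols(y, predictors):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

total_effect = ols(balance, [age])[1]              # c:  balance ~ age
direct_effect = ols(balance, [age, threshold])[1]  # c': balance ~ age + mediator
prop_mediated = 1.0 - direct_effect / total_effect
print(f"proportion mediated: {prop_mediated:.2f}")  # true value here is 0.5
```

In this toy setup half of the age effect on balance flows through the threshold variable, mirroring the 50% mediation reported for the real data set.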

Conclusion

Roll tilt self-motion perceptual thresholds in the dark were decreased after 6 days of training; in other words, roll tilt direction discrimination improved. Given the previously reported correlation between roll tilt thresholds and balance, roll tilt direction discrimination training may be beneficial for people recovering from vestibular disease and for the elderly as a means to counteract the age-related decline of vestibular function. Future studies are needed to investigate whether this increase in sensitivity causally influences balance, and thus reduces falls.

References

  1. Aberg, K. C., Tartaglia, E. M., & Herzog, M. H. (2009). Perceptual learning with Chevrons requires a minimal number of trials, transfers to untrained directions, but does not require sleep. Vision Research, 49(16), 2087-2094.

  2. Agrawal, Y., Bremova, T., Kremmyda, O., Strupp, M., & MacNeilage, P. R. (2013). Clinical testing of otolith function: Perceptual thresholds and myogenic potentials. Journal of the Association for Research in Otolaryngology, 14(6), 905-915.

  3. Agrawal, Y., Carey, J. P., Della Santina, C. C., Schubert, M. C., & Minor, L. B. (2009). Disorders of balance and vestibular function in US adults: Data from the National Health and Nutrition Examination Survey, 2001-2004. Archives of Internal Medicine, 169(10), 938-944.

  4. Agrawal, Y., Zuniga, M. G., Davalos-Bichara, M., Schubert, M. C., Walston, J. D., Hughes, J., & Carey, J. P. (2012). Decline in semicircular canal and otolith function with age. Otology & Neurotology, 33(5), 832.

  5. Allen, D., Ribeiro, L., Arshad, Q., & Seemungal, B. M. (2017). Age-related vestibular loss: Current understanding and future research directions. Frontiers in Neurology, 7(231).

  6. Atienza, M., Cantero, J. L., & Dominguez-Marin, E. (2002). The time course of neural changes underlying auditory perceptual learning. Learning & Memory, 9(3), 138-150.

  7. Atkinson, J., Braddick, O., & Moar, K. (1977). Development of contrast sensitivity over the first 3 months of life in the human infant. Vision Research, 17(9), 1037-1044.

  8. Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255-278.

  9. Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1-48.

  10. Bermúdez Rey, M. C., Clark, T. K., Wang, W., Leeder, T., Bian, Y., & Merfeld, D. M. (2016). Vestibular perceptual thresholds increase above the age of 40. Frontiers in Neurology, 7(162).

  11. Beylergil, S. B., Karmali, F., Wang, W., Bermúdez Rey, M. C., & Merfeld, D. M. (2019). Chapter 18: Vestibular roll tilt thresholds partially mediate age-related effects on balance. Progress in Brain Research, 248, 249-267.

  12. Bremova, T., Caushaj, A., Ertl, M., Strobl, R., Böttcher, N., Strupp, M., & MacNeilage, P. R. (2016). Comparison of linear motion perception thresholds in vestibular migraine and Menière’s disease. European Archives of Oto-Rhino-Laryngology, 273(10), 2931-2939.

  13. Bürkner, P.-C. (2017). brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software, 80(1), 1-28.

  14. Chaudhuri, S. E., & Merfeld, D. M. (2013). Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions. Experimental Brain Research, 225(1), 133-146.

  15. Clark, T. K., Galvan-Garza, R. C., Bermudez Rey, M. C., Yi, Y., & Merfeld, D. M. (2015). Perceptual noise and sensorimotor adaptation. NASA Human Research Program Investigator’s Workshop, Galveston, TX, 13-15 Jan, 2015.

  16. Clark, T. K., & Merfeld, D. M. (2016). Vestibular perceptual noise and adaptation to an altered gravity environment. NASA Human Research Program Investigator’s Workshop, Galveston, TX, 8-11 Feb, 2016.

  17. De Niear, M. A., Noel, J.-P., & Wallace, M. T. (2017). The impact of feedback on the different time courses of multisensory temporal recalibration. Neural Plasticity, 2017.

  18. Dosher, B., & Lu, Z.-L. (2017). Visual perceptual learning and models. Annual Review of Vision Science, 3, 343-363.

  19. Ellis, A. W., Klaus, M. P., & Mast, F. W. (2017). Vestibular cognition: The effect of prior belief on vestibular perceptual decision making. Journal of Neurology, 264(1), 74-80.

  20. Fahle, M. (2005). Perceptual learning: Specificity versus generalization. Current Opinion in Neurobiology, 15(2), 154-160.

  21. Fahle, M., & Edelman, S. (1993). Long-term learning in vernier acuity: Effects of stimulus orientation, range and of feedback. Vision Research, 33(3), 397-412.

  22. Fahle, M., Edelman, S., & Poggio, T. (1995). Fast perceptual learning in hyperacuity. Vision Research, 35(21), 3003-3013.

  23. Fahle, M., & Poggio, T. A. (2002). Perceptual learning. Cambridge, MA: MIT Press.

  24. Gelman, A., & Hill, J. (2007). Data analysis using regression and multilevel/hierarchical models. Cambridge university press.

  25. Gelman, A., Stern, H. S., Carlin, J. B., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. Boca Raton, FL: Chapman and Hall/CRC.

  26. Gibson, E. J. (1969). Principles of perceptual learning and development. Englewood Cliffs, NJ: Prentice Hall.

  27. Gold, J. I., & Watanabe, T. (2010). Perceptual learning. Current Biology, 20(2), R46-R48.

  28. Goldhacker, M., Rosengarth, K., Plank, T., & Greenlee, M. W. (2014). The effect of feedback on performance and brain activation during perceptual learning. Vision Research, 99, 99-110.

  29. Grabherr, L., Nicoucar, K., Mast, F. W., & Merfeld, D. M. (2008). Vestibular thresholds for yaw rotation about an earth-vertical axis as a function of frequency. Experimental Brain Research, 186(4), 677-681.

  30. Guo, J., & Guo, A. (2005). Crossmodal interactions between olfactory and visual learning in Drosophila. Science, 309(5732), 307-310.

  31. Hartmann, M., Furrer, S., Herzog, M. H., Merfeld, D. M., & Mast, F. W. (2013). Self-motion perception training: thresholds Improve in the light but not in the dark. Experimental Brain Research, 226(2), 231-240.

  32. Iwasaki, S., & Yamasoba, T. (2015). Dizziness and imbalance in the elderly: Age-related decline in the vestibular system. Aging and Disease, 6(1), 38.

  33. Jian, B., Shintani, T., Emanuel, B., & Yates, B. (2002). Convergence of limb, visceral, and vertical semicircular canal or otolith inputs onto vestibular nucleus neurons. Experimental Brain Research, 144(2), 247-257.

  34. Karmali, F., Bermúdez Rey, M. C., Clark, T. K., Wang, W., & Merfeld, D. M. (2017). Multivariate analyses of balance test performance, vestibular thresholds, and age. Frontiers in Neurology, 8(578).

  35. Katahira, K. (2016). How hierarchical models improve point estimates of model parameters at the individual level. Journal of Mathematical Psychology, 73, 37-58.

  36. Kingma, H. (2005). Thresholds for perception of direction of linear acceleration as a possible evaluation of the otolith function. BMC Ear, Nose and Throat Disorders, 5(1), 5.

  37. Knoblauch, K., & Maloney, L. T. (2012). Modeling psychophysical data in R (Vol. 32). Springer Science & Business Media.

  38. Kruschke, J. K. (2013). Bayesian estimation supersedes the t test. Journal of Experimental Psychology: General, 142(2), 573.

  39. Lewis, R. F., Priesol, A. J., Nicoucar, K., Lim, K., & Merfeld, D. M. (2011). Dynamic tilt thresholds are reduced in vestibular migraine. Journal of Vestibular Research, 21(6), 323-330.

  40. Lim, K., Karmali, F., Nicoucar, K., & Merfeld, D. M. (2017). Perceptual precision of passive body tilt is consistent with statistically optimal cue integration. Journal of Neurophysiology, 117(5), 2037-2052.

  41. Merfeld, D. M. (2011). Signal detection theory and vestibular thresholds: I. Basic theory and practical considerations. Experimental Brain Research, 210(3-4), 389-405.

  42. Mittelstaedt, H. (1992). Somatic versus vestibular gravity reception in man. Annals of the New York Academy of Sciences, 656(1), 124-139.

  43. Mittelstaedt, H. (1996). Somatic graviception. Biological Psychology, 42(1-2), 53-74.

  44. Moore, D. R., Amitay, S., & Hawkey, D. J. (2003). Auditory perceptual learning. Learning & Memory, 10(2), 83-85.

  45. Moreno, M. M., Linster, C., Escanilla, O., Sacquet, J., Didier, A., & Mandairon, N. (2009). Olfactory perceptual learning requires adult neurogenesis. Proceedings of the National Academy of Sciences, 106(42), 17980-17985.

  46. Nicenboim, B., & Vasishth, S. (2016). Statistical methods for linguistic research: Foundational Ideas—Part II. Language and Linguistics Compass, 10(11), 591-613.

  47. Owen, D. H., & Machamer, P. K. (1979). Bias-free improvement in wine discrimination. Perception, 8(2), 199-209.

  48. Parkosadze, K., Otto, T. U., Malania, M., Kezeli, A., & Herzog, M. H. (2008). Perceptual learning of bisection stimuli under roving: Slow and largely specific. Journal of Vision, 8(1), 5-5.

  49. Pleger, B., Foerster, A.-F., Ragert, P., Dinse, H. R., Schwenkreis, P., Malin, J.-P., Nicolas, V., & Tegenthoff, M. (2003). Functional imaging of perceptual learning in human primary and secondary somatosensory cortex. Neuron, 40(3), 643-653.

  50. Poggio, T., Fahle, M., & Edelman, S. (1992). Fast perceptual learning in visual hyperacuity. Science, 256(5059), 1018-1021.

  51. Roditi, R. E., & Crane, B. T. (2012). Directional asymmetries and age effects in human self-motion perception. Journal of the Association for Research in Otolaryngology, 13(3), 381-401.

  52. Sathian, K., & Zangaladze, A. (1998). Perceptual learning in tactile hyperacuity: Complete intermanual transfer but limited retention. Experimental Brain Research, 118(1), 131-134.

  53. Seitz, A. R., Kim, R., & Shams, L. (2006). Sound facilitates visual learning. Current Biology, 16(14), 1422-1427.

  54. Shams, L., & Seitz, A. R. (2008). Benefits of multisensory learning. Trends in Cognitive Sciences, 12(11), 411-417.

  55. Stan Development Team. (2018). RStan: The R interface to Stan. R package version 2.17.3.

  56. Valko, Y., Lewis, R. F., Priesol, A. J., & Merfeld, D. M. (2012). Vestibular labyrinth contributions to human whole-body motion discrimination. Journal of Neuroscience, 32(39), 13537-13542.

  57. Vimal, V. P., DiZio, P., & Lackner, J. R. (2017). Learning dynamic balancing in the roll plane with and without gravitational cues. Experimental Brain Research, 235(11), 3495-3503.

  58. Vimal, V. P., Lackner, J. R., & DiZio, P. (2018). Learning dynamic control of body yaw orientation. Experimental Brain Research, 236(5), 1321-1330.

  59. Von Kriegstein, K., & Giraud, A.-L. (2006). Implicit multisensory associations influence voice recognition. PLoS Biology, 4(10), e326.

  60. Wichmann, F. A., & Hill, N. J. (2001). The psychometric function: I. Fitting, sampling, and goodness of fit. Perception & Psychophysics, 63(8), 1293-1313.

  61. Wilson, D. A., & Stevenson, R. J. (2003). The fundamental role of memory in olfactory perception. Trends in Neurosciences, 26(5), 243-247.

  62. Wolfe, J., Kluender, K., Levi, D. M., Bartoshuk, L., Herz, R., Klatzky, R., & Merfeld, D. M. (2018). Chapter 12: Vestibular sensation. Sensation and Perception (pp. 378-419). Cary, NC: Oxford University Press USA.

  63. Yates, B., & Stocker, S. (1998). Integration of somatic and visceral inputs by the brainstem: Functional considerations. Experimental Brain Research, 119(3), 269-275.

Acknowledgements

The work was supported by SNF Grants 162480 and 147164 (to FWM) and NIH/NIDCD grant R01 DC014924 (to DMM). We would like to thank the participants and Daniel Fitze, Livio Hardegger, Leona Knüsel, and Muriel Sauvant for assistance in data collection.

Author information

Corresponding author

Correspondence to Manuel P. Klaus.


Electronic supplementary material

ESM 1

(DOCX 28.7 kb)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Klaus, M.P., Schöne, C.G., Hartmann, M. et al. Roll tilt self-motion direction discrimination training: First evidence for perceptual learning. Atten Percept Psychophys 82, 1987–1999 (2020). https://doi.org/10.3758/s13414-019-01967-2

Keywords

  • Vestibular System
  • Self-Motion Perception
  • Roll Tilt
  • Perceptual Learning
  • Multisensory Processing