Psychonomic Bulletin & Review, Volume 20, Issue 1, pp 21–53

Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review

  • Roland Sigrist
  • Georg Rauter
  • Robert Riener
  • Peter Wolf
Theoretical Review

Abstract

It is generally accepted that augmented feedback, provided by a human expert or a technical display, effectively enhances motor learning. However, discussion of the way to most effectively provide augmented feedback has been controversial. Related studies have focused primarily on simple or artificial tasks enhanced by visual feedback. Recently, technical advances have made it possible also to investigate more complex, realistic motor tasks and to implement not only visual, but also auditory, haptic, or multimodal augmented feedback. The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories. The review addresses the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities and the challenges that need to be overcome to provide appropriate feedback in these modalities, either in isolation or in combination. Accordingly, the design criteria for successful visual, auditory, haptic, and multimodal feedback are elaborated.

Keywords

Skill learning and automaticity · Augmented extrinsic feedback · Unimodal feedback · Feedback strategy

Introduction

In the field of sports, trainers want their athletes to jump higher or run faster—in general, to perform motor tasks better. In rehabilitation, therapists want their patients to recover lost motor functions as quickly and permanently as possible. The aim of research in motor learning is to enhance these examples of complex motor (re-)learning by optimizing instructions and feedback. Depending on the motor feature to be learned, trainers and therapists switch modalities to instruct the motor task; for instance, instead of visually demonstrating the movement, they move the athlete or patient through it. Technical displays, which have become increasingly common for providing augmented feedback, can also address different modalities: vision (screens, head-mounted displays), hearing (speakers, headphones), haptics (robots, vibrotactile actuators), or a combination of them.

Feedback strategies may also be classified according to the point in time at which feedback is provided: either during motor task execution (i.e., concurrent [online, real-time] feedback) or after it (i.e., terminal feedback). Recently, the benefits of concurrent, as compared with terminal, feedback strategies have been a matter of controversy. The literature related to this controversy forms the basis of this review; we elaborate the potential and the limitations of concurrent feedback strategies for enhancing motor learning. In this context, we particularly consider motor task complexity and the applied feedback modality.

The review starts with definitions of relevant terms, followed by general remarks on augmented feedback to broaden the contextual integration of the statements. In the subsequent sections, the effectiveness of concurrent feedback strategies is discussed separately for the visual, auditory, and haptic modalities, followed by a section on multimodal feedback strategies (see Fig. 1). In each of these sections, the advantages of displaying feedback in that modality or in a multimodal manner are listed, existing studies are reviewed, and guidelines for designing feedback displays are provided.
Fig. 1

Illustration of the review outline and summary of the main conclusions. The figure shows the experimentally confirmed (solid) and our hypothesized (dashed) effectiveness of a feedback strategy to enhance motor learning depending on functional task complexity. The broader the shape, the more effective the strategy is

We mainly review studies with healthy subjects. Studies with patients are only partly considered, since patients often know how a movement should be performed but are physically not able to do so (Yang & Kim, 2002). Feedback may, therefore, not exclusively facilitate motor learning but, rather, enhance compensatory mechanisms and strategies in order to overcome loss of motor function due to a damaged neuromuscular system. Thus, patients may benefit from augmented feedback in a way that is different from motor learning in healthy individuals. Even though the effectiveness of augmented feedback applied in rehabilitation is beyond the scope of the present review and has been discussed elsewhere (Huang, Wolf, & He, 2006; Molier, Van Asseldonk, Hermens, & Jannink, 2010; Ribeiro, Sole, Abbott, & Milosavljevic, 2011), some studies will be considered for a broader discussion of feedback design aspects.

Definitions

Augmented feedback, also known as extrinsic feedback, is defined as information that cannot be elaborated without an external source; thus, it is provided by a trainer or a display (Schmidt & Wrisberg, 2008; Utley & Astill, 2008). The term display is not constrained to the visual modality—for example, screens or projectors. Headphones and speakers are also called auditory displays, and robots can act as haptic displays. Augmented feedback can relate the learner’s individual performance to a desired performance or to an instruction. Instructions are used to emphasize certain aspects of the movement, to remind the learner of previously explained principles (Schmidt & Wrisberg, 2008), or to induce a certain focus (Wulf & Shea, 2002). Intrinsic or internal feedback—that is, sensory afference—is always present during motor learning. In this review, the term feedback means augmented feedback; otherwise, we explicitly refer to intrinsic feedback.

In this review, the word haptic refers to both tactile and kinesthetic perception: Tactile perception is usually conveyed through the skin, such as by vibrations or pressure; kinesthetic perception refers to receptors in muscles and tendons that allow us to feel the pose of our body (O’Malley & Gupta, 2008). In our definition, the term haptic augmented feedback extends the term haptic guidance, also known as physical guidance or physical assistance. Haptic guidance refers to physically guiding the subject through the ideal motion by a haptic interface (Feygin, Keehner, & Tendick, 2002). Beyond haptic guidance, haptic augmented feedback also includes any kind of haptic perception that teaches the necessary features that guide the subject toward, and not necessarily through, the desired motion. This definition also distinguishes haptic augmented feedback from haptic rendering, which refers to feeling virtual objects haptically (Salisbury & Srinivasan, 1997).

Motor learning describes a lasting change of motor performance caused by training. Motor learning includes the development of a parameterized motor program, which forms the basis of the feedforward control strategy, as well as the gradual reduction of the variability in the newly developed motor program via sensory feedback loops (Shmuelof, Krakauer, & Mazzoni, 2012). At a behavioral level, motor learning can be characterized by three different phases (Fitts & Posner, 1967; Schmidt & Wrisberg, 2008). In an early, attention-demanding phase, learning progresses rapidly, and a first movement representation—that is, the motor program—of the to-be-learned task is formed. During a second phase, the motor representations are further refined, and error detection/correction mechanisms are improved. Sensory afferences of the ongoing movement are compared with the intended motor output, and errors are corrected either online, when the movement is slow, or in a subsequent movement, when trials are fast. Consequently, overall error and movement variability are reduced. Finally, in a third phase, movements are performed in a highly automatized and consistent manner. Consequently, in this review, we characterize motor learning by a lasting increase of performance assessable in short- and long-term retention tests, when augmented feedback is withdrawn, or in transfer tests, when different or related movements are performed. If an augmented feedback strategy was tested only for its benefits in enhancing current performance—that is, during the training with the feedback—we will explicitly refer to this issue. To define task complexity, we refer to a general description provided by Wulf and Shea (2002): “We will judge tasks to be complex if they generally cannot be mastered in a single session, have several degrees of freedom, and perhaps tend to be ecologically valid. Tasks will be judged as simple if they have only one degree of freedom, can be mastered in a single practice session, and appear to be artificial” (Wulf & Shea, 2002, p. 186). Data from studies on simple tasks reflect the second phase of learning, whereas studies of complex tasks provide insights about the processes occurring in the first phase.

Important contextual insights gained from studies on visual feedback

During the last few years, vision has been the modality most intensively investigated in the context of optimizing augmented feedback for motor learning. This research has revealed how (visual) feedback strategies can either facilitate or impair motor learning. General opportunities, as well as the pitfalls, of visual augmented feedback will be described next.

In general, concurrent feedback can enhance performance in the acquisition phase, but the performance gains are lost in retention tests. This finding is explained by the guidance hypothesis, which states that permanent feedback during acquisition leads to a dependency on the feedback (Salmoni, 1984; Schmidt, 1991; Schmidt, Young, Swinnen, & Shapiro, 1989). This guidance forces learners to ignore their intrinsic feedback—that is, proprioception. Evidence for this performance loss has been provided in studies on simple motor tasks applying concurrent feedback or very frequent terminal feedback (Schmidt & Wulf, 1997; Van der Linden, Cauraugh, & Greene, 1993; Winstein et al., 1996). The guidance hypothesis is also supported by results of studies on visuomotor adaptation in simple motor tasks (Bernier, Chua, & Franks, 2005; Heuer & Hegele, 2008; Sulzenbruck & Heuer, 2011).

The specificity-of-learning hypothesis states that learning involves integration of the optimal sources of afferent information for performing the given task, at the expense of other sources of afferent information—for example, proprioception (Proteau, 1992). Usually, augmented feedback is designed to be this optimal source and becomes part of the task itself. This hypothesis has been supported in studies on simple tasks such as aiming tasks (Proteau, 2005; Proteau & Isabelle, 2002; Robin, Toussaint, Blandin, & Proteau, 2005), visuomotor adaptation tasks (Bernier et al., 2005), arm movement pattern reproduction tasks (Blandin, Toussaint, & Shea, 2008), and a force production task (Ranganathan & Newell, 2009). Accordingly, concurrent feedback may change the task (Schmidt & Wrisberg, 2008), and performance is expected to decrease when the feedback is withdrawn—that is, when the original task must be performed. The concurrent information may impose training of a specific control strategy that can be recalled better than the untrained strategy when the task is addressed without additional information. For instance, when concurrent visual information is available, a fast-developing component in visual-spatial coordinates—for example, the location of sequential target positions—is used to control a movement pattern. When no concurrent visual information is available, a slow-developing component in motor coordinates—for example, muscle activation patterns—is used to preplan movements (Kovacs, Boyle, Grutmatcher, & Shea, 2010).

However, one should be careful in transferring conclusions from studies on simple tasks to complex task learning (Guadagnoli & Lee, 2004; Winstein, 1991; Wulf & Shea, 2002). Concurrent, as well as very frequent, feedback was found to be detrimental for simple task learning, but this might not be true for complex, sport-related task learning, a view supported by a meta-analysis (Marschall, Bund, & Wiemeyer, 2007). It seems that the more complex the task, the more the trainee can profit from concurrent feedback. One reason could be that concurrent feedback attracts an external focus of attention (Shea & Wulf, 1999), which was found to be beneficial for motor learning, since it “promotes automaticity in movement control” (Wulf, 2007a, p. 4). Another reason might be that in an early learning phase, concurrent feedback can prevent cognitive overload and, therefore, enhance learning of complex motor tasks (Wulf & Shea, 2002). The discovery of the structure of a new movement (Braun, Mehring, & Wolpert, 2010; Wolpert, Diedrichsen, & Flanagan, 2011; Wolpert & Flanagan, 2010) may be facilitated by concurrent feedback that makes the relevant information more accessible. The guiding role of concurrent feedback might, therefore, have a positive effect by making the complex motor task easier to understand (Huegel & O’Malley, 2010).

In the early learning phase, guidance in the form of concurrent feedback or very frequent terminal feedback has been suggested to be effective (Liebermann et al., 2002). Accordingly, concurrent feedback could also be combined with very frequent terminal feedback, since the latter seems to reduce dependency on the former by limiting information processing of the concurrent feedback (Blandin et al., 2008). However, once the learner has an idea of the movement—that is, once the first phase of learning has been completed—the learner may profit more from less frequent feedback, either concurrent or terminal. No-feedback trials are needed to develop a persistent internal movement representation, which can be recalled in retention tests when augmented feedback is withdrawn (Crowell & Davis, 2011; Kovacs & Shea, 2011; Winstein, 1991). Thus, the frequency of feedback should decrease with increasing skill level—that is, with decreasing functional task complexity—to further facilitate motor learning (Guadagnoli & Lee, 2004; Timmermans, Seelen, Willmann, & Kingma, 2009; Wulf & Shea, 2002; Wulf, Shea, & Matschiner, 1998). Functional task complexity depends on the current individual skill level and changes during the learning process, whereas nominal task complexity remains invariant (Guadagnoli & Lee, 2004).

Different adaptations of feedback frequency to an increasing skill level have been proposed for terminal feedback. Fading feedback—that is, feedback frequency reduced over time—has been shown to be effective (Crowell & Davis, 2011; Kovacs & Shea, 2011). However, the optimal fading rate is generally unknown. Feedback reduction usually follows a predefined schedule and might, therefore, not be optimal for each individual. To respect individual progress, performance-based feedback adaptations have also been introduced (Huegel & O’Malley, 2010). Bandwidth feedback—that is, feedback when the movement error exceeds (or is within) a certain threshold (Ribeiro et al., 2011)—should force the learner to repeat good trials (Winstein, 1991). Bandwidth feedback has been shown to be effective (Timmermans et al., 2009); however, setting the error threshold is not trivial (Ribeiro et al., 2011). Adverse thresholds could promote maladaptive short-term corrections; that is, the learner corrects irrelevant errors originating from noise in the sensory–motor system (Schmidt, 1991), which may “hinder the development of a stable movement representation” (Chiviacowsky & Wulf, 2002, p. 413). Thus, errors caused by motor noise should be ignored; instead, errors in motor planning should be minimized (van Beers, 2009). Since the brain cannot determine the origin of the current error, it has been suggested that the brain corrects errors only incompletely (Liu & Todorov, 2007) to minimize variance in movement (van Beers, 2009). This matches a theory of optimal motor control, which states that task-irrelevant errors or variability arising from sensory–motor noise should be left untreated to maximize performance (Liu & Todorov, 2007; Todorov, 2004; Todorov & Jordan, 2002; Wei & Körding, 2009; Wolpert et al., 2011).
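The fading and bandwidth schedules discussed above can be sketched as simple decision rules. The following Python fragment is a hypothetical illustration of the two ideas, not a schedule taken from any of the cited studies; the block size and the halving rate are arbitrary assumptions:

```python
def bandwidth_feedback(error, threshold):
    # Bandwidth rule: feedback only when the absolute movement error exceeds
    # the threshold; smaller errors are attributed to sensory-motor noise
    # and deliberately left uncorrected.
    return abs(error) > threshold

def fading_feedback(trial, block_size=10):
    # Fading rule: feedback on every trial in the first practice block, on
    # every 2nd trial in the second block, every 4th in the third, and so on,
    # so that feedback frequency decreases as skill increases.
    interval = 2 ** (trial // block_size)
    return trial % interval == 0
```

For example, with a threshold of 5°, a 3° error would trigger no feedback, whereas an 8° error would; under the fading rule, the learner receives feedback on all of trials 0–9 but only on every second trial in the next block.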

Self-controlled feedback allows the learner to determine when feedback should be provided. Advantages of self-controlled feedback are seen in its adaptation to the learner’s needs, in allowing a focus on the aspect the learner currently wants to correct, in promoting deeper information processing, and in involving the learner in the learning process, resulting in increased motivation (Wulf, 2007b). Self-controlled terminal feedback has proven more effective than externally imposed terminal feedback in ball throwing (Janelle, Barba, Frehlich, Tennant, & Cauraugh, 1997; Janelle, Kim, & Singer, 1995), in sequential timing tasks with the index finger (Chiviacowsky & Wulf, 2002, 2005), and in a motor perception task requiring walking through virtual sliding doors on a treadmill (Huet, Camachon, Fernandez, Jacobs, & Montagne, 2009).

Self-controlled feedback per se cannot be the only reason for better learning, as Chiviacowsky and Wulf have shown: Subjects who had to decide prior to the trial whether they wanted to receive terminal feedback were outperformed by subjects who could decide after the trial (Chiviacowsky & Wulf, 2005). It seems that feedback is most effective if it is provided after good trials, due to enhanced motivation and positive reinforcement to repeat good trials (Chiviacowsky & Wulf, 2007). Indeed, self-controlled terminal feedback tends to be requested after learners believe that they have performed well (Chiviacowsky & Wulf, 2002, 2005). This ability to request feedback oneself and relate it to one’s own performance may promote self-efficacy (Chiviacowsky & Wulf, 2005). This promotion of self-efficacy has been suggested to have more impact on motor learning than well-chosen feedback frequency (Wulf, 2007b; Wulf, Shea, & Lewthwaite, 2010).

Besides self-efficacy, self-estimation of movement error is also stated to facilitate motor learning (Guadagnoli & Kohl, 2001; Liu & Wrisberg, 1997), due to a better development of error detection capabilities (Swinnen, Schmidt, Nicholson, & Shapiro, 1990). Delaying terminal feedback for a few seconds is believed to allow sufficient time for self-estimation of the error to prevent reliance on extrinsic feedback (van Vliet & Wulf, 2006). Therefore, a disadvantage of concurrent feedback might be that self-estimation of the error is hindered. However, to profit from self-estimation of the actual movement error, learners should know the targeted movement in general to be able to self-estimate their performance—a prerequisite that might not be fulfilled in early learning phases (Sigrist, Rauter, Riener, & Wolf, 2011).

To get an idea of the targeted movement, feedback in complex tasks should be prescriptive—that is, it should inform the learner about how to correct the error—rather than descriptive (i.e., informing only about the occurrence of an error) (Tzetzis, Votsis, & Kourtessis, 2008). After an internal movement representation has been developed, descriptive feedback becomes meaningful. Accordingly, terminal feedback facilitates learning as soon as the learner is able to associate the terminal feedback with the prior performance. Before this ability is acquired, prescriptive terminal feedback, but also concurrent feedback, is suggested to be more effective (Sulzenbruck & Heuer, 2011).

Due to its potential to enhance learning, concurrent feedback is increasingly provided by a variety of technical systems applied in sports and rehabilitation. Most commonly, concurrent feedback is displayed visually, since this is, at first glance, the most natural and easiest way. The effectiveness of visual concurrent feedback and visualization design aspects are discussed next.

Visual feedback: Impact of task complexity and design criteria

Vision is often regarded as the most important perceptive modality during interaction with the environment in daily life. At least for perceiving spatial information, vision dominates the other senses (Nesbitt, 2003). Many motor tasks are impossible or, at least, much harder to perform without vision—for example, walking on uneven terrain, hitting a tennis ball, or skiing. In the field of motor learning, visual learning strategies such as learning by observation or by imitation, as well as by video demonstration, are well established. Accordingly, several researchers have investigated the effects of visual feedback on learning a motor task. The methods and results of these studies have revealed that the impact of augmented feedback depends on task complexity and skill level (Timmermans et al., 2009; Utley & Astill, 2008; Wulf & Shea, 2002). Thus, in the first part of this section, feedback provided during simple tasks is discussed separately from that provided during complex tasks. Besides task complexity, the way feedback is visualized can influence its effectiveness. Therefore, different visualization approaches are discussed in the second part of this section.

Effectiveness depends on task complexity

Visual feedback in simple tasks

The effects of concurrent visual feedback have often been investigated in simple laboratory tasks. During acquisition of a simple lever arm movement, a concurrent visual feedback group outperformed a no-feedback group (Schmidt & Wulf, 1997). However, in the retention tests, when feedback was withdrawn, accuracy and stability degraded in the feedback group, as compared with the no-feedback group. Similar effects have been reported in studies on simple isometric force production tasks (Ranganathan & Newell, 2009; Van der Linden et al., 1993) or a partial weight-bearing task (Winstein et al., 1996). As compared with terminal feedback, concurrent visual feedback led to better performance during acquisition. However, in retention tests, the concurrent visual feedback groups performed worse than the terminal feedback groups. Park, Shea, and Wright (2000) reported concurrent feedback to be effective in a hand force production task, but only in combination with terminal feedback on the same trial and no-feedback trials on alternate trials. The no-feedback trials during the acquisition phase were suggested to be important “to develop intrinsic error detection and correction capabilities” and to avoid dependency on the augmented, extrinsic feedback (Park et al., 2000, p. 294).

In studies on visuomotor adaptation with simple aiming movements, concurrent feedback was less effective than terminal feedback for acquiring the related internal model. In these studies, feedback about the movement was displayed as a transformed cursor movement on the screen. After training, subjects were still adapted to the manipulated feedback if they had practiced with terminal feedback—that is, aftereffects were present—but not if they had practiced with concurrent feedback (Bernier et al., 2005; Heuer & Hegele, 2008; Sulzenbruck & Heuer, 2011). However, contradictory results have also been reported (Hinder, Tresilian, Riek, & Carson, 2008).

The results of the aforementioned studies indicate that concurrent visual feedback is rather unfavorable for learning simple motor tasks. In general, those findings can be explained by the guidance hypothesis, which states that permanent feedback during acquisition leads to a dependency on the feedback (Salmoni, 1984; Schmidt, 1991; Schmidt et al., 1989). However, it cannot be excluded that learning of simple motor tasks may benefit from concurrent feedback if the training includes trials without feedback (Wulf & Shea, 2002) or is combined with terminal feedback (Blandin et al., 2008). Interestingly, in a simple aiming task, training with weak visual feedback (poor contrast on the screen) seemed to allow concomitant processing of visual and kinesthetic information, since performance was better in retention tests than after training with full visual feedback or without visual feedback. The weak visual feedback could, in principle, provide guidance to complete the task, but it did not prevent the development of a motor program and, thus, according to the specificity-of-learning hypothesis, became the optimal source of afferent information (Robin et al., 2005).

Visual feedback in complex tasks

In contrast to simple motor tasks, learning of complex tasks with concurrent visual feedback has predominantly been reported to be effective. In physical therapy, practice of complex mobilization skills was facilitated by concurrently displayed bars or force-time plots indicating the deviation from the target force: Snodgrass, Rivett, Robertson, and Stojanovski (2010) compared a group receiving combined concurrent and terminal feedback with a no-feedback group and reported superiority of the feedback group in retention tests. In the study by Lee, Moseley, and Refshauge (1990), the concurrent feedback group also outperformed the no-feedback group. Chang, Chang, Chien, Chung, and Hsu (2007) reported that subjects benefited equally from terminal and concurrent feedback. Indeed, these studies in physical therapy showed that concurrent feedback can contribute to the enhancement of motor learning. However, it was not shown to be more effective than terminal feedback.

Swinnen, Lee, Verschueren, Serrien, and Bogaerds (1997) reported that concurrent feedback can enhance learning of a 90° interlimb out-of-phase coordination task, whereas Maslovat, Brunke, Chua, and Franks (2009) reported negative effects of concurrent feedback on the same task. Both studies applied the same feedback visualization in the form of Lissajous figures, which display the displacement of one limb on the abscissa and the displacement of the other limb on the ordinate. Consequently, a 90° out-of-phase movement results in a perfect circle. Maslovat et al. (2009) assumed that the contradictory outcomes originate from the fact that in the study of Swinnen et al. (1997), feedback trials alternated with no-feedback trials. Indeed, also for interlimb 90° relative phase coordination, augmented feedback in the form of Lissajous figures was stated to be very effective if no-feedback trials are included to ensure that the learner can develop an internal movement representation (Kovacs & Shea, 2011). A recent study by Ronsse, Puttemans, et al. (2011), again on interlimb out-of-phase coordination, revealed that young subjects became dependent on visual feedback in the form of Lissajous figures. These findings and, consequently, the guidance hypothesis were corroborated by results from functional magnetic resonance imaging (fMRI) (Ronsse, Puttemans, et al., 2011).
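The Lissajous visualization described above can be made concrete with a small numerical sketch. This hypothetical Python fragment (not code from any of the cited studies) plots the displacement of one limb against that of the other; a perfect 90° out-of-phase movement then traces the unit circle, so any phase error shows up immediately as a deviation from circularity:

```python
import math

def lissajous_trace(phase_offset_deg, n=360):
    # One limb's displacement on the abscissa (x), the other's on the
    # ordinate (y); both limbs oscillate sinusoidally with unit amplitude.
    offset = math.radians(phase_offset_deg)
    return [(math.cos(2 * math.pi * t / n),
             math.cos(2 * math.pi * t / n + offset)) for t in range(n)]

# A 90-degree out-of-phase movement yields a circle: x^2 + y^2 = 1 everywhere.
circle = lissajous_trace(90)
assert all(abs(x * x + y * y - 1.0) < 1e-9 for x, y in circle)

# An in-phase movement (0 degrees) collapses onto the diagonal line y = x.
line = lissajous_trace(0)
assert all(abs(x - y) < 1e-9 for x, y in line)
```

The appeal of this display for the learner is that the target is a single salient shape (a circle) rather than two separate limb trajectories that must be compared mentally.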

In a study by Wishart, Lee, Cunningham, and Murdoch (2002) of the same task, younger adults profited from both terminal and concurrent feedback, whereas older adults profited from concurrent visual feedback only. Thus, older adults may experience the task as more complex than younger adults do. Since older adults may remain longer in the attention-demanding phase of learning, concurrent feedback can help them to grasp the general movement pattern. However, once the general movement pattern has been learned—that is, in the second phase of learning—feedback trials need to be mixed with no-feedback trials, or terminal feedback should be provided, so that the learner builds up a movement representation relying on feedforward control and online correction based on proprioceptive afferences.

In general, for complex tasks, it is favorable to decrease cognitive demands to prevent cognitive overload. Concurrent feedback may decrease cognitive load, since it attracts an external focus of attention (Wulf, 2007a), which has been confirmed in a balancing task on a stabilometer (Shea & Wulf, 1999). Similarly, more frequent, as well as blocked, feedback can also decrease cognitive load (Wulf & Shea, 2002). For learning a complex slalom-type movement on a ski simulator, concurrent visual feedback on every trial was more effective than feedback on every second trial (Wulf et al., 1998), and concurrent blocked feedback tended to be superior to concurrent serial (random) feedback (Wulf, Hörger, & Shea, 1999). These studies point out that not only the choice between terminal and concurrent feedback has an impact on learning, but also the feedback frequency—fixed, reduced over time, or self-controlled. Reduced feedback frequency—that is, fading feedback—seems beneficial for both terminal and concurrent feedback, as was shown in studies on training running technique (Crowell & Davis, 2011) and interlimb out-of-phase coordination (Kovacs & Shea, 2011).

Augmented reality—that is, the superposition of visualizations on the real environment—is a very popular technology in medicine for enhancing pre- and intraoperative procedures (Sielhorst, Feuerstein, & Navab, 2008). However, concurrent visual feedback provided by a head-mounted display did not enhance learning of a complex hand movement, as compared with instruction only (watch and reproduce) (Yang & Kim, 2002). As the authors themselves noted, the applied superposition of the target movement with a ghost in the head-mounted display might not have been optimal for this task, since the field of view was limited, forcing frequent head movements. An opposite tendency was shown for complex ball-throwing tasks: The concurrent superposition of target angles over the real video image tended to improve performance (Schack, Bockemühl, Schütz, & Ritter, 2008; Schack & Heinen, 2007).

Instead of using augmented reality to provide feedback on top of the real environment, Todorov, Shadmehr, and Bizzi (1997) built a virtual table tennis simulator, which rendered not only concurrent feedback, but the whole training environment. In an experiment, learning of a specific target shot was supported by concurrent visual feedback—that is, the superposition of the virtual racket movement on a virtual reference racket movement. Simulator training with concurrent feedback was thereby more efficient than actually hitting more real balls under the supervision of a real trainer (Todorov et al., 1997). Augmented reality and virtual reality simulators seem to have great potential to facilitate motor learning. However, to date, knowledge about how to design simulators that include augmented feedback in order to exploit this potential is still limited.

In summary, it seems that the more complex a task is, the more the learner can profit from concurrent visual feedback. Positive effects of concurrent feedback have been demonstrated in quite different tasks, such as mobilization in physical therapy (Chang et al., 2007; Lee et al., 1990; Snodgrass et al., 2010), interlimb out-of-phase coordination tasks (Kovacs & Shea, 2011; Swinnen et al., 1997; Wishart et al., 2002), a slalom-type movement on a ski simulator (Wulf et al., 1999; Wulf et al., 1998), a balancing task on a stabilometer (Shea & Wulf, 1999), ball throwing (Schack et al., 2008; Schack & Heinen, 2007), running (Crowell & Davis, 2011), indoor rowing (however, without testing retention) (Anderson, Harrison, & Lyons, 2005), and table tennis (Todorov et al., 1997). In the early phase of complex task learning, concurrent visual feedback can prevent cognitive overload (Wulf & Shea, 2002), make the relevant information more accessible, and help the learner to build up a first movement representation/motor program. If concurrent feedback is provided in subsequent phases, in which the motor program is refined, maladaptive error detection/correction mechanisms are trained because of reliance on the dominant visual modality, which is detrimental once this specific source is withdrawn.

Commonly, the effects of concurrent visual feedback have been investigated in complex tasks that are artificial rather than related to any sport, with a few exceptions (Eaves, Breslin, van Schaik, Robinson, & Spears, 2011; Todorov et al., 1997; Wulf et al., 1999; Wulf et al., 1998). Although a few sport simulators incorporating augmented or virtual reality have been developed, such as for rowing (Frisoli et al., 2008; Ruffaldi, Gonzales, et al., 2009; von Zitzewitz et al., 2008), canoeing (Tang, Carignan, & Olsson, 2006), bicycling (Carraro, Cortes, Edmark, & Ensor, 1998; Mestre, Maïano, Dagonneau, & Mercier, 2011), bobsledding (Kelly & Hubbard, 2000), archery (Göbel, Geiger, Heinze, & Marinos, 2010), gymnastics (Multon, Hoyet, Komura, & Kulpa, 2007), and dancing (Drobny & Borchers, 2010; Drobny, Weiss, & Borchers, 2009; Nakamura, Tabata, Ueda, Kiyofuji, & Kuno, 2005), these simulators have not been used to examine the effectiveness of augmented feedback for motor learning or to evaluate different visual feedback designs. The design of visual feedback may have a significant impact on the outcome; for example, feedback with reduced visibility fostered learning more than did fully visible feedback (Robin et al., 2005). Approaches to the design of visual feedback are reviewed in the next section.

Design aspects of visual feedback

In general, the impact of the type of error visualization—that is, of how the actual and the desired motor behavior are displayed—on the effectiveness of visual augmented feedback has not been systematically evaluated so far. Many possibilities exist for visualizing errors in kinematic or kinetic variables, ranging from abstract visualizations, such as simple plots, gauges, bars, lines, or numbers, to less abstract (natural) visualizations, such as 3-D animations or virtual mirrors. A systematic comparison of different visualizations has rarely been done in the field of motor learning: To instruct movement tasks, abstract sketches have been shown to be more effective than real pictures, but also more effective than a very abstract stickman illustration (Kruber, 1984). Animations can have an advantage over real videos, because animations can be reduced to the most relevant information, as has been shown in an assembly task (Petzold et al., 2004). Since humans can recognize complex biological motions by observing only a few point lights placed on a moving body (Giese & Poggio, 2003), point lights can also provide effective feedback, as has been shown for dancing (Eaves et al., 2011). However, feedback about other complex multidimensional movements in 3-D space might require more natural visualizations. In this section, different types of abstract and natural augmented feedback visualizations are discussed in relation to their effectiveness in enhancing motor learning.

Abstract visualizations

In many simple tasks, the task-relevant variable has been represented on a normal screen in the form of lines, curves, gauges, bars, or points (Eriksson, Halvorsen, & Gullstrand, 2011; Morris, Tan, Barbagli, Chang, & Salisbury, 2007; Park et al., 2000; Ranganathan & Newell, 2009; Ruffaldi, Filippeschi, et al., 2009; Schmidt & Wulf, 1997; Shea & Wulf, 1999; Van der Linden et al., 1993; Yang, Bischof, & Boulanger, 2008). In some studies, the actual variable was plotted simultaneously with a target trajectory that was displayed completely at the beginning of the trial (Park et al., 2000; Van der Linden et al., 1993; Yang et al., 2008). In other studies, the target trajectory emerged during task execution (Morris et al., 2007; Schmidt & Wulf, 1997). An arrow indicating the current score on a scale served as concurrent feedback for a simple weight-bearing task (Winstein et al., 1996). For simple tasks, abstract visualizations might be sufficient, since the small number of relevant variables can be meaningfully represented and cognitively mastered.

In complex tasks, visual feedback has also been provided by abstract visualizations (Crowell & Davis, 2011; Debaere, Wenderoth, Sunnaert, Van Hecke, & Swinnen, 2003, 2004; Eaves et al., 2011; Eriksson et al., 2011; Hurley & Lee, 2006; Kovacs & Shea, 2011; Lee et al., 1990; Lee, Swinnen, & Verschueren, 1995; Maslovat et al., 2009; Shea & Wulf, 1999; Smethurst & Carson, 2001; Snodgrass et al., 2010; Swinnen et al., 1998; Swinnen et al., 1997; Wishart et al., 2002; Wulf et al., 1999; Wulf et al., 1998). For complex interlimb coordination tasks, a displacement–displacement plot (Lissajous figure) has been used in many studies as an abstract concurrent feedback (Debaere et al., 2003, 2004; Kovacs & Shea, 2011; Maslovat et al., 2009; Ronsse, Puttemans, et al., 2011; Smethurst & Carson, 2001; Swinnen et al., 1998; Swinnen et al., 1997). Usually, the displacement of one limb was represented on the abscissa, the displacement of the other limb on the ordinate. Consequently, for example, a 90° out-of-phase interlimb coordination resulted in a circle configuration; a 135° out-of-phase pattern resulted in an ellipse configuration. For a complex 2-D out-of-phase coordination task, Lissajous-type feedback that displayed relative phase of both hands facilitated performance dramatically, but learning was not assessed (Boyles, Panzer, & Shea, 2012). Reference trajectories complemented the Lissajous figure in some studies, giving additional information about the range of the movement (Hurley & Lee, 2006; Kovacs & Shea, 2011; Lee et al., 1995; Maslovat et al., 2009; Maslovat, Chua, Lee, & Franks, 2004, 2006; Wishart et al., 2002). In general, concurrent visual feedback in the form of Lissajous figures was reported to be beneficial for learning a complex coordination pattern. Note that Lissajous figures are commonly not purely concurrent, since the trace of the movement is usually displayed for some time; thus, terminal feedback is provided as well. 
The challenge of out-of-phase coordination tasks is that two independent effectors have to be controlled in accordance with a common movement plan. The Lissajous figure facilitates this process because it provides a single outcome parameter capturing the relation of both limbs. In these situations, an abstract visualization such as a Lissajous figure seems to be sufficient because the number of relevant movement variables is relatively small.
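The geometry behind this single outcome parameter can be sketched in a few lines of code. The following is an illustrative reconstruction, not taken from any of the cited studies (the function name and parameters are our own); it assumes two limbs oscillating sinusoidally at the same frequency, so that a 90° relative phase traces a circle and a 135° phase traces a tilted ellipse, as described above.

```python
import numpy as np

def lissajous(phase_deg, n_samples=1000, amplitude=1.0):
    """Displacement-displacement trace for two limbs oscillating at the
    same frequency with a given relative phase (illustrative sketch)."""
    t = np.linspace(0.0, 2.0 * np.pi, n_samples)
    x = amplitude * np.sin(t)                          # limb 1 on the abscissa
    y = amplitude * np.sin(t + np.radians(phase_deg))  # limb 2 on the ordinate
    return x, y

# 90 deg out of phase: the trace has constant radius, i.e., a circle.
x, y = lissajous(90)
print(np.ptp(np.hypot(x, y)) < 1e-9)

# 135 deg out of phase: the radius varies, i.e., a tilted ellipse.
x, y = lissajous(135)
print(np.ptp(np.hypot(x, y)) > 0.5)
```

Because the figure collapses the two displacements into one trace, a learner only has to keep the trace on the target shape, rather than monitor two limbs separately.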

Similarly, abstract visualizations were shown to be sufficient to enhance learning of other complex motor tasks with a small number of relevant variables. An oscilloscope could effectively give feedback about the force onset on a skiing simulator (Wulf et al., 1999; Wulf et al., 1998). Acceleration-time plots taught runners how to reduce excessive impact forces (Crowell & Davis, 2011). Force-time plots that indicated the deviation from the target force zone (colored or shaded band) were beneficial in teaching skills in manual therapy (Lee et al., 1990; Snodgrass et al., 2010). In a balancing task on a stabilometer, a bar representing the platform orientation was displayed in relation to horizontal reference bars in order to give effective concurrent feedback about the deviation from the equilibrium (Shea & Wulf, 1999). In a ball-throwing task, concurrently displayed bars representing actual and target joint angles were also shown to be effective in enhancing complex motor learning (Schack et al., 2008). Concurrent visual feedback about relevant rowing variables—for example, in the form of force-angle plots—could enhance rowing performance (Spinks & Smith, 1994), could help to maintain consistency of good rowing performance (Fothergill, 2010), and was valued by rowers and coaches (Smith & Loschner, 2002). To teach dancing skills, superposition of four limb end-effector positions in the form of point lights on a prerecorded video of expert movements was beneficial. Interestingly, giving feedback about end-effector positions only was more effective than feedback about 12 joint centers (or video instruction only) (Eaves et al., 2011). The study indicates that it is important to determine relevant key features of the task (as also was suggested by Huegel, Celik, Israr, & O’Malley, 2009; Todorov et al., 1997) and to provide feedback only about these key features, in order not to overwhelm the learner with irrelevant information.
In realistic virtual reality systems, abstract augmented concurrent feedback has been reported to be detrimental to manual dexterity training of dental students (Wierinck, Puttemans, Swinnen, & van Steenberghe, 2005) but has been rated to be beneficial by medical educators and students for learning clinical breast exams (Kotranza, Lind, Pugh, & Lok, 2009) and has been shown to be effective for robotic laparoscopy training (Judkins, Oleynikov, & Stergiou, 2006).

Abstract visualizations seem to be efficient, since they can represent a key feature of a movement in an unambiguous way. Nevertheless, different abstract visualizations should be contrasted prior to their application, in order to prevent any misinterpretation of the design. Moreover, common metaphors should be respected, such as red standing for “wrong” and green for “correct.” However, abstract feedback designs have two main disadvantages. First, in the long run, they might become boring and thus hinder the learning process through demotivation. Second, feedback about complex multidimensional movements in 3-D space can hardly be abstracted but must be displayed in a more natural way.

Natural visualizations

Natural visualizations incorporate superposition or side-by-side 3-D perspectives of a reference movement and the corresponding movement of the user. This section reviews the effectiveness of such visualizations, starting with superposition. For passing virtual sliding doors at the correct moment while walking on a treadmill, a superposition of ghost doors on the “real” doors (natural visualization) was less effective than gauges (abstract visualization). The authors assumed that the inefficiency of the superposition in giving feedback about errors in walking speed resulted from interference with the optic flow information (Huet et al., 2009). It might have been more useful to superimpose a virtual ghost avatar walking correctly on the subject’s avatar. Such virtual teacher approaches are very popular and also effective, as the subsequently discussed studies verify.

In a virtual table tennis environment, the superposition of the performer’s racket and the prerecorded expert’s racket allowed concurrent feedback about the multidimensional racket movement. The group practicing a table tennis shot in the virtual environment learned the target shot more quickly than did the group practicing with a real coach (Todorov et al., 1997). Displaying the movements of the body, limb, or end-effector of the learner simultaneously with those of a virtual trainer—that is, with the target movements—fosters learning by imitation. No mental rotations are needed, since the learner can place himself/herself virtually inside the teacher in the same coordinate frame (Holden & Dyar, 2002). Accordingly, a virtual teacher has effectively instructed a step-in-place task (Koritnik, Bajd, & Munih, 2008; Koritnik, Koenig, Bajd, Riener, & Munih, 2010), and others have successfully been applied in rehabilitation, focusing also on limb and end-effector movements (Duschau-Wicke, von Zitzewitz, Caprez, Lunenburger, & Riener, 2010; Holden & Dyar, 2002; Holden, Todorov, Callahan, & Bizzi, 1999). In contrast, whole-body superposition of learner and trainer did not enhance performance in the case of Tai Chi postures and gestures. Tai Chi was more efficiently learned when the virtual trainer was displayed beside or in front of the learner (Chua et al., 2003). In another study on Tai Chi, a similar visualization—that is, 3-D representations of the teacher and learner next to each other in a virtual reality scenario—was more effective than asking the learner to mimic the teacher’s movement presented on a video (Patel, Bailenson, Hack-Jung, Diankov, & Bajcsy, 2006). Thus, the efficiency of superposition may depend on the number of superimposed body parts. Too many superimposed parts may overwhelm the learner with too much information; as a consequence, he/she cannot focus on the most relevant ones (Eaves et al., 2011).
Focusing on end-effector movements, besides forcing an external focus of attention, might also be beneficial, since end-effector kinematics are believed to play a key role in motor control (Todorov et al., 1997). However, the same goal or end-effector movement can be achieved by different solutions (Todorov, 2004). Thus, in some cases, not only the end-effector, but also the whole-body or limb movement must be optimized to prevent learning of uneconomic and compensatory movements. These assumptions have to be clarified in future studies in order to warrant superposition designs that optimally enhance motor learning.

It might also be important to figure out in which perspective a virtual teacher is most effective in enhancing motor learning. Recent results suggest an advantage of third- over first-person perspectives in a ball-catching task (Salamin, Tadi, Blanke, Vexo, & Thalmann, 2010). Indeed, it was found that first-person views involve other neural processes than do third-person perspectives (David et al., 2006; Kockler et al., 2010; Vogeley et al., 2004). However, virtual teachers for complex motor tasks have rarely been developed, and their design should be systematically evaluated.

In rehabilitation, provision of abstract visual feedback in a realistic, natural virtual environment has also been realized: Instead of superimposing a virtual reference arm on the patient’s virtual arm in a natural virtual environment, augmented feedback was given by a semitransparent visualization of a cone and line displaying the deviation of the patient’s virtual arm from the target trajectory. The patients performed smoother hand movements while being provided with this augmented visual (in combination with auditory) feedback (Huang et al., 2005). The same research group presented an aesthetic feedback design, again for a reaching and grasping task. By exploring the virtual environment through arm movements, the user could recognize rules embedded in the audiovisual feedback—an approach the authors refer to as a “semantic of action.” An image was assembled or disassembled into particles in the direction of the deviation from the target position, and supination of the forearm was mapped onto the rotation of the image. Simultaneously, audio features were mapped onto distance, velocity, synchrony, and shoulder extension. The audiovisual feedback assisted subjects in reaching the movement goal (Chen et al., 2006; Wallis et al., 2007). Such an aesthetic feedback approach may motivate the learner to train longer than with a simple and abstract visualization. However, as in many studies involving visual feedback, a comparison with other feedback designs has not been reported, and retention was not tested; thus, general conclusions on how visualizations should be designed cannot be drawn.

In general, concurrent visual feedback designs that guide the learner toward the optimal movement without causing a dependency on the feedback are desirable. In other words, visual feedback designs are effective when they enable parallel processing of the visual and kinesthetic information relevant for movement generation (Wei & Körding, 2009). Thereby, the visual feedback calibrates kinesthetic information (Robin et al., 2005). Visual concurrent feedback may also emphasize the linkage of landmarks or key features of the motor task to kinesthetic information, which may facilitate recall in no-feedback conditions. Moreover, to reduce dependency on concurrent feedback, the addition of very frequent terminal feedback seems to be helpful (Blandin et al., 2008), which might also be realized by displaying lasting visual traces. These approaches should be evaluated in future studies.

Conclusion on concurrent visual feedback

In simple tasks, the guidance hypothesis as well as the specificity-of-learning hypothesis were confirmed, since concurrent visual feedback increased performance during acquisition, but not during retention tests (Blandin et al., 2008; Proteau, 2005; Proteau & Isabelle, 2002; Ranganathan & Newell, 2009; Robin et al., 2005; Schmidt & Wulf, 1997; Van der Linden et al., 1993). Contradicting the guidance hypothesis, most studies on more complex tasks showed positive effects of concurrent visual feedback (Lee et al., 1990; Shea & Wulf, 1999; Snodgrass et al., 2010; Swinnen et al., 1997; Todorov et al., 1997; Wishart et al., 2002; Wulf et al., 1999; Wulf et al., 1998). Concurrent visual feedback was suggested to help the learner to access the specific information of the complex task quickly (Camachon, Jacobs, Huet, Buckers, & Montagne, 2007; Huet et al., 2009). Especially in very early learning phases, learners seem to benefit from concurrent feedback (Todorov et al., 1997) because it seems to decrease cognitive load (Wulf & Shea, 2002). If the design—that is, the visualization of the concurrent feedback—is inappropriate, positive effects are inhibited even though concurrent visual feedback would actually be effective for learning the task. A next step should be to investigate in parallel the effectiveness of abstract visualizations, natural visualizations such as virtual teachers, weak visualizations that do not block processing of kinesthetic information, and aesthetic, motivating approaches to enhance motor learning. Kinematic and kinetic variables in different tasks should be systematically evaluated prior to a comparison with other feedback strategies.

Auditory feedback: Three different approaches to enhancing motor learning

Auditory perception contributes to elite performance in sports. For instance, top performance in table tennis requires auditory information about the ball bouncing on the table and racket (Hermann, Honer, & Ritter, 2006). Returning a tennis service successfully also benefits from auditory information (Takeuchi, 1993). Although auditory information has an impact on performance, most sports are cognitively mastered in response to visually perceived information. As a consequence, providing additional visual augmented concurrent feedback may overload the capacities of visual perception and cognitive processing. To minimize perceptual overload, concurrent feedback could also be displayed acoustically (or haptically). Auditory feedback may not only reallocate perceptual and cognitive workload, but also reduce distraction, since, unlike visual perception, auditory perception requires neither a specific athlete orientation nor a focus of attention (Eldridge, 2006; Grond, Hermann, Verfaille, & Wanderley, 2010; Secoli, Milot, Rosati, & Reinkensmeyer, 2011). However, the impact of auditory feedback depends considerably on the intuitive and correct interpretation of the applied mapping functions and metaphors. These functions and metaphors have to be carefully selected, since listening to auditory displays is less common than viewing visual displays.

In the first part of this section, auditory feedback that has been applied in motor learning is reviewed. First, the section focuses on studies using an auditory alarm; that is, a sound without any kind of modulation is played as soon as, and as long as, the related movement variable exceeds a predefined threshold. Thereafter, studies are discussed in which movement variables are represented by sonification; that is, their magnitudes and changes over time are represented by nonspeech audio. Finally, studies are reviewed in which the movement error—that is, the deviation between the actual performance and the target performance—is sonified. Existing ideas for designing valuable auditory feedback are reviewed in the second part of this section. At the end of the section, an outlook on possible future research directions in the field of concurrent auditory feedback is given.

Auditory alarms

In recent years, auditory alarms have found their way into rehabilitation. For instance, to regain a physiological gait pattern, an auditory alarm was presented to the patient when muscle activity of the affected leg was lower than that of the healthy leg (Petrofsky, 2001), or pressure sensors were placed under the foot sole, providing an auditory alarm when unphysiological loading was present (Batavia, Gianutsos, Vaccaro, & Gold, 2001). When multiple areas under one foot or under both feet were of interest, more than one pressure sensor was integrated in the feedback, and accordingly, multiple alarms differing in frequency were presented to the user (Fernery et al., 2004). On the basis of the first pilot studies, such a kind of auditory feedback about unphysiological loading is said to have the potential to immediately alter gait patterns (Batavia et al., 2001; Fernery, Moretto, Hespel, Thevenon, & Lensel, 2004). This potential has recently been confirmed by a study showing that subjects immediately altered their gait pattern in response to an alarm occurring when predefined angular knee joint positions or accelerations were exceeded (Riskowski, Mikesky, Bahamonde, & Burr, 2009). This immediate modulation of a gait pattern by an auditory alarm was later confirmed for both knee flexion (Helmer et al., 2011) and vertical displacement of the center of mass during treadmill running (Eriksson & Bresin, 2010). However, learning was not assessed in these studies.
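The alarm logic common to these studies—a sound played as soon as, and as long as, the monitored variable exceeds a predefined threshold—reduces to a per-sample comparison. The following sketch is purely illustrative; the function name, the joint-angle stream, and the threshold value are our assumptions, not taken from any cited study.

```python
def alarm_on(sample, threshold):
    """Discrete auditory alarm: the tone is on as soon as, and as long as,
    the monitored variable exceeds the threshold (illustrative sketch)."""
    return sample > threshold

# Hypothetical joint-angle stream (deg) with an illustrative 20-deg threshold.
joint_angle = [12.0, 18.5, 23.1, 27.4, 19.0]
tone = [alarm_on(angle, 20.0) for angle in joint_angle]
print(tone)  # [False, False, True, True, False]
```

The binary output makes the direction of the required correction obvious, but, as discussed below, it carries no information about the magnitude of the error.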

Alterations of a movement due to the presentation of an auditory alarm are still present when no feedback is available—for example, during catch trials, or even in transfer tests: After training of a barre exercise in dance, subjects who had received auditory feedback about excessive foot pronation significantly reduced the time spent in excessive foot pronation on catch trials, as compared with subjects receiving no feedback (Clarkson, Robert, Watkins, & Foley, 1986). Furthermore, after 2 weeks of training of circles on a pommel horse, an experimental group, which had received an alarm indicating hip flexion greater than 20°, could significantly improve their hip extension, in contrast to a control group, which had trained without feedback. This improvement was still present after 2 further weeks of training without feedback (Baudry, Leroy, Thouvarecq, & Chollet, 2006).

The training of professionals in shooting has also been assisted by auditory alarms that were provided when rifle movements or loading of the front leg exceeded a predefined threshold. The professionals could not improve their shooting performance (Underwood, 2009). It seems that in contrast to beginners, professionals may benefit more from specific information, such as angular displacements in a single joint, than from general information about the end-effector affected by multiple joints and represented by only one alarm.

In summary, an alarm is simple to interpret; athletes can immediately recognize in which direction they have to correct their movement and when the intended performance is attained. However, on the basis of such discrete feedback, athletes cannot recognize to what extent they have to correct their movement. Recognition of the extent requires a continuous representation of movement data values. Such representations are reviewed in the next section.

Sonification of movement variables

Data values can be used to change the parameters of a sound. This method is termed audification if the data values are directly transferred to sound. For instance, the frequency spectrum of electromyographic data is perceivable for humans and can therefore serve as a direct input for loudspeakers. The term sonification is used if variables are mapped to sound parameters by a function; for example, each change in force development results in a defined change of the amplitude or frequency.
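A minimal sketch of such a mapping function may clarify the definition. The linear map, the clamped input range, and the 220–880 Hz pitch range below are illustrative assumptions of our own, not a published design:

```python
def sonify(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Map a movement variable linearly onto tone frequency (Hz): each
    change in the variable yields a defined change in pitch."""
    v = min(max(value, v_min), v_max)      # clamp to the mapped range
    share = (v - v_min) / (v_max - v_min)  # normalized position in the range
    return f_min + share * (f_max - f_min)

# e.g., a hand-paddle pressure of 0..100 N mapped onto 220..880 Hz:
print(sonify(50.0, 0.0, 100.0))  # 550.0
```

In audification, by contrast, the data stream itself (e.g., an electromyographic signal) is fed to the loudspeaker without such an intermediate mapping function.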

Sonification is generally applied to explore large amounts of data—for example, to quickly detect irregularities or specific patterns (Kramer, 1994; Walker & Nees, 2011). In this general sense, sonification has also been applied in human movement science: Age-dependent characteristics in electromyographic data have been perceived after assigning a frequency to each surface electrode placed on the quadriceps femoris and after mapping monitored activity to the amplitude (Pauletto & Hunt, 2006).

Not only the analysis of movement data can be facilitated by sonification, but also motor learning itself—in particular, learning of the time-dependent dynamic coordination of the movement (Effenberg & Mechling, 1998). The auditory demonstration of the targeted sequential timing prior to a keypressing task on a keyboard effectively enhanced learning of the relative timing pattern, which has been consistently shown in a few studies (e.g., Han & Shea, 2008; Lai, Shea, & Little, 2000; Shea, Wulf, Park, & Gaunt, 2001). In contrast to these fundamental studies on motor learning, the effect of concurrently presented sonified movement data on sport performance has only rarely been investigated. Chollet, Micallef, and Rabischong (1988) reported on crawl swimmers who could immediately—that is, within the same training session—improve the stability of their velocity on the basis of sonified hydrodynamic pressure at hand paddles. Later on, Chollet, Madani, and Micallef (1992) also mapped the velocity of the waist during crawl to the frequency of a tone, and the training period was extended to 4 days. In a test session on the fifth day, athletes reduced their time to crawl 100 m more than the control group when they received general information first (i.e., about the waist velocity) and general and specific information later (i.e., about waist velocity and pressure at hand paddles). The improved performance associated with an improved coordination of the movement was still present 10 days later—that is, on Day 15. Further experimental groups, which received information in other combinations, showed better performance than the control group only on Day 15. Chollet et al. (1992) concluded that assimilation of information was facilitated when general information was provided prior to other types of information. Since other random effects may have influenced the results during the 10 unmonitored days without training, this conclusion has to be proven in further studies.

It may also be speculated that the sonified movement motivated subjects in the experimental groups to enhance their effort during the training period. Accordingly, the positive effect of triggering the onsets of elbow and wrist extension by percussions found in a netball task (Helmer, Farrow, Lucas, Higgerson, & Blanchonette, 2010) could be explained: The subjects receiving the auditory feedback were motivated to repeat the task more often. Motivational effects, as well as emotive effects, are considered in current designs of movement sonifications (Schaffert, Barrass, & Effenberg, 2009). Accordingly, the results of Chollet et al. (1992) may also be explained by improved physiological abilities that would not have been observable immediately after the training period, but a few days later. Although this explanation is only speculative, it still highlights an important issue for future studies on the effect of sonified movements or augmented concurrent feedback: An experimental design should be set up so that it distinguishes between enhanced technical skills and enhanced motivation resulting in enhanced physiological abilities.

Movement variables have also been sonified in sports other than crawl—for example, breast stroke (arm, leg, and waist movement; Effenberg, 2000b), karate (timing of wrist and ankle movement; Yamamoto, Shiraki, Takahata, Sakane, & Takebayashi, 2004), ski carving (lateral displacement of the ski; Kirby, 2009), rowing (boat acceleration; Schaffert, Mattes, & Effenberg, 2009), and the German wheel (its three-dimensional orientation; Hummel, Hermann, Frauenberger, & Stockman, 2010). Athletes appreciated the developed sonifications and mostly stated that the sonification may have helped them to improve their skills. However, corresponding motor learning studies have not been published so far. Only a movement task on the German wheel was evaluated with novices and experts: Experts could benefit from the sonified movement, but novices could not (Hummel et al., 2010). Although this statement is based on a limited statistical analysis, it outlines one major limitation of sonified movement variables for novices: If they have no idea of the correct movement sonification, they will not benefit from it.

In general, current literature lacks both a comprehensive identification of movement variables that can be sonified in order to facilitate motor learning and a systematic evaluation of the design of movement data sonification. Movement sonification is useful for facilitating motor learning only if it can be linked to a relatively precise movement representation. To achieve this also at a beginner’s stage, a visual or haptic display could guide the learner through the optimal movement, whereby the optimal sonification could be internalized and recalled in no-feedback conditions. Motor learning may be facilitated even more if the optimal movement is displayed simultaneously with the actual movement or if the current deviation between the actual movement and optimal movement is acoustically presented—that is, the movement error is sonified. Studies on this kind of concurrent auditory feedback in human movement science are reviewed in the next section.

Sonification of movement error

Effects of sports training facilitated by auditory feedback, which represents the actual deviation with respect to a reference instead of making the athlete just aware of an error by an alarm, have rarely been investigated. In speed skating, a case study reported that during training, the athlete could benefit from the sonified deviation from a one-dimensional target ankle movement (a harsh sawtooth tone made the athlete aware of the wrong movement; its intensity was proportional to the deviation) (Godbout & Boyd, 2010). In shooting, a one-dimensional mapping has also been developed: The deviation of the actual aiming point with respect to the target was mapped to the frequency of a pure tone. The higher the frequency, the smaller was the deviation. A group of conscripts received this feedback in 50 % of 440 training shots, which were distributed over 11 sessions over 4 weeks. As compared with a group of conscripts that saw only the shooting score (knowledge of results), the auditory feedback group improved the shooting score already during the training sessions. Interestingly, the higher score was present in shots both with and without auditory feedback. Besides this immediate benefit of auditory feedback, a significantly higher improvement was also observed in retention tests performed 2, 10, and 40 days after the training period. Subjects clearly benefited from the auditory feedback and not from knowledge of results, since, in comparison with subjects without any training, improvements based on knowledge of results were no longer present after the first retention test (Konttinen, Mononen, Viitasalo, & Mets, 2004; Mononen, 2007).

Multidimensional error sonification has been reported for a rowing-type movement (Sigrist, Schellenberg, et al., 2011). Subjects could easily interpret and immediately use auditory feedback on the deviation of the oar position. In this study, deviations in the horizontal plane were mapped to stereo balance, deviations in the vertical plane were mapped to pitch, and deviations about the oar’s longitudinal axis were mapped to timbre. When volume was additionally mapped to the total deviation, almost all subjects were able to follow the target movement as accurately as with an abstract visual feedback providing a similar amount of information on the error. Although this study highlights the immediate reduction of movement errors enabled by a rather unfamiliar auditory display (familiarization time was only about 8 min, which is quite short, as compared with our common familiarity with visual displays), the effectiveness of the auditory display has to be proven in a study on motor learning.
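Such a multidimensional mapping can be sketched in code. The sketch below only follows the spirit of the reported design (horizontal deviation to stereo balance, vertical deviation to pitch, roll deviation to timbre, total deviation to volume); all constants, ranges, and the linear mapping shapes are our own illustrative assumptions, not the published parameters.

```python
import math

def sonify_error(dx, dy, droll, max_dev=1.0):
    """Multidimensional error sonification sketch: horizontal deviation ->
    stereo balance, vertical deviation -> pitch, roll deviation -> timbre
    (here a simple brightness index), total deviation -> volume.
    All mapping constants are illustrative assumptions."""
    def clamp(v):
        return max(-max_dev, min(max_dev, v))

    balance = clamp(dx) / max_dev                    # -1 (left) .. +1 (right)
    pitch_hz = 440.0 * 2.0 ** (clamp(dy) / max_dev)  # one octave up/down around 440 Hz
    brightness = 0.5 + 0.5 * abs(clamp(droll)) / max_dev
    total = math.sqrt(dx ** 2 + dy ** 2 + droll ** 2)
    volume = min(1.0, total / max_dev)               # silent when on target
    return balance, pitch_hz, brightness, volume

# On target, the display is silent:
print(sonify_error(0.0, 0.0, 0.0))  # (0.0, 440.0, 0.5, 0.0)
```

Mapping volume to the total deviation gives the learner an unambiguous "on target" cue (silence), while the three independent sound dimensions indicate in which direction, and roughly by how much, to correct.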

In general, it becomes evident that an auditory display has to be designed appropriately in order to reduce the time needed for familiarization: Unfamiliar displays require a certain period of time before athletes can benefit from them, as has been seen in other studies (Baudry et al., 2006; Wulf et al., 1999). Interestingly, related motor learning theories such as the guidance and specificity-of-learning hypotheses have not been examined with auditory feedback, except for auditory demonstration of the targeted rhythm prior to the task itself (e.g., Han & Shea, 2008; Shea et al., 2001). The question remains whether augmented auditory feedback per se induces a dependency on this afferent information, as has been shown for visual feedback, a dependency that is detrimental to performance when the feedback is withdrawn. It is also unclear whether auditory feedback per se is as dominant as visual feedback or whether, with an appropriate design, it can provoke a better linkage to kinaesthetic information. Existing ideas for designing valuable auditory feedback in general are discussed in the next section.

Design aspects of auditory feedback

The high number of sound dimensions such as loudness, pitch, or timbre, combined with auditory display attributes such as timing and localization, enables auditory feedback of high-dimensional data (Hermann & Hunt, 2005). This possibility of presenting movement data through a variety of sound dimensions requires a systematic approach to designing an auditory display (Effenberg, 2000a). However, auditory displays have often not been systematically evaluated (Dubus, 2012) but, rather, selected in an ad hoc manner (Bonebright, Miner, Goldsmith, & Caudell, 2005). Arbitrarily designed displays may constrain motor learning through reduced motivation, distraction, or misinterpretation. In this section, after presenting a starting point for auditory displays in general, applicable mappings of one movement variable to timbre and pitch are presented, followed by suggestions for mappings of more than one movement variable. Thereafter, the impact of the chosen polarity and the potential of music as a carrier signal are elaborated. Finally, the need for a systematic evaluation of the effectiveness of auditory displays is pointed out.

A good starting point for developing an auditory display facilitating motor learning is given by design principles of auditory graphs, which have recently been summarized by Flowers (2005): Perception of data profile changes is facilitated by mapping changes in numeric values to pitch height. Time information is more efficiently provided by rhythmic patterning of a pitch-mapped stream instead of by a stream of clicks or percussion instruments (Smith & Walker, 2005). Key events should be presented by volume changes. Studies on data sonification have revealed that changes in numerical values are preferably not mapped to loudness changes, in order to minimize concerns about interactions between pitch and loudness (Neuhoff, Kramer, & Wayand, 2002). However, studies on error sonification have successfully mapped the total amount of deviation with respect to a target movement to loudness (Drobny et al., 2009; Eriksson et al., 2011; Godbout & Boyd, 2010; Kleimann-Weiner & Berger, 2006; Vogt, Pirró, Kobenz, Höldrich, & Eckel, 2010).

To minimize perceptual grouping, separate continuous data streams should be mapped to different timbres, rather than to different rhythms (Dürrer, 2001). To choose distinguishable timbres, the equally spaced timbre-circle provided by Barrass (2005), based on the work of Grey (1975), could be applied: Opposite timbres—for example, soprano sax and flute—should be easily discriminated.

Pitch height may be an intuitive choice for displaying vertically aligned data—for example, vertical movement position. Accordingly, different pitch heights were used to display obstacle clearance (and subjects could benefit from it) (Erni & Dietz, 2001; Wellner, Schaufelberger, von Zitzewitz, & Riener, 2008). Furthermore, velocity and acceleration have also been successfully mapped to pitch height: Swimmers improved their performance after training assisted by sonified waist velocity (Chollet et al., 1992), and rowers confirmed that sonified boat acceleration represents the characteristic phases of boat motion (Schaffert, Mattes, & Effenberg, 2009).
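As an illustration of such a pitch-height mapping, a vertical position can be mapped to frequency on a logarithmic scale, so that equal position steps sound like equal musical intervals (a minimal Python sketch; the position range and frequency endpoints are assumed for illustration):

```python
def position_to_pitch(z, z_min=0.0, z_max=2.0, f_low=220.0, f_high=880.0):
    """Map a vertical position z (e.g., in meters) to pitch height.
    Geometric interpolation keeps perceived pitch steps uniform."""
    t = (z - z_min) / (z_max - z_min)     # normalize position to [0, 1]
    t = max(0.0, min(1.0, t))             # clamp values outside the range
    return f_low * (f_high / f_low) ** t  # log-frequency interpolation
```

The logarithmic (rather than linear) frequency scale reflects that pitch perception is roughly proportional to log frequency.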

While pitch height has also been mapped to a single movement variable elsewhere (Ohta, Umegaki, Murofushi, Komine, & Sakurai, 2009), an additional movement variable has often been mapped to loudness (Fox & Carlile, 2005; Ghez, Rikakis, DuBois, & Cook, 2000). However, the interpretability and practicability of such sonifications of multiple movement variables have rarely been evaluated. An exception with respect to two simultaneously sonified movement variables is given by Effenberg (2000b), who reported a high acceptance of a sonified breaststroke in which longitudinal wrist motion and velocity were mapped to pitch and loudness, respectively.

One of the most complex, real-time sonifications reported so far was applied to arm movement. Harmonics were used to represent arm position, rhythm represented the smoothness of the movement, and auditory alarms were used to indicate a successful movement, as well as a compensatory movement (the severity of the compensation was mapped to loudness) (Chen et al., 2006; Huang et al., 2005). A pilot study revealed that stroke patients were able to improve the movement of their impaired arm—for example, in terms of smoothness (Wallis et al., 2007). However, the impact of the additionally available visual display (described in the visual section above) remains unclear, and a comparison with differently designed auditory feedbacks to evaluate its effectiveness has not been reported by this research group either.

A valuable auditory feedback for motor learning may also be based on the parallel display of the actual movement and the target movement. Different timbres enable parallel display of different data streams, as seen for auditory graphs (Brown, Brewster, Ramloll, Burton, & Riedel, 2003) and for different frequency bands of an electroencephalogram (Hinterberger & Baier, 2005). A further parallel display method presents actual movement data to one ear and target movement data to the other. This method was applied in a rowing task without explicit report on its effectiveness (Gauthier, 1985). However, displaying different data streams to each ear might be limited: After two different sequences of words had been presented, one to each ear, subjects could not report the words heard in the nonattended ear (Bregman, 1994). Displaying data sequentially may overcome this limitation. However, in a sequential display, the duration of the presented movement data is a critical issue, due to the limits of working memory and of the auditory sensory memory (Flowers, 2005).

Instead of sonifying the actual movement and the target movement in parallel, the difference between them—that is, the error—can also be sonified. Besides pitch height and loudness (Kleimann-Weiner & Berger, 2006; Konttinen et al., 2004), further sound dimensions may then work intuitively, such as stereo balance for deviations in the horizontal plane (applied by Sigrist, Schellenberg, et al., 2011), rhythm for deviations in time, or reverb for the distance to the target (Sigrist, Schellenberg, et al., 2011).

Other design aspects must also be determined with caution—for example, the polarity. Should the feedback indicate where you are relative to the target movement (i.e., provide state-indicative information), or should it indicate how to correct the movement (i.e., provide direction-indicative information)? One may speculate that the direction-indicative polarity facilitates a corrective movement more than the state-indicative polarity does, since the required movement direction is directly presented. This hypothesis is supported by a recent study on immediate effects of multidimensional feedback on a rowing-type movement: When allowed to choose, most subjects chose the state-indicative polarity, which seems more intuitive at first glance. However, movement errors were reduced more when the direction-indicative polarity was chosen or prescribed (Sigrist, Schellenberg, et al., 2011).
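For a one-dimensional movement variable, the two polarities are simply sign-inverted versions of each other, which a minimal sketch makes explicit (illustrative Python; the sign convention is an assumption, not taken from the cited study):

```python
def feedback_signal(actual, target, polarity="direction"):
    """Return a signed cue value for a 1-D movement variable.

    state-indicative:     sign follows where the limb IS relative to the target
    direction-indicative: sign follows where the limb should GO
    """
    error = actual - target
    return error if polarity == "state" else -error
```

For example, if the arm is above the target, the state-indicative cue is positive ("you are high"), whereas the direction-indicative cue is negative ("move down").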

Not only the sound dimensions, but also the carrier signal itself can be the subject of auditory feedback design. Most of the reported auditory displays were based on a steady signal, which may become annoying after some time. Music may provide feedback in a more pleasant way, and its modulation has already been successfully applied in a movement synchronization task (Varni et al., 2011). However, if multiple aspects of a movement are to be mapped, music as a carrier signal is limited, since its features are hardly continuous; thus, multiple error sonification seems hardly possible. An annoying steady signal can also be avoided by a carrier signal based on semantic sounds—for example, representing a high arm position by birds whistling and a low arm position by a frog croaking (Vogt, Pirró, Kobenz, Höldrich, & Eckel, 2009). Such a carrier signal may work for auditory alarms, but neither for movement sonification nor for error sonification, since those sonifications require a continuous mapping to facilitate continuous movement corrections.

In general, several auditory feedback designs have been presented for motor learning, but their interpretability and effectiveness have rarely been evaluated. To establish a general guideline for auditory displays in motor learning, differently designed auditory feedbacks should be compared with each other. Among other things, such comparisons may reveal whether abstract sounds facilitate a movement more than natural sounds do (Rath & Rohs, 2006). Additionally, differently designed auditory feedbacks must be evaluated in different movement tasks, since the efficiency of an auditory display is task dependent (Flowers, 2005). The interpretability and effectiveness of the display depend further on the athlete, particularly with respect to age, gender, skill level, and musical abilities (Effenberg & Mechling, 1999). Rhythm and pitch height discrimination depend not on age (as long as young children are not compared with the elderly) but on gender, since females have shown better discrimination abilities (Mauney & Walker, 2007). Higher musical abilities afford better pitch discrimination (Neuhoff & Wayand, 2002), but in principle, pitch changes of 10 % should be noticeable by almost every healthy learner.

Conclusion on concurrent auditory feedback

Concurrent auditory feedback has been successfully applied in motor learning. In comparison with visual feedback, auditory feedback may hinder the processing of other sensory afferences to a lesser extent, and thereby it could still be used to calibrate the motor program, much as sparse visual information does. The success of auditory feedback may also originate from the fact that most studies on auditory alarms and movement sonification have investigated fast repetitive tasks. Such tasks limit online movement corrections—that is, corrections of irrelevant errors caused by sensory–motor noise; instead, the auditory information supports feedforward control.

However, the literature lacks a systematic evaluation of the interpretability and effectiveness of different feedback designs, not only in terms of the mapped movement variables, but also in terms of the abilities of the individual learner. Recent technological developments such as real-time monitoring of kinematic data have already found their way into the development of auditory displays (Chen et al., 2006; Rauter et al., 2009; Sigrist, Schellenberg, et al., 2011; Vogt, 2008). These technologies will contribute to moving concurrent auditory feedback in motor learning beyond its current initial stage.

Haptic feedback: Many concepts, few proofs

Especially in human newborns and infants, haptic interaction has a considerable impact on development and on the motor learning process (Rochat & Senders, 1991). Infants up to 5 months of age perceive and understand their physical world through their hands, without visual control (Sann & Streri, 2007). Early in life, the haptic sense lays the foundation for sensory integration—that is, the organization of sensory information for use in daily life (Ayres, 2005). This is because the haptic sense is the only one that enables us to interact with the world around us and, at the same time, to perceive these interactions (Minogue & Jones, 2006). This unique characteristic is called the bidirectional property of the haptic sense, and it provides the basis for further enhancing motor learning through haptic interactions (Hale & Stanney, 2004). Thus, it seems natural to investigate the effectiveness of haptic interactions in motor learning: Which haptic interactions enhance learning best for which type of motor task (e.g., simple vs. complex or cyclic vs. acyclic) and for whom (e.g., beginner, expert; or child, adult, elderly)?
These questions are addressed in different fields of research: haptic rendering (Höver, Kósa, Székely, & Harders, 2009; McNeely, Puterbaugh, & Troy, 2005, 2006; Salisbury & Srinivasan, 1997), robot-assisted training and rehabilitation (Duschau-Wicke et al., 2010; Emken, Benitez, & Reinkensmeyer, 2007; Lambercy et al., 2007; Marchal-Crespo & Reinkensmeyer, 2009; Metzger, Lambercy, & Gassert, 2012; Nef, Mihelj, & Riener, 2007; Prange, Jannink, Groothuis-Oudshoorn, Hermens, & IJzerman, 2006; Reinkensmeyer, Emken, & Cramer, 2004), motor learning through haptic augmented feedback (Feygin et al., 2002; Flash & Hogan, 1985; Marchal-Crespo & Reinkensmeyer, 2008b; Reinkensmeyer et al., 2004; Reinkensmeyer & Patton, 2009; Wolpert, Ghahramani, & Flanagan, 2001), and human motor control (Feygin et al., 2002; Flash & Hogan, 1985; Haruno, Wolpert, & Kawato, 2001; Todorov, 2004; Viviani & Flash, 1995).

Among other topics, research on human motor control focuses on motor adaptation within a changing environment (e.g., Shadmehr & Mussa-Ivaldi, 1994), age-related learning (e.g., Takahashi et al., 2003), generalization and transfer of skills from one movement to another (e.g., Conditt, Gandolfo, & Mussa-Ivaldi, 1997; Oakley & O’Modhrain, 2005), internal versus external focus during task execution (e.g., Criscimagna-Hemminger, Donchin, Gazzaniga, & Shadmehr, 2003; Shadmehr & Moussavi, 2000), and internal movement representation (e.g., Haruno et al., 2001; Todorov, 2004). To investigate these topics, researchers have developed devices that provide haptic interaction and assess a subject’s performance simultaneously. To study human multijoint limb movement, the first haptic device with two degrees of freedom (DOF) was built in the 1980s by Mussa-Ivaldi, Hogan, and Bizzi (1985). Since then, many haptic human–machine interfaces have been developed to haptically support and to investigate human motor learning. Several haptic interfaces are now even commercially available, including desktop systems, ground- and wall-mounted systems, portable systems, and tactile systems. Examples of desktop systems are the PHANTOM® Desktop™ Haptic Device (www.sensable.com), the 3-DOF omega haptic device and 6-DOF delta haptic device (www.forcedimension.com), Virtuose™ haptic devices (www.haption.com), the Novint Falcon (home.novint.com), and the Freedom 7S (www.mpb-technologies.ca). An example of a ground- and wall-mounted system is the HapticMaster (Van der Linde, Lammertse, Frederiksen, & Ruiter, 2002). An example of a portable system is the CyberGrasp, and examples of tactile systems are the CyberTouch (www.cyberglovesystems.com), STRess (laterotactile.com), and TouchSense® (www.immersion.com) devices.

In this section, haptic strategies are reviewed whose aim is to facilitate human motor learning on the basis of haptic augmented feedback. In particular, the potential of different control strategies for facilitating motor learning is discussed. Since the effectiveness of many control strategies has been tested solely in rehabilitation applications, this section also refers to studies that have been conducted with patients. Detailed design rules for haptic interfaces have already been published elsewhere (O’Malley & Gupta, 2008).

Position-control-based haptic guidance: Feasible for movement instruction?

To facilitate robot-assisted human motor learning, position control is the most restrictive haptic guidance control strategy in terms of position and time. Position control enforces a predefined reference movement of the robot, regardless of what the human user intends to do. Thus, from the robot’s point of view, the human represents only an external disturbance that has to be compensated for in order to decrease position errors. However, research on motor learning has shown that preventing humans from making errors can be detrimental: The process of successful motor learning was prolonged by a factor of about 15 when subjects were prevented from making errors (Scheidt, Reinkensmeyer, Conditt, Rymer, & Mussa-Ivaldi, 2000). Thus, making errors drives motor learning (Emken et al., 2007; Emken & Reinkensmeyer, 2005; Patton, Stoykov, Kovic, & Mussa-Ivaldi, 2006; Reisman, Wityk, Silver, & Bastian, 2007; Thoroughman & Shadmehr, 2000; van Beers, 2009). Nevertheless, position control may be useful for novices who do not know the desired movement at all or for less skilled or impaired subjects who are not physically able to perform a movement task. This hypothesis is supported by the challenge point theory, which states that novices or less skilled subjects may not improve if the task level is too challenging (Guadagnoli & Lee, 2004). Yet the potential of position control to facilitate motor learning has rarely been tested and should be evaluated. Especially in early learning of complex motor tasks, position control may help to acquire a first movement representation.
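What position control computes can be sketched as a stiff proportional-derivative (PD) tracking law, which makes clear why the user's own action is treated as a disturbance to be rejected (illustrative Python; the gains are assumptions, not values from the cited studies):

```python
def position_control_force(x, v, x_ref, v_ref, kp=800.0, kd=40.0):
    """Robot force for stiff tracking of a predefined reference.

    x, v:         current position and velocity of the robot end-effector
    x_ref, v_ref: reference position and velocity at this instant
    With high gains, any deviation the user produces is pushed back
    toward the reference, regardless of the user's intention.
    """
    return kp * (x_ref - x) + kd * (v_ref - v)
```

The force is zero only when the user exactly follows the reference; any error, relevant or not, is immediately corrected, which is precisely the property that may prevent error-driven learning.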

One of the few studies focusing on position control investigated training of 3-D trajectories with the PHANTOM® device (Feygin et al., 2002). Three different conditions were studied: (1) visual instruction, allowing the subject to watch the end of the robotic arm moving through the target motion; (2) position control guiding the subject, who grasped the end of the visually hidden robotic arm, through the target motion; and (3) a combination of both, enabling the subject to see the robotic arm while being guided through the target motion. Training with vision resulted in more accurate learning of the trajectory shape than did position control, whereas position control facilitated timing best—that is, the temporal aspects of the trajectory. The best results for shape and timing were obtained when vision was combined with position control. In another study focusing on somewhat complex, random movements (without any relation to functional movements), both visual and visuo-haptic training improved short-term retention of a novel path, whereas position control did not result in any significant improvement (Liu, Cramer, & Reinkensmeyer, 2006). In visuo-manual writing tasks, the impact of position control has also been contrasted with force control—that is, a mode in which the robot follows a predefined force profile instead of a predefined path (Bluteau, Coquillart, Payan, & Gentaz, 2008). Neither of the controllers facilitated learning of the shape aspects of the writing task. However, force control facilitated learning of kinematic aspects in terms of movement fluidity—that is, the number of velocity peaks and the mean velocity.

In general, the experiments applying position control did not reveal any significant advantage of position control over other feedback strategies. This could be due to the instructional character of position control, which might be useful only in an early learning phase. Position control is assumed to be ineffective for motor learning because the motor control loops in the central nervous system between proprioceptive input and motor output are not strengthened, even though these loops are especially important for improving dynamic tasks (Shadmehr & Mussa-Ivaldi, 1994). Furthermore, users behave passively and exert less energy (Israel, Campbell, Kahn, & Hornby, 2006) because they become slack when executing movements guided by a position controller (Reinkensmeyer, Akoner, Ferris, & Gordon, 2009). Still, position controllers have constituted a basis for more complex control algorithms; thus, some researchers have presented the design of a position controller only as a proof of concept, without further using it in studies related to motor learning (Kousidou, Tsagarakis, Smith, & Caldwell, 2007; Loureiro & Harwin, 2007; Nef et al., 2007; Rauter, von Zitzewitz, Duschau-Wicke, Vallery, & Riener, 2010).

However, using position control as a haptic augmented feedback strategy may still be of use in rehabilitation—for example, through mobilization with a high number of repetitions and to reestablish normative patterns of motor output (Marchal-Crespo & Reinkensmeyer, 2009). In general, position control has the potential to demonstrate an a priori unknown movement to the user, and thus, it has an instructional character. As an instruction, position control may represent the first impetus that starts the process of motor learning, since the need for support depends on the user’s skill level (Cesqui et al., 2008).

Haptic guidance beyond position control: Many suggestions, no systematic evaluation

Haptic guidance is an umbrella term for various kinds of haptic augmented feedback strategies that all share the feature of guiding the human subject through the ideal motion by means of a haptic interface (Feygin et al., 2002). Commonly, a correcting force pushes the user’s limb toward a physiological reference trajectory or posture; for example, the correcting force increases with the deviation from the reference trajectory. Haptic guidance may lead to (1) strengthening of muscles and connective tissue, provoking motor plasticity and preventing stiffening; (2) somatosensory stimulation inducing brain plasticity; (3) reinforcement of the movement pattern by movement repetitions; (4) prolonged training by relieving therapists from back-breaking work; and (5) increased motivation due to successful active task completion (Marchal-Crespo & Reinkensmeyer, 2009). Haptic guidance strategies that allow irrelevant errors caused by noise in the sensory–motor system may mediate not only the new motor program in an early learning phase, but also the improvement of correct error detection/correction mechanisms in a later phase.

Control strategies used for haptic guidance beyond position control are impedance control (Hogan, 1985), admittance control (Van der Linde et al., 2002), path control (Vallery, Duschau-Wicke, & Riener, 2009) (also known as virtual tunnel; Mihelj, Nef, & Riener, 2007), force fields (Vallery, Duschau-Wicke, & Riener, 2009), performance-based adaptive control (Krebs et al., 2003), or combinations of these strategies. In contrast to position control, these haptic guidance strategies provide the user a certain amount of freedom in terms of position and/or timing. This freedom may facilitate motor learning (Scheidt et al., 2000) and can vary from completely unconstrained movements in terms of position and timing errors (e.g., provided by zero impedance control; Blaya & Herr, 2004) to completely restricted movements as provided by position control. Intermediate steps between zero impedance control and position control range from constraints on position only (e.g., provided by path control; Khatib, 1986; Vallery, Guidali, Duschau-Wicke, & Riener, 2009), over predefined positions with soft time constraints (e.g., provided by path control with a flux; Marchal-Crespo, Rauter, Wyss, von Zitzewitz, & Riener, 2012) and predefined positions with hard time constraints (e.g., provided by path control with a moving time window; Duschau-Wicke et al., 2010), to completely restricted movements in space and time, as in position control. Moreover, haptic guidance can also be provided in temporally (Endo, Kawasaki, Kigaku, & Mouri, 2007; Powell & O’Malley, 2011) or spatially (Gillespie, O’Modhrain, Tang, Zaretzky, & Pham, 1998; Powell & O’Malley, 2011) separated cues (Powell & O’Malley, 2012). Spatial separation of haptic guidance and task-inherent forces could be of importance, since combining those forces might lead to learning the wrong task (Marchal-Crespo & Reinkensmeyer, 2008a).
Temporally separating haptic guidance and task forces, on the other hand, might be important for reducing the user’s reliance on the feedback (Li, Patoglu, & O’Malley, 2009; Marchal-Crespo & Reinkensmeyer, 2009). Reliance on haptic guidance might also be reduced by paradigms like assist-as-needed or fading feedback (Marchal-Crespo & Reinkensmeyer, 2009; Patoglu, Li, & O’Malley, 2009; Powell & O’Malley, 2012).
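The virtual-tunnel idea, a correcting force that is zero for small deviations (so irrelevant errors are allowed) and grows elastically once the limb leaves the tunnel, can be sketched as follows (illustrative Python; the tunnel radius and wall stiffness are assumed values, not those of the cited controllers):

```python
def tunnel_force(deviation, tunnel_radius=0.05, wall_stiffness=500.0):
    """Guidance force for a 1-D deviation from the reference path.

    Inside the tunnel (|deviation| <= radius) the user moves freely;
    outside, an elastic wall force grows with penetration depth and
    pushes the limb back toward the reference path.
    """
    depth = abs(deviation) - tunnel_radius
    if depth <= 0.0:
        return 0.0                  # free movement inside the tunnel
    direction = 1.0 if deviation > 0 else -1.0
    return -wall_stiffness * depth * direction  # push back toward the path
```

The dead zone is what distinguishes such guidance from position control: small, noise-driven errors are left uncorrected, while large errors are constrained.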

In general, control parameters are fixed prior to a study. However, control parameters like stiffness may also be a function of the pose of the haptic interface (Vallery, Duschau-Wicke, & Riener, 2009). Control parameters may further be a function of time or performance; for example, controller stiffness can be adapted in response to the user’s current performance and need, as has already been done (see Krebs et al., 2003; Marchal-Crespo & Reinkensmeyer, 2008b). A further possibility is to vary the control parameters over time and space as a function of the user’s performance (e.g., Rauter, von Zitzewitz, et al., 2010).
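Performance-based adaptation of controller stiffness can be sketched as a simple per-trial update with a forgetting factor, so that guidance fades on good trials and returns on poor ones (illustrative Python in the spirit of assist-as-needed control; the update rule and constants are assumptions, not the cited controllers):

```python
def update_stiffness(k, tracking_error, forget=0.9, gain=200.0, k_max=1000.0):
    """Per-trial stiffness update: decay toward zero (forgetting)
    plus growth proportional to the trial's tracking error."""
    k_new = forget * k + gain * tracking_error
    return max(0.0, min(k_max, k_new))  # keep stiffness in a safe range

# Perfect trials let the guidance fade; poor trials restore it.
k = 500.0
for _ in range(3):
    k = update_stiffness(k, tracking_error=0.0)  # decays to about 364.5
```

The forgetting factor is the key design choice: without it, the controller would only ever stiffen, and the user could rely on the guidance indefinitely.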

The control of most haptic guidance feedback strategies is based upon reference trajectories, except for zero impedance control and a few model-based control algorithms (Ronsse, Koopman, et al., 2011; Ronsse et al., 2010; Ronsse, Vitiello, et al., 2011). The reference trajectories are either recorded and postprocessed or artificially generated. Often, the artificially generated trajectories are smooth functions that are based on optimality principles approximating human motor control, such as minimal jerk, minimal torque, and minimal torque change (Todorov, 2004). In a recent control concept, the haptic controller does not rely on reference trajectories and supports arbitrary rhythmic movements of the user. To do so, the controller uses adaptive oscillators that synchronize with the sinusoidal high-level features of the user’s movements (Ronsse, Koopman et al., 2011; Ronsse et al., 2010; Ronsse, Vitiello, et al., 2011). Such strategies can be seen as a trade-off between leaving the user in full control of the movement features—that is, trajectory and movement frequency—while still providing a certain amount of assistance/guidance.
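The adaptive-oscillator idea, synchronization with a rhythmic input without any reference trajectory, can be sketched as a phase oscillator with Hebbian frequency adaptation (illustrative Python after Righetti-style adaptive frequency oscillators; the coupling constant and step size are assumptions):

```python
import math

def adaptive_oscillator_step(theta, omega, teach, dt=0.001, eps=2.0):
    """One Euler step of an adaptive frequency oscillator.

    theta: oscillator phase; omega: intrinsic frequency (rad/s)
    teach: current value of the rhythmic input (e.g., the user's movement)
    The same coupling term attracts the phase to the input and lets
    the intrinsic frequency drift toward the input's frequency.
    """
    coupling = -eps * teach * math.sin(theta)
    theta += dt * (omega + coupling)   # phase dynamics
    omega += dt * coupling             # frequency adaptation
    return theta, omega
```

Driving the oscillator with a rhythmic signal lets omega converge toward the signal's frequency, so a controller built on top of it can assist at the user's own tempo rather than enforce a prerecorded trajectory.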

Until now, haptic guidance has been successfully applied mainly in simple motor tasks, such as guided point-to-point movements or reaching tasks (e.g., Amirabdollahian, Loureiro, & Harwin, 2002; Flash & Gurevich, 1991; Goodbody & Wolpert, 1998; Loureiro, Amirabdollahian, Coote, Stokes, & Harwin, 2001; Loureiro, Amirabdollahian, Topping, Driessen, & Harwin, 2003; Sainburg & Ghez, 1995; Todorov, 2004), but rarely in complex tasks (except by Brickman et al., 1996; Chen & Agrawal, 2012; Chen, Ragonesi, Agrawal, & Galloway, 2010; Ho, Basdogan, Slater, Durlach, & Srinivasan, 1998; Lewiston, 2009; Marchal-Crespo, Furumasu, & Reinkensmeyer, 2010; Marchal-Crespo et al., 2012; Marchal-Crespo & Reinkensmeyer, 2008b; Oakley, Brewster, & Gray, 2001; Rauter, von Zitzewitz, et al., 2010). The scarcity of studies applying haptic guidance to complex tasks in healthy subjects might be due to the limited availability and affordability of haptic interfaces that support complex tasks in several DOFs, as well as to insufficient computing power to control the desired number of DOFs.

The few existing studies on complex motor tasks revealed that haptic guidance reduces the perceived workload, improves current performance, and enhances motor learning (Brickman et al., 1996). Furthermore, haptic feedback has been shown to enhance the user’s presence and cooperation (Ho et al., 1998; Oakley et al., 2001), which is of great importance for motor learning. For instance, haptic guidance enhanced motor learning in a steering task (Marchal-Crespo & Reinkensmeyer, 2008b) and motor adaptation in wheelchair driving (Marchal-Crespo et al., 2010). In the steering task, haptic guidance significantly outperformed the condition without haptic guidance in terms of driving accuracy. However, users seemed to become dependent on the haptic guidance, supporting the guidance hypothesis. In the wheelchair driving task, haptic guidance was gradually adapted to the users’ performance in terms of errors in look-ahead distance, direction, and direction change. The results in 22 healthy children and 1 impaired child showed significantly higher learning rates when steering was supported by gradually adapted haptic guidance than without haptic guidance. In a task that required developmentally delayed infants to find their way through a maze while sitting on a mobile robot, haptic guidance by force fields displayed through a joystick was also more effective than no haptic guidance (Chen et al., 2010). All infants who received haptic feedback learned to drive through the maze more quickly and more accurately than did the control group. These results were recently extended in a study showing improved driving skills in two subject groups trained in wheelchair driving with assist-as-needed and repelling-force paradigms, as compared with a control group that did not receive any haptic feedback (Chen & Agrawal, 2012).

Sequential finger-pressing movements in piano playing have also been guided by a haptic device employing magnetic forces. In several experiments and retention tests on auditory–motor short-term memory tasks, an advantage of haptic guidance over auditory feedback, and of audio-haptic feedback over haptic guidance, could be demonstrated (Lewiston, 2009). In future experiments, it would be interesting to see whether the effectiveness of haptic guidance in sequential tasks like piano playing also transfers to other tasks.

During the last few years, haptic guidance has also been applied in the field of robot-assisted therapy. However, the results are controversial: Some studies reported that patients profit more from conventional therapy than from robot-aided therapy (Hidler et al., 2009; Hornby et al., 2008), whereas others reported the opposite (Husemann, Muller, Krewer, Heller, & Koenig, 2007; Lo et al., 2010; Mayr et al., 2007). The reason why conventional therapy delivered by human physiotherapists can still be more effective than robot-aided therapy seems to be that a human therapist can adapt both the strategy and the amount of haptic guidance. This hypothesis seems further supported by the results of the steering and wheelchair experiments described above (Marchal-Crespo et al., 2010; Marchal-Crespo & Reinkensmeyer, 2008b). Thus, feedback adaptation might in general have a large impact on motor learning and should be investigated more intensively in the future.

Haptic guidance has not been applied for complex motor task learning in the field of sports, except for a rowing-type and a tennis-type task. In the rowing task, a conservative force field displayed a virtual tunnel with elastic walls to guide the movement of the oar. The results of a pilot study revealed that the force field was able to compliantly guide a naive subject through a desired trajectory (Rauter, von Zitzewitz, et al., 2010). In the tennis-type task, three different control concepts were implemented—position control, path control, and a guidance-as-needed controller—in order to investigate the influence of different haptic guidance concepts on task timing, although learning was not assessed (Marchal-Crespo et al., 2012).

Haptic guidance may help beginners to learn complex (sportive) movements in a safe and self-explanatory way (Powell & O’Malley, 2012). For experts, haptic guidance could be effective in teaching the detailed aspects of technique that can make the difference in professional sports. Since only a few studies, on quite diverse tasks, have applied haptic guidance to complex motor tasks, the general potential of haptic guidance to facilitate motor learning of complex movements remains an open question.

Vibrotactile feedback systems need to be evaluated in motor learning

In general, vibrotactile displays have mainly been developed to improve navigation and orientation in order to reduce the workload of the visual and auditory systems—for example, in steering an airplane (van Erp et al., 2006). Applications in sports are diverse: Vibrotactile displays have been applied to give information about tactics in soccer (van Erp et al., 2006), about the aerodynamic posture in skating and cycling (van Erp, Saturday, & Jansen, 2006), about the coordination of multiple dancers (van Erp et al., 2006), about dancing skills (Nakamura et al., 2005; Rosenthal et al., 2011), or about snowboarding skills (Spelmezan, Hilgers, & Borchers, 2009; Spelmezan, Jacobs, Hilgers, & Borchers, 2009). However, in the field of sports, the effects of vibrotactile feedback on motor learning have been evaluated only in rowing. In a pilot study, learning of an abstract oar trajectory on a rowing simulator was slightly more enhanced by visuo-vibrotactile feedback than by visual or vibrotactile feedback alone (Ruffaldi, Filippeschi, et al., 2009). In another study, with expert rowers, no difference between vibrotactile and visual feedback in enhancing the timing of knee and back extension was found. However, a ceiling effect was present and was assumed to impede learning in both conditions (van Erp et al., 2006).

The development of meaningful vibrotactile feedback, as well as of practical systems, is challenging (Bark et al., 2011; Rosenthal et al., 2011). Appropriate sites on the body for the vibrators must be found; for example, the vibration must be easy to perceive, and at the same time, the vibrators should not hinder movement. Interestingly, some sites might have an initial advantage in representing specific information, due to their naturalness; however, this advantage dissolves when users are given enough time to become familiar with less intuitive sites (Stepp & Matsuoka, 2011). Moreover, appropriate signal ranges and modulations (e.g., pulse or amplitude modulation) of the vibration must be evaluated (Stepp & Matsuoka, 2012). The signal should be clear but not irritating or harmful. And, as has already been discussed for auditory feedback design, the polarity of the signal must be considered (Bark et al., 2011; Spelmezan, Hilgers, & Borchers, 2009; Spelmezan, Jacobs, et al., 2009). The vibration can be meant either to pull the body part toward the signal (attractive, direction indicative) or to push it away (repulsive, state indicative). Intuitive responses to vibrations on different body parts revealed that polarity preferences are mainly individual (Spelmezan, Jacobs, et al., 2009). In a study on learning arm movements with visuo-vibrotactile feedback, neither the attractive nor the repulsive mode was preferred (Bark et al., 2011). In contrast, a preference for the attractive mode was found for the initiation of wrist rotations (Jansen, Oving, & Van Veen, 2004) and also for auditory feedback guiding oar movements in rowing (Sigrist, Schellenberg, et al., 2011). The preference for the repulsive or the attractive mode might also depend on the vibration properties, since unpleasant vibrations invite avoidance, whereas neutral or even pleasant vibrations could attract movement toward them.
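To make the polarity and modulation choices discussed above concrete, the following is a minimal sketch of an error-to-vibration mapping. The actuator sites, the dead zone, the saturation point, and the linear amplitude scaling are all illustrative assumptions, not parameters of any cited system.

```python
# Illustrative mapping from a joint-angle error to a vibrotactile command.
# Sites, thresholds, and scaling are hypothetical design choices.

def vibration_command(error_deg, polarity="attractive",
                      dead_zone=2.0, max_error=20.0, max_amplitude=1.0):
    """Map a signed joint-angle error (degrees) to (actuator site, amplitude).

    polarity="attractive": vibrate on the side the limb should move toward.
    polarity="repulsive":  vibrate on the opposite side, pushing the limb away.
    A dead zone keeps small, task-irrelevant errors from triggering feedback.
    """
    if abs(error_deg) < dead_zone:
        return None, 0.0                       # no vibration inside the dead zone
    toward = "left" if error_deg > 0 else "right"
    if polarity == "attractive":
        site = toward
    else:
        site = "right" if toward == "left" else "left"
    # Amplitude modulation: scale linearly with error, saturating at max_error.
    amplitude = min(abs(error_deg) / max_error, 1.0) * max_amplitude
    return site, amplitude
```

For a 10-degree error with the default settings, the attractive mode drives the actuator on the side of the target at half amplitude, whereas the repulsive mode drives the opposite actuator with the same intensity.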

Applying a meaningful, intuitive metaphor is believed to be crucial for vibrotactile feedback (Spelmezan, Hilgers, & Borchers, 2009; Spelmezan, Jacobs, et al., 2009), especially as the task becomes more complex (Jansen et al., 2004). Lieberman and Breazeal (2007) took advantage of the sensory saltation phenomenon to give feedback about erroneous arm movements: Four vibrators around a rotational joint (e.g., for wrist rotation) pulsed sequentially to create the sensation of a rotating signal. A simpler mapping was used for hinge joints; that is, bending the wrist too far inward resulted in a more intense vibration of the actuator placed below the wrist. Results were less promising for the rotational signal at the rotational joints than for the simpler mapping at the hinge joints (Lieberman & Breazeal, 2007). However, other designs that are effective for rotational movements may still be found. Detailed studies focusing on design, long-term learning, and practicability of vibrotactile feedback in sports are needed to rate its value. To date, many vibrotactile systems are still at an early stage of development.

Augmenting the movement errors or the environment

In the previous sections, strategies that guide the user to the correct movement (haptic guidance), as well as vibrotactile feedback, were reviewed. In this section, haptic strategies that augment the environmental conditions in order to optimally challenge or support the user are reviewed.

In general, the optimal learning condition may be given in an environment that challenges users depending on their actual performance, learning progress, individual skills, and biomechanics. Such a challenging environment could be obtained by a controller that automatically modifies spatial and/or temporal features of the target trajectory. To our knowledge, such an approach has been realized only once to date: a planar teleoperated system that adapted trajectory boundaries online for visuomanual tracking (Garcia-Hernandez & Parra-Vega, 2009). However, the trajectory boundaries in this teleoperated system were obtained by interpolation from a human master’s trajectory, not generated by an automated system.

Another approach that accounts for the demand to support or challenge each user adaptively and optimally has been presented recently: a controller that can modify its behavior continuously, from haptic guidance via path control to error augmentation, through continuous scaling of a torque field (Rauter et al., 2011). So far, only a proof of concept has been provided; the impact of this controller on motor learning has yet to be evaluated.

Error augmentation, also known as error amplification, amplifies the movement error by means of disturbing force fields. Since errors drive motor learning (Emken et al., 2007; Emken & Reinkensmeyer, 2005; Patton et al., 2006; Reisman et al., 2007; Thoroughman & Shadmehr, 2000; van Beers, 2009), error augmentation seems promising by definition. However, it has been shown that skilled subjects may profit more from learning with error amplification than from haptic guidance, in contrast to less skilled subjects (Cesqui et al., 2008; Milot, Marchal-Crespo, Green, Cramer, & Reinkensmeyer, 2010). These results are also supported by the challenge point framework, which states that advanced users can profit from challenging feedback but novices cannot (Guadagnoli & Lee, 2004). Random noise-based perturbations were shown to be more effective than haptic guidance in one experiment on a path-following task in healthy subjects (Lee & Choi, 2010). For stroke patients, error augmentation in the form of speed-dependent disturbance forces was shown to be more effective than haptic guidance for learning a reaching task (Patton et al., 2006). It has further been found that subjects increase limb impedance in order to limit the movement variations caused by the perturbations of error augmentation (Takahashi, Scheidt, & Reinkensmeyer, 2001).
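Haptic guidance and error augmentation can both be written as one linear force-field law whose gain is continuously scaled, in the spirit of the continuously scaled controller mentioned above. This is a generic sketch of that common formulation, with an illustrative gain, and not the control law of any particular cited study.

```python
# One-axis force-field law spanning haptic guidance and error augmentation.
# The numeric gain used below is an illustrative assumption.

def field_force(position, target, gain):
    """Force applied by the field: F = -gain * (position - target).

    gain > 0: haptic guidance (pushes the limb back toward the target).
    gain < 0: error augmentation (pushes the limb away, amplifying the error).
    gain = 0: unassisted movement (no field).
    """
    return -gain * (position - target)
```

Sliding the gain from positive through zero to negative reproduces the continuum from compliant guidance to challenging error amplification, which is one way a controller could be adapted to the learner's skill level.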

In the context of challenging control concepts, control strategies that apply resistance against the executed motion—that is, constraint-induced control concepts—could also enhance motor learning. For example, in rehabilitation, such controllers can hinder patients from executing wrong movement patterns, or even restrain the use of single body parts. By training against resistance, subjects can increase muscle strength and self-reported function and further reduce disabilities (Lambercy et al., 2011; Ouellette et al., 2004). Restraining movements of the nonimpaired limbs may reduce hyperreliance on the healthy limbs and improve function of the impaired limbs (Kolb, 1995; Ogden & Franz, 1917; Ostendorf & Wolf, 1981; Royet, 1991; Sterr et al., 2002; Wolf, Lecraw, Barton, & Jann, 1989). This increase in motor function could, in turn, increase the potential for motor learning, especially in the impaired limbs.

Another principle for haptic augmented feedback amplifies the environmental dynamics of a task so that subjects experience the task dynamics more intensely (Emken & Reinkensmeyer, 2005). The benefit of this “amplification of movement dynamics” has been exemplified in a walking task in which a force field was applied in the upward direction, depending on the horizontal velocity of the foot during the swing phase. Subjects adapted 26 % faster to the force field when it was transiently amplified instead of being kept constant.
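The swing-phase example can be sketched as a velocity-dependent field whose gain is transiently amplified early in training. The gains and the length of the transient phase below are illustrative assumptions; the published experiment's parameters are not reproduced here.

```python
# Sketch of "amplification of movement dynamics" for the swing-phase example.
# base_gain, amplification, and transient_trials are hypothetical values.

def swing_phase_force(horizontal_velocity, trial, base_gain=10.0,
                      amplification=2.0, transient_trials=20):
    """Upward force F = g * v_horizontal during the swing phase.

    For the first transient_trials trials, the gain is amplified so that the
    novel task dynamics are experienced more intensely; afterward the field
    relaxes to the nominal gain that the subject must finally adapt to.
    """
    gain = base_gain * (amplification if trial < transient_trials else 1.0)
    return gain * horizontal_velocity
```

The point of the transient amplification is that the error signal driving adaptation is larger early on, which the cited study associated with faster adaptation to the nominal field.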

Counterbalance-based control—that is, gravity compensation—does not guide the human subject but cancels or reduces the experienced gravitational force. This type of control relieves the subject of the burden of his or her own limb weight and helps the subject to focus on the motor task. This form of haptic support is mostly used in locomotion training for patients in an orthotic device such as the Lokomat (Hidler et al., 2009; Hornby et al., 2008; Husemann et al., 2007; Mayr et al., 2007). Only after counterbalance-based controllers had been introduced could even severely impaired subjects profit from automated treadmill training.

For motor learning, processing of proprioceptive information should be promoted when the key features of the movement are performed correctly, and not when they are performed incorrectly (Chiviacowsky & Wulf, 2007; Winstein, 1991). This can be addressed by haptic feedback strategies—for example, error augmentation or emphasizing control. However, until now, such emphasizing control strategies have neither been developed nor investigated in a systematic way.

Conclusion on concurrent haptic feedback

Position control strategies constitute the simplest form of haptic augmented feedback. Especially in patients, they seem to be useful in motor (re-)learning because of the motivation provided through successful task completion and increased training duration and intensity. Still, the strongest feature of position control is its instructional character, which has hardly been investigated and might have been underestimated in the field of motor learning to date. A variety of haptic guidance control strategies have revealed promising results, especially if an adaptation to the skill level of the learner was considered. Error augmentation was shown to outperform other haptic control strategies, since it intensifies error-based learning. Similarly, control concepts that modify the environment seem promising, since the task-inherent dynamics become more obvious to the learner. Other control strategies relieve subjects of their own weight and, thus, might enable learning of tasks that would otherwise exceed the learner’s physical abilities.

Most results on the effectiveness of haptic feedback have emerged from studies of simple motor tasks, and only a few from complex motor tasks, such as those in sports. Vibrotactile feedback devices have been applied in sports, but learning has generally not been assessed. In general, haptic feedback has rarely been tested for being more or less effective than other feedback modalities. There is a need for a more systematic evaluation of haptic feedback as a function of the task and of the subject’s current performance and skill level.

Multimodal feedback is promising

As has been shown in the previous sections, concurrent augmented unimodal visual, auditory, or haptic feedback has been reported to be able to accelerate complex motor learning. However, in daily life, multimodal, rather than unimodal, stimuli are present. Not only are humans used to processing stimuli in different modalities at the same time, but multimodal information even facilitates acting in the world. For example, seeing a person talking makes understanding easier, as compared with only hearing the person talking (Campbell, Dodd, & Burnham, 1998; Munhall, Gribble, Sacco, & Ward, 1996). Hence, it can be hypothesized that in motor learning, augmented multimodal feedback is more efficient than unimodal feedback. In this section, a theoretical examination of concurrent multimodal feedback in motor learning is given first. Then, studies are reviewed that applied audiovisual and visuohaptic feedback. To our knowledge, concurrent augmented audiohaptic, or even audiovisuohaptic, feedback for motor learning has not been investigated so far.

Theoretical considerations promoting multimodal feedback

Researchers have suggested that the threshold of neural activation is reached earlier by multimodal learning than by unimodal learning (Seitz & Dinse, 2007; Shams & Seitz, 2008). Multimodal stimuli are typically perceived more precisely and faster than unimodal stimuli (Doyle & Snowden, 2001; Forster, Cavina-Pratesi, Aglioti, & Berlucchi, 2002; Fort, Delpuech, Pernier, & Giard, 2002; Giard & Peronnet, 1999). This holds true even during active movements (Hecht, Reiner, & Karni, 2008), an effect commonly described as sensory enhancement or intersensory facilitation (Carson & Kelso, 2004). Importantly, it has also been suggested that multimodal learning strengthens multimodal representations and the connections between the unimodal areas (Shams & Seitz, 2008). Researchers have assumed that, after training with multimodal stimuli, multimodal processing is activated even if only unimodal stimuli are present (Kim et al., 2008; Seitz et al., 2006; Shams & Seitz, 2008). In fact, after training with audiovisual feedback, learning of motion perception tasks was still enhanced even after the auditory feedback had been withdrawn (Kim, Seitz, & Shams, 2008; Seitz, Kim, & Shams, 2006). Moreover, it has been shown that the enhanced performance with multimodal stimuli does not originate from additional alerting effects: Congruent audiovisual feedback was effective for learning the task, whereas incongruent audiovisual feedback was not (Kim et al., 2008); if alerting effects existed, both conditions would have led to enhanced learning. Related results have been reported in brain research: Congruent multimodal stimuli increased cellular activity in a supra-additive manner—that is, more than the sum of the responses to the individual stimuli. This phenomenon, known as response amplification, is especially pronounced when cross-modal stimuli derive from the same event and have a spatiotemporal linkage (Carson & Kelso, 2004).
In contrast, incongruent multimodal stimuli led to a subadditive response (Calvert, Campbell, & Brammer, 2000). These findings suggest that multimodal learning can be superior to unimodal learning, due to optimized neural activation and neural representation.

Many researchers believe that the positive effect of multimodal learning originates from a reduction of cognitive load due to a distribution of information processing. For instance, Burke et al. (2006) have stated that people have different cognitive resources for information processing, even though not all of them can be used simultaneously without interference. This refers to the multiple-resource theory of Wickens (2002), which states that distributing information across different modalities is superior to providing the same amount of information in one modality. The multiple-resource theory is in line with Baddeley’s (1992) theory of working memory: Visual-spatial information is maintained in one area of working memory, and auditory-verbal information in another. Both unimodal processors are believed to be controlled by the central executive, an attentional control system, but to be largely functionally independent. This allows working memory to be extended by the provision of multimodal information inputs (Baddeley, 1992). Indeed, users preferred multimodal to unimodal interaction when the complexity of a task was increased, which indicates that users self-manage the resources of working memory by shifting from unimodal to multimodal interaction with increasing cognitive demands (Oviatt, Coulston, & Lunsford, 2004). All these findings on memory and cognitive load imply that if workload is high in one modality, augmented feedback should be given in another modality or in a multimodal way. This might prevent cognitive overload and, therefore, might enhance motor learning.

The human senses differ in their capabilities. Vision is very precise in the perception of spatial information, whereas hearing is very precise in the perception of temporal information (Freides, 1974; Nesbitt, 2003; Welch & Warren, 1980). In particular, sound is effective for perceiving periodicity, regularity, and speed of motion (Kapur, Tzanetakis, Virji-Babul, Wang, & Cook, 2005; Kramer, 1994; Nesbitt, 2003). The perception of haptics can fulfill relatively high demands on processing both temporal and spatial information (Nesbitt, 2003) and is believed to be the most direct form of motor information (Lieberman & Breazeal, 2007), because haptic feedback can mechanically change the movement by applying forces on the body. Therefore, it has been suggested that augmented information should be displayed in the appropriate modality (Huang et al., 2005), according to the modality appropriateness hypothesis (Welch & Warren, 1980), or in a multimodal way, since imperfect estimates gained through one modality can be improved by more precise information in another modality (Hecht & Reiner, 2009; van Beers, Sittig, & Gon, 1999). Multimodal integration is believed to follow a general principle: The nervous system weights the information available from each modality in an optimal way (minimizing the variance of the final estimate), whereas attention can influence these weights (Alais & Burr, 2004; Ernst & Banks, 2002; van Beers et al., 1999). Designs of augmented multimodal feedback should exploit these modality-specific advantages. The optimal display modality or modalities should be chosen in order to gain the most precise perception; the challenge, however, is to prevent a dependency on the augmented feedback. Moreover, a possible trade-off between performance and comfort should be considered.
For instance, moving a pen inside a cyclic path was rated to be most comfortable with audiovisual alarm-type feedback, but accuracy was best with tactile or audiovisuotactile feedback (Sun, Ren, & Cao, 2011).
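The minimum-variance weighting principle invoked above (Ernst & Banks, 2002) can be written down compactly. The following is a generic sketch of maximum-likelihood cue combination, not code from any cited study; the example variances are illustrative.

```python
# Minimum-variance (maximum-likelihood) combination of unimodal estimates.
# Each estimate is weighted by its reliability, i.e., its inverse variance.

def fuse_estimates(estimates, variances):
    """Fuse unimodal estimates of the same quantity into one estimate.

    Returns (fused estimate, fused variance). The fused variance is never
    larger than the smallest unimodal variance, which is why adding a second,
    even noisier, modality can still sharpen perception.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused, fused_variance
```

For example, fusing a visual estimate of 10.0 (variance 1.0) with a haptic estimate of 14.0 (variance 4.0) yields an estimate of 10.8 with variance 0.8: the result lies closer to the more reliable cue, and its variance is below that of either cue alone.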

The study of Ronsse, Puttemans, et al. (2011) highlights the importance of choosing the right modality. A visual group was asked to learn an interlimb out-of-phase coordination task with Lissajous figures and did not benefit in terms of learning. For the auditory group, the turning points of the wrist movements were represented by tones during training. This rhythm was to match a target rhythm resembling the pattern of a galloping horse, which the subjects could listen to prior to training. The auditory group did learn; that is, they showed correct coordination even in a retention test. It might be that the auditory group could keep the target rhythm in mind and move their wrists accordingly, even without auditory feedback in the retention tests (Ronsse, Puttemans, et al., 2011). Therefore, it might be very effective to combine movement sonification with either visual or haptic concurrent feedback for learning movements with complex temporal patterns, such as rowing or swimming, since the by-then well-known sonification might be kept in mind on no-feedback trials.

On the basis of research on learning related to information processing and motion perception, positive effects of providing information in a multimodal way can be expected. The question arises as to whether multimodal concurrent feedback also has a positive effect on motor learning. According to the guidance and specificity-of-learning hypotheses, it should be examined how multimodal feedback can be used to calibrate kinesthetic afferent information for later recall—that is, after training. Since few studies on augmented concurrent multimodal feedback in motor learning have been published, the following sections discuss not only studies on motor learning, but also studies with multimodal feedback on interaction or motor perception. The latter studies did not apply augmented feedback about a kinematic or kinetic variable of a human movement, but about a task-relevant variable of the environment.

Audiovisual feedback enhances perception

A meta-analysis on audiovisual (and visuotactile) feedback in tasks such as alert, warning, interruption, target acquisition, communication, navigation, and driving or vehicle operation revealed that audiovisual feedback is most effective in single tasks under normal workload conditions. In tasks with a high workload, audiovisual feedback is rather detrimental. That meta-analysis suggested that the use of both the auditory and the visual channel increases workload because the two modalities are cognitively linked (Burke et al., 2006). However, in interaction tasks studied with navigation simulators, the provision of augmented information in an audiovisual way increased flight performance (Bronkhorst, Veltman, & Van Breda, 1996; Tannen, Nelson, Bolia, Warm, & Dember, 2004) or driving performance (Liu, 2001). These findings are in agreement with the multiple-resource theory described earlier (Wickens, 2002): Since vision is highly loaded in navigation tasks, a distribution of augmented information across the visual and auditory modalities is appropriate. Note that a flight or driving task places high demands on cognition, whereas the motor task itself is quite simple (Todorov et al., 1997).

Not only the representation of audiovisual information in navigation tasks, but also the representation of a movement in an audiovisual way seems to be superior to a visual or auditory representation alone. In the studies reported by Effenberg (2005), the subjects were asked to estimate the height of a countermovement jump. The jump was presented visually on a screen and/or by a sonification of the force on the force plate through loudness and pitch. The audiovisual condition led to the best estimation of jump height and to the highest reproducibility. Besides the sonification itself, the silent period during the flight phase of the jump could have facilitated the estimation, since its duration was directly correlated with jump height. In some cases, temporal aspects of a movement can imply spatial properties of the movement (Liebermann et al., 2002). However, multimodal information could also have enhanced performance through increased precision of perception, neural activity, and neural representation.

For movements with a small number of relevant variables, such as countermovement jumps, the representation of a single variable by visual and auditory features at the same time can increase task performance. For movements with a high number of relevant variables, providing feedback on different variables in different modalities might be effective. Such an approach was applied in a reaching task: Kinematic variables were each represented in a unimodal way, either by visual feedback (e.g., on hand orientation) or by sonification (e.g., of elbow flexion). The result was an engaging multimodal feedback design that enhanced performance during the reaching task (Chen et al., 2006; Huang et al., 2005; Wallis et al., 2007).

Regarding the reviewed studies, it is conceivable that audiovisual concurrent feedback could have positive effects on motor learning. However, retention tests without audiovisual feedback have generally not been conducted so far; they should be included in future studies in order to assess the impact of audiovisual feedback on motor learning. A systematic evaluation of the effectiveness of audiovisual feedback in motor learning may also complement the theories on multimodal information processing.

Visuohaptic feedback can be effective for spatiotemporal learning

Visuohaptic information has mostly been used to enhance task realism, rather than to give augmented feedback. In simple targeting tasks, visuotactile information decreased the error rate, as compared with visual information only (Oakley, McGee, Brewster, & Gray, 2000). During navigation in a driving simulator, reaction time and mental effort decreased when both visual and vibrotactile displays were present (Van Erp & Van Veen, 2004). In a simple ball-balancing computer game, subjects preferred combined haptic and visual feedback to visual feedback only; however, performance was not enhanced by the addition of haptic feedback (Swindells, Unden, & Sang, 2003). Visuotactile displays were reported to be most effective in various tasks with a high workload (Burke et al., 2006).

In particular, learning of temporal aspects can be accelerated with haptic guidance (Marchal-Crespo, McHughen, Cramer, & Reinkensmeyer, 2009), which also seems to hold when haptic guidance is added to visual feedback. In a study on drawing different shapes, Bluteau et al. (2008) showed that the addition of haptic guidance to visual concurrent feedback did not further enhance learning of shape drawing; instead, movement fluidity was improved by the additional haptic guidance, but only if it was force controlled rather than position controlled (Bluteau et al., 2008). For the visual feedback, the line drawn by the subject was superimposed on the target figure. Haptic feedback was applied when the end-effector movement deviated from the target movement, in order to guide the subject back to the target trajectory. Fluidity and speed were also enhanced in handwriting after children had trained with visuohaptic feedback, as compared with classical handwriting training (Palluel-Germain et al., 2007). For learning a 3-D hand movement, haptic guidance combined with watching the target trajectory was more effective than unimodal training alone. Interestingly, training with haptic guidance alone enhanced timing-related performance, whereas visual training alone facilitated learning of position and shape (Feygin et al., 2002). In dancing, performance improved more with vibrotactile timing cues than with video instruction; however, retention was not tested (Nakamura et al., 2005).

For trajectory learning, the use of visuohaptic displays to provide augmented information was reported to be beneficial mainly, but not exclusively, in reducing spatial errors. In a small study on a simple arm movement, subjects benefited slightly more from visuo-vibrotactile feedback than from visual feedback only, and least from vibrotactile feedback only (Ruffaldi, Filippeschi, et al., 2009). In teaching children handwriting, visuohaptic feedback training was more effective than visual feedback alone (Garcia-Hernandez & Parra-Vega, 2009). In a study on shape drawing, visual and visuohaptic feedback enhanced learning similarly (Yang et al., 2008). Haptic guidance combined with visual instruction was even marginally less effective in teaching a 3-D hand movement than was visual instruction alone (Liu et al., 2006). Since the design of visual and haptic feedback can be further explored and improved, it might be too early to draw strong conclusions on the effectiveness of visuohaptic feedback. In general, the related studies suggest superiority over unimodal feedback, especially for training the temporal aspects of a movement.

So-called patient-cooperative haptic control strategies have been successfully applied in combination with visual feedback in gait rehabilitation. The haptic assist-as-needed strategy allows patients to control their movements while still providing guidance and support depending on the deviation from the target movement (Duschau-Wicke et al., 2010). The visual feedback incorporated the legs of an avatar of the patient and a second pair of semitransparent legs that indicated the target movement. The impacts of the visual and the haptic feedback were not assessed separately. Visuohaptic feedback strategies might also be applicable to the field of sports; the lack of studies in this area might originate from the large technical effort that is needed to provide augmented haptic feedback for complex movements in sports.
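In the spirit of assist-as-needed control, deviation-dependent support with a tolerance band can be sketched as follows. The band width, stiffness, and force cap are illustrative assumptions, not the parameters of the cited gait controller.

```python
# Sketch of an assist-as-needed force law: no force inside a tolerance band
# around the target, increasing (capped) support outside it. All numeric
# parameters are hypothetical.

def assist_force(deviation, band=0.05, stiffness=200.0, max_force=50.0):
    """Force toward the target, proportional to how far the patient has left
    the tolerance band; zero inside the band so the patient stays in control."""
    excess = abs(deviation) - band
    if excess <= 0.0:
        return 0.0                      # inside the band: patient moves freely
    force = min(stiffness * excess, max_force)
    return -force if deviation > 0 else force   # always directed toward target
```

The tolerance band is what makes the strategy "cooperative": small, self-corrected deviations produce no robot intervention, while large deviations are met with proportional support.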

Conclusions on multimodal feedback

The reviewed studies and related theories reveal that multimodal feedback can enhance motor learning. The positive effects are often explained by a reduction of workload, which is believed to be beneficial during complex motor task learning. Relevant information should be provided in an interpretable way and should not overwhelm the learner (Guadagnoli & Lee, 2004), which could be achieved by multimodal feedback designs that take advantage of each modality’s strengths. Another benefit of multimodal concurrent feedback may be its support for learning several aspects of a movement simultaneously. For example, augmented visual feedback could facilitate learning of the spatial aspects of the movement, and at the same time, auditory feedback could support learning of the temporal aspects. However, prior to any comparison of multimodal augmented feedback strategies, feedback designs should be optimized and systematically evaluated.

General suggestions on future application of augmented feedback

Earlier studies on simple tasks found that concurrent feedback is detrimental to motor learning, but more recent studies have revealed that concurrent feedback can be effective if the motor task to be learned is complex (Fig. 1). Particularly in the early, attention-demanding learning phase, concurrent augmented feedback may help the novice to understand the new structure of the movement faster and may prevent cognitive overload, which may accelerate the learning process. Concurrent augmented feedback may also be beneficial for experts, since learning specific details of a movement can be complex when the expert has to overcome automated but incorrect movement patterns.

Consequently, we suggest that complex motor learning should start with concurrent feedback in order to facilitate an understanding of the movement in principle. Thereafter, the training should switch to a lower frequency of concurrent feedback, or to terminal feedback, to facilitate automation of the movement. Self-controlled feedback offers one possibility for adapting the feedback to the learner’s current phase: The learners decide for themselves when they want feedback, about what, and how—for example, in which modality. Besides the self-regulation of feedback frequency, self-controlled feedback has the advantage of highly involving and motivating the learner (Wulf, 2007b) and may also promote self-efficacy. It might be necessary to let the learner select feedback within some constraints based on motor learning theories—for example, movement features that are meaningful for the current skill level, a feedback frequency within a specified range, or an appropriate modality or combination of modalities. Still, these selections might not be optimal, because learners’ self-estimation of their current performance might be wrong, and learners might not have the valuable expertise of a trainer who knows which actions are needed to make progress. Thus, the learner might become stuck at a certain skill level. One solution for overcoming stagnation in learning is to monitor the learning process and to switch the feedback accordingly. An intelligent virtual-trainer feedback system, introduced by Rauter, Baur, Sigrist, Riener, and Wolf (2010), can switch the feedback modality, the feedback variables or movement features, and the target thresholds in order to provide optimized and individualized training. With such a system, learners can always be challenged adequately, which is important for successful motor learning (Guadagnoli & Lee, 2004).
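The monitor-and-switch logic described above can be illustrated with a toy scheduler that fades concurrent feedback once performance is good and rotates the feedback modality when learning stagnates. Every threshold, name, and the modality cycle below are hypothetical; this is not the logic of the cited virtual-trainer system.

```python
# Toy sketch of adaptive feedback scheduling: fade to terminal feedback when
# performance is good, switch modality when a plateau is detected. All
# parameters and the modality list are hypothetical illustrations.

def next_feedback(errors, modality, good_error=0.1, window=5, stall=0.01):
    """Given recent per-trial errors, return (mode, modality) for the next trial."""
    if errors and errors[-1] < good_error:
        return "terminal", modality            # movement understood: reduce guidance
    if len(errors) >= 2 * window:
        prev = sum(errors[-2 * window:-window]) / window
        last = sum(errors[-window:]) / window
        if prev - last < stall:                # plateau: improvement below threshold
            order = ["visual", "auditory", "haptic"]
            modality = order[(order.index(modality) + 1) % len(order)]
    return "concurrent", modality
```

A real system would switch among richer options (feedback variables, movement features, target thresholds), but the core loop is the same: estimate the learning trend from recent performance and intervene only when it flattens.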

In order to prevent cognitive overload in a complex motor task, augmented feedback may be provided either in a modality that has free capacities or in a multimodal way. Multimodal feedback can be applied to exploit the specific advantages of each modality, such as the aptitude of visualizations for displaying spatial aspects and of sound or haptic feedback for displaying temporal aspects. In general, feedback design should start from the relevant variable of the movement (Spinks & Smith, 1994; Wulf et al., 1998) or its relevant key features (Huegel et al., 2009; Todorov et al., 1997; Wolpert & Flanagan, 2010); thereafter, an appropriate modality and augmented feedback design can be selected. It might also be interesting to investigate whether a distinct feedback design in a specific modality can lead to a faster minimization of the cost functions discussed in optimal motor control theories (Friston, 2011; Todorov, 2004; Todorov & Jordan, 2002). Thus, in terms of future feedback strategies, we suggest first evaluating feedback designs in each modality on their interpretability, practicability, and motivational character.
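
A minimal sketch of such a division of labor between modalities might look as follows: spatial error is mapped to a visual cursor offset, while timing error is sonified as a pitch deviation around a reference tone. The gains, the 440-Hz reference, and the semitones-per-second mapping are arbitrary illustrative assumptions, not values from any study reviewed here.

```python
# Illustrative multimodal mapping under assumed parameters:
# vision carries the spatial error, audition the temporal error.

BASE_FREQ_HZ = 440.0       # reference pitch meaning "on time" (assumption)
PIXELS_PER_METER = 500.0   # visual display gain (assumption)
SEMITONE = 2 ** (1 / 12)   # equal-tempered semitone frequency ratio

def visual_feedback(spatial_error_m):
    """Map spatial error (metres) to a cursor offset in pixels."""
    return spatial_error_m * PIXELS_PER_METER

def auditory_feedback(timing_error_s, semitones_per_second=12.0):
    """Map timing error (seconds; positive = too late) to a pitch in Hz.
    Being late lowers the pitch, being early raises it."""
    shift = -timing_error_s * semitones_per_second
    return BASE_FREQ_HZ * SEMITONE ** shift
```

The point of the split is that each error dimension goes to the modality best suited to display it, so neither channel has to encode both spatial and temporal information at once.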

This review highlights the importance of comparing the effectiveness of different feedback strategies presented in different modalities. It also discusses important factors that can influence the effectiveness of a feedback strategy. Since an external focus has been reported to be more beneficial for motor learning than an internal focus (Shea & Wulf, 1999; Wulf, 2007a; Wulf & Shea, 2002; Wulf et al., 2010), feedback should be given about a variable that induces an external focus of attention. Moreover, providing feedback about a general variable in early learning phases and about more specific aspects in later phases is advised (Chollet et al., 1992). The feedback should not force the learner to correct task-irrelevant errors (Liu & Todorov, 2007; Todorov, 2004; Todorov & Jordan, 2002; Wei & Körding, 2009; Wolpert et al., 2011). The mapping function of the feedback should be treated carefully, since a change in perceptual information can alter motor control (Fernandez & Bootsma, 2008; Kovacs, Buchanan, & Shea, 2008). It should be examined whether visual, auditory, and haptic feedback induce a similar dependency; here, measuring brain activation in different feedback conditions (e.g., Carel et al., 2000; Mima et al., 1999; Weiller et al., 1996), as compared with no-feedback conditions (e.g., Debaere, Wenderoth, Sunaert, Van Hecke, & Swinnen, 2003, 2004), can lead to fundamental insights. Accordingly, it should be explored how this dependency can be minimized without losing the information that is relevant to calibrating the movement (Robin et al., 2005). In some cases, conveying a cognitive strategy to the learners can effectively simplify motor control (Carson & Kelso, 2004), which may be achieved by mediating a metaphor for their pattern of action in a prescriptive way (Schmidt & Wrisberg, 2008).
Recently, Krakauer and Mazzoni (2011) suggested that an explicit strategy may help to solve a task, or that explicit cognitive processes could enhance implicit processes. Accordingly, augmented feedback might be very effective if the information it provides explicitly also mediates the relevant aspects of the movement implicitly, so as to best enhance performance in no-feedback conditions.
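
The recommendation not to feed back task-irrelevant errors can be sketched by projecting the measured error onto a task-relevant direction and reporting only that component, in the spirit of the minimal-intervention idea of optimal feedback control (Todorov & Jordan, 2002). The two-dimensional setting and the chosen direction below are purely illustrative assumptions.

```python
def task_relevant_error(error, relevant_direction):
    """Project a 2-D error vector onto the task-relevant direction and
    return only that scalar component; the orthogonal (task-irrelevant)
    part of the error is deliberately discarded and not fed back."""
    ex, ey = error
    dx, dy = relevant_direction
    norm = (dx * dx + dy * dy) ** 0.5
    ux, uy = dx / norm, dy / norm   # unit vector along the task axis
    return ex * ux + ey * uy        # scalar projection = feedback signal
```

For example, if only the horizontal position of an end point matters for the task, a purely vertical deviation would yield a zero feedback signal and would not prompt any correction.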

General conclusions

To date, general conclusions on the efficiency of concurrent feedback cannot be drawn, for the following main reasons. First, a large diversity of movements has been investigated so far, but a systematic evaluation within movement classes is lacking; within such classes, knowledge gained about the effectiveness of a certain augmented feedback strategy might be transferable in the future. Second, a systematic comparison of concurrent feedback with other feedback strategies, such as terminal feedback, is often missing but is necessary to find the best strategy. Third, feedback designs within a modality should be evaluated prior to a comparison with feedback displayed in other modalities; conclusions such as “visual feedback is more effective than auditory feedback” are weak if the auditory feedback is not well designed. Fourth, this review shows that research on multimodal feedback in complex motor learning is still in its early phases, since few studies have been reported.

Up to now, mostly low-dimensional, simple, and rather artificial laboratory tasks have been investigated, even though, in real life, most motor tasks are multidimensional and complex (Winstein, 1991; Wulf & Shea, 2002). It is questionable how many of the insights gained from laboratory tasks can be transferred to other tasks (Krakauer & Mazzoni, 2011) and whether the results gained from motor control studies also hold for complex learning with augmented feedback. The low number of studies on real-world tasks might be explained by the huge (technical) effort that is needed to cope with the given complexity (Wolpert et al., 2011), especially to provide haptic feedback. However, simulators in virtual environments can facilitate research on augmented unimodal and multimodal feedback in motor learning in sports and rehabilitation (Holden, 2005). With their displays providing unimodal and multimodal concurrent and terminal feedback, such simulators enable investigation in a safe, modifiable, and realistic environment. Task complexity, feedback designs, feedback variables, and modalities can be manipulated in order to optimally challenge the learner, a main factor in accelerating motor learning (Guadagnoli & Lee, 2004). An optimized combination of simulator training with augmented feedback and real training may be very effective (Todorov et al., 1997) and might be the key to a successful inclusion of simulator training in different sports, as has been suggested for rowing (Smith & Loschner, 2002; Ruffaldi et al., 2011; Sigrist, Rauter, et al., 2011) and skiing (Kirby, 2009). As has been highlighted recently, there is a strong need to validate the effectiveness of (sports) simulator training more carefully—that is, the transferability of the trained skills to the real world (Miles, Pop, Watt, Lawrence, & John, 2012; Ruffaldi et al., 2011).

Acknowledgments

Many thanks go to James Sulzer for proofreading the manuscript, to Laura Marchal-Crespo and Olivier Lambercy for the valuable inputs about haptic feedback strategies, and to Prof. Nicole Wenderoth for enriching comments.

References

  1. Alais, D., & Burr, D. (2004). The ventriloquist effect results from near-optimal bimodal integration. Current Biology, 14(3), 257–262.PubMedGoogle Scholar
  2. Amirabdollahian, F., Loureiro, R., & Harwin, W. (2002). Minimum jerk trajectory control for rehabilitation and haptic applications. In Proceedings of the 2002 IEEE International Conferene on Robotics & Automation (Vol. 4, pp. 3380–3385). IEEE.Google Scholar
  3. Anderson, R., Harrison, A., & Lyons, G. M. (2005). Rowing: Accelerometry-based feedback – can it improve movement consistency and performance in rowing? Sports Biomechanics, 4(2), 179–195.PubMedCrossRefGoogle Scholar
  4. Ayres, A. J. (2005). Sensory integration and the child. Western Psychological Services.Google Scholar
  5. Baddeley, A. D. (1992). Working memory. Science, 255, 556–559.PubMedCrossRefGoogle Scholar
  6. Bark, K., Khanna, P., Irwin, R., Kapur, P., Jax, S. A., Buxbaum, L. J., & Kuchenbecker, K. J. (2011). Lessons in using vibrotactile feedback to guide fast arm motions. In IEEE World Haptics Conference 2011 (pp. 355–360.Google Scholar
  7. Barrass, S. (2005). A perceptual framework for the auditory display of scientific data. ACM Transactions on Applied Perception (TAP), 2(4), 389–402.CrossRefGoogle Scholar
  8. Batavia, M., Gianutsos, J. G., Vaccaro, A., & Gold, J. T. (2001). A do-it-yourself membrane-activated auditory feedback device for weight bearing and gait training: A case report. Archives of Physical Medicine and Rehabilitation, 82(4), 541–545.PubMedCrossRefGoogle Scholar
  9. Baudry, L., Leroy, D., Thouvarecq, R., & Chollet, D. (2006). Auditory concurrent feedback benefits on the circle performed in gymnastics. Journal of Sports Sciences, 24(2), 149–156.PubMedCrossRefGoogle Scholar
  10. Bergman, A. (1994). Auditory scene analysis. MIT Press.Google Scholar
  11. Bernier, P. M., Chua, R., & Franks, I. M. (2005). Is proprioception calibrated during visually guided movements? Experimental Brain Research, 167(2), 292–296.CrossRefGoogle Scholar
  12. Blandin, Y., Toussaint, L., & Shea, C. H. (2008). Specificity of practice: Interaction between concurrent sensory information and terminal feedback. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34(4), 994–1000.PubMedCrossRefGoogle Scholar
  13. Blaya, J. A., & Herr, H. (2004). Adaptive control of a variable-impedance ankle-foot orthosis to assist drop-foot gait. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 12(1), 24–31.PubMedCrossRefGoogle Scholar
  14. Bluteau, J., Coquillart, S., Payan, Y., & Gentaz, E. (2008). Haptic guidance improves the visuo-manual tracking of trajectories. PLoS One, 3(3), e1775.PubMedCrossRefGoogle Scholar
  15. Bonebright, T., Miner, N., Goldsmith, T., & Caudell, T. (2005). Data collection and analysis techniques for evaluating the perceptual qualities of auditory stimuli. ACM Transactions on Applied Perception (TAP), 2(4), 505–516.CrossRefGoogle Scholar
  16. Boyles, J., Panzer, S., & Shea, C. H. (2012). Increasingly complex bimanual multi-frequency coordination patterns are equally easy to perform with on-line relative velocity feedback. Experimental Brain Research, 216(4), 515–525.CrossRefGoogle Scholar
  17. Braun, D. A., Mehring, C., & Wolpert, D. M. (2010). Structure learning in action. Behavioural Brain Research, 206(2), 157–165.PubMedCrossRefGoogle Scholar
  18. Brickman, B. J., Hettinger, L. J., Roe, M. M., Lu, L., Repperger, D. W., & Haas, M. W. (1996). Haptic specification of environmental events: Implications for the design of adaptive, virtual interfaces. In Proceedings of the IEEE 1996 Virtual Reality Annual International Symposium, 1996 (pp. 147–153).Google Scholar
  19. Bronkhorst, A. W., Veltman, J. A., & Van Breda, L. (1996). Application of a three-dimensional auditory display in a flight task. Human Factors, 38(1), 23–33.PubMedCrossRefGoogle Scholar
  20. Brown, L. M., Brewster, S. A., Ramloll, S. A., Burton, R., & Riedel, B. (2003). Design guidelines for audio presentation of graphs and tables. In Proceedings of the 2003 International Conference on Auditory Display.Google Scholar
  21. Burke, J. L., Prewett, M. S., Gray, A. A., Yang, L., Stilson, F. R. B., Coovert, M. D., . . . Redden, E. (2006). Comparing the effects of visual-auditory and visual-tactile feedback on user performance: A meta-analysis. In Proceedings of the 8th international conference on Multimodal interfaces (pp. 108–117). New York, NY, USA: ACM.Google Scholar
  22. Calvert, G. A., Campbell, R., & Brammer, M. J. (2000). Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex. Current Biology, 10(11), 649–657.PubMedCrossRefGoogle Scholar
  23. Camachon, C., Jacobs, D. M., Huet, M., Buekers, M., & Montagne, G. (2007). The role of concurrent feedback in learning to walk through sliding doors. Ecological Psychology, 19(4), 367–382.CrossRefGoogle Scholar
  24. Campbell, R., Dodd, B., & Burnham, D. K. (1998). Hearing by eye II: Advances in the psychology of speechreading and auditory-visual speech. Psychology Press.Google Scholar
  25. Carel, C., Loubinoux, I., Boulanouar, K., Manelfe, C., Rascol, O., Celsis, P., & Chollet, F. (2000). Neural substrate for the effects of passive training on sensorimotor cortical representation: A study with functional magnetic resonance imaging in healthy subjects. Journal of Cerebral Blood Flow and Metabolism, 20(3), 478–484.PubMedGoogle Scholar
  26. Carraro, G. U., Cortes, M., Edmark, J. T., & Ensor, J. R. (1998). The peloton bicycling simulator. In Proceedings of the third symposium on Virtual reality modeling language (pp. 63–70). ACM New York, NY, USA.Google Scholar
  27. Carson, R. G., & Kelso, J. A. S. (2004). Governing coordination: Behavioural principles and neural correlates. Experimental Brain Research, 154(3), 267–274.CrossRefGoogle Scholar
  28. Cesqui, B., Aliboni, S., Mazzoleni, S., Carrozza, M., Posteraro, F., & Micera, S. (2008). On the use of divergent force fields in robot-mediated neurorehabilitation. In 2nd IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics, 2008. BioRob 2008 (pp. 854–861). IEEE.Google Scholar
  29. Chang, J. Y., Chang, G. L., Chien, C. J. C., Chung, K. C., & Hsu, A. T. (2007). Effectiveness of two forms of feedback on training of a joint mobilization skill by using a joint translation simulator. Physical Therapy, 87(4), 418–430.PubMedCrossRefGoogle Scholar
  30. Chen, X., & Agrawal, S. (2012). Assisting versus repelling force-feedback for human learning of a line following task. In 4th IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2012 (pp. 344–349), Rome, Italy. IEEE.Google Scholar
  31. Chen, Y., Huang, H., Xu, W., Wallis, R. I., Sundaram, H., Rikakis, T., . . . He, J. (2006). The design of a real-time, multimodal biofeedback system for stroke patient rehabilitation. In Proceedings of the 14th annual ACM International Conference on Multimedia (pp. 763–772).Google Scholar
  32. Chen, X., Ragonesi, C., Agrawal, S., & Galloway, J. (2010). Training toddlers seated on mobile robots to drive indoors amidst obstacles. In 3rd IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2010 (pp. 576–581). IEEE.Google Scholar
  33. Chiviacowsky, S., & Wulf, G. (2002). Self-controlled feedback: Does it enhance learning because performers get feedback when they need it? Research Quarterly for Exercise and Sport, 73(4), 408–415.PubMedGoogle Scholar
  34. Chiviacowsky, S., & Wulf, G. (2005). Self-controlled feedback is effective if it is based on the learner’s performance. Research Quarterly for Exercise and Sport, 76(1), 42–48.PubMedCrossRefGoogle Scholar
  35. Chiviacowsky, S., & Wulf, G. (2007). Feedback after good trials enhances learning. Research Quarterly for Exercise and Sport, 78, 40–47.PubMedCrossRefGoogle Scholar
  36. Chollet, D., Madani, M., & Micallef, J. P. (1992). Biomechanics and medicine in swimming, chapter Effects of two types of biomechanical bio-feedback on crawl performance (pp. 57–62). London: E & FN Spon.Google Scholar
  37. Chollet, D., Micallef, J. P., & Rabischong, P. (1988). Swimming Science V, chapter Biomechanical signals for external biofeedback to improve swimming techniques (pp. 389–396). Champaign, Illinois: Human Kinetics Books.Google Scholar
  38. Chua, P., Crivella, R., Daly, B., Hu, N., Schaaf, R., Ventura, D., . . . Pausch, R. (2003). Training for physical tasks in virtual environments: Tai chi. In Virtual Reality Conference, IEEE (pp. 87–94). Los Alamitos, CA, USA: IEEE Computer Society.Google Scholar
  39. Clarkson, P. M., Robert, J., Watkins, A., & Foley, P. (1986). The effect of augmented feedback on foot pronation during barre exercise in dance. Research Quarterly for Exercise and Sport, 57(1), 33–40.Google Scholar
  40. Conditt, M., Gandolfo, F., & Mussa-Ivaldi, F. (1997). The motor system does not learn the dynamics of the arm by rote memorization of past experience. Journal of Neurophysiology, 78(1), 554.PubMedGoogle Scholar
  41. Criscimagna-Hemminger, S., Donchin, O., Gazzaniga, M., & Shadmehr, R. (2003). Learned dynamics of reaching movements generalize from dominant to nondominant arm. Journal of Neurophysiology, 89(1), 168.PubMedCrossRefGoogle Scholar
  42. Crowell, H. P., & Davis, I. S. (2011). Gait retraining to reduce lower extremity loading in runners. Clinical Biomechanics, 26(2), 78–83.PubMedCrossRefGoogle Scholar
  43. David, N., Bewernick, B. H., Cohen, M. X., Newen, A., Lux, S., Fink, G. R., Shah, N. J., Vogeley, K. (2006). Neural representations of self versus other: Visual-spatial perspective taking and agency in a virtual ball-tossing game. Journal of Cognitive Neuroscience, 18(6), 898–910.PubMedCrossRefGoogle Scholar
  44. Debaere, F., Wenderoth, N., Sunaert, S., Van Hecke, P., & Swinnen, S. P. (2003). Internal vs external generation of movements: Differential neural pathways involved in bimanual coordination performed in the presence or absence of augmented visual feedback. Neuropsychologia, 19(3), 764–776.Google Scholar
  45. Debaere, F., Wenderoth, N., Sunaert, S., Van Hecke, P., & Swinnen, S. P. (2004). Changes in brain activation during the acquisition of a new bimanual coordination task. Neuropsychologia, 42(7), 855–867.PubMedCrossRefGoogle Scholar
  46. Doyle, M. C., & Snowden, R. J. (2001). Identification of visual stimuli is improved by accompanying auditory stimuli: The role of eye movements and sound location. Perception, 30(7), 795–810.PubMedCrossRefGoogle Scholar
  47. Drobny, D., & Borchers, J. (2010). Learning basic dance choreographies with different augmented feedback modalities. In Proceedings of the 28th of the international conference extended abstracts on Human factors in computing systems (pp. 3793–3798). ACM.Google Scholar
  48. Drobny, D., Weiss, M., & Borchers, J. (2009). Saltate!: A sensor-based system to support dance beginners. In Proceedings of the 27th international conference extended abstracts on Human factors in computing systems (pp. 3943–3948). ACM.Google Scholar
  49. Dubus, G. (2012). Evaluation of four models for the sonification of elite rowing. Journal on Multimodal User Interfaces, 1–14.Google Scholar
  50. Dürrer, B. (2001). Investigations into the design of auditory displays. PhD thesis, University of Bochum.Google Scholar
  51. Duschau-Wicke, A., von Zitzewitz, J., Caprez, A., Lunenburger, L., & Riener, R. (2010). Path control: A method for patient-cooperative robot-aided gait rehabilitation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 18(1), 38–48.PubMedCrossRefGoogle Scholar
  52. Eaves, D. L., Breslin, G., van Schaik, P., Robinson, E., & Spears, I. R. (2011). The short-term effects of real-time virtual reality feedback on motor learning in dance. Presence: Teleoperators and Virtual Environments, 20(1), 62–77.CrossRefGoogle Scholar
  53. Effenberg, A. O. (2000a). Medien im Sport - zwischen Phänomen und Virtualität, chapter Der bewegungsdefinierte Sound: Ein akustisches Medium für die Darstellung, Vermittlung und Exploration motorischer Prozesse (pp. 67–76). Schorndorf.Google Scholar
  54. Effenberg, A. O. (2000b). Zum Potential komplexer akustischer Bewegungsinformationen für die Technikansteuerung. Leistungssport, 5, 19–25.Google Scholar
  55. Effenberg, A. O. (2005). Movement sonification: Effects on perception and action. IEEE Multimedia, 12(2), 53–59.CrossRefGoogle Scholar
  56. Effenberg, A., & Mechling, H. (1999). Zur Funktion audiomotorischer Verhaltenskomponenten. Sportwissenschaft, 29, 200–215.Google Scholar
  57. Effenberg, A. O., & Mechling, H. (1998). Bewegung hörbar machen — Warum? Zur Perspektive einer systematischen Umsetzung von Bewegung in Klänge. Psychologie und Sport, 5(1), 28–38.Google Scholar
  58. Eldridge, A. (2006). Issues in auditory display. Artificial Life, 12(2), 259–274.PubMedCrossRefGoogle Scholar
  59. Emken, J. L., Benitez, R., & Reinkensmeyer, D. J. (2007). Human-robot cooperative movement training: Learning a novel sensory motor transformation during walking with robotic assistance-as-needed. Journal of Neuroengineering and Rehabilitation, 4(1), 8.PubMedCrossRefGoogle Scholar
  60. Emken, J., & Reinkensmeyer, D. (2005). Robot-enhanced motor learning: Accelerating internal model formation during locomotion by transient dynamic amplification. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 13(1), 33–39.PubMedCrossRefGoogle Scholar
  61. Endo, T., Kawasaki, H., Kigaku, K., & Mouri, T. (2007). Transfer method of force information using five-fingered haptic interface robot. In EuroHaptics Conference, 2007 and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. World Haptics 2007. Second Joint (pp. 599–600). IEEE.Google Scholar
  62. Eriksson, M., & Bresin, R. (2010). Improving running mechanics by use of interactive sonification. In Proceedings of ISon 2010, 3rd Interactive Sonification Workshop (pp. 95–98). Stockholm, Sweden.Google Scholar
  63. Eriksson, M., Halvorsen, K., & Gullstrand, L. (2011). Immediate effect of visual and auditory feedback to control the running mechanics of well-trained athletes. Journal of Sports Sciences, 29(3), 253–262.PubMedCrossRefGoogle Scholar
  64. Erni, T., & Dietz, V. (2001). Obstacle avoidance during human walking: Learning rate and cross-modal transfer. The Journal of Physiology, 534(1), 303.PubMedCrossRefGoogle Scholar
  65. Ernst, M. O., & Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(6870), 429–433.PubMedCrossRefGoogle Scholar
  66. Fernandez, L., & Bootsma, R. J. (2008). Non-linear gaining in precision aiming: Making fitts’ task a bit easier. Acta Psychologica, 129(2), 217–227.PubMedCrossRefGoogle Scholar
  67. Fernery, V. G., Moretto, P. G., Hespel, J. M. G., Thevenon, A., & Lensel, G. (2004). A real-time plantar pressure feedback device for foot unloading. Archives of Physical Medicine and Rehabilitation, 85(10), 1724–1728.CrossRefGoogle Scholar
  68. Feygin, D., Keehner, M., & Tendick, R. (2002). Haptic guidance: Experimental evaluation of a haptic training method for a perceptual motor skill. In Proceedings of the 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (pp. 40–47), Orlando, FL, USA.Google Scholar
  69. Fitts, P. M., & Posner, M. I. (1967). Human performance. Brooks/Cole Publishing Co.Google Scholar
  70. Flash, T., & Gurevich, I. (1991). Human motor adaptation to external loads (Vol. 13).Google Scholar
  71. Flash, T., & Hogan, N. (1985). The coordination of arm movements: An experimentally confirmed mathematical model. Journal of Neuroscience, 5, 1688–1703.PubMedGoogle Scholar
  72. Flowers, J. H. (2005). Thirteen years of reflection on auditory graphing: Promises, pitfalls, and potential new directions. In First International Symposium on Auditory Graphs (AGS2005) (pp. 1–5).Google Scholar
  73. Forster, B., Cavina-Pratesi, C., Aglioti, S. M., & Berlucchi, G. (2002). Redundant target effect and intersensory facilitation from visual-tactile interactions in simple reaction time. Experimental Brain Research, 143(4), 480–487.CrossRefGoogle Scholar
  74. Fort, A., Delpuech, C., Pernier, J., & Giard, M. H. (2002). Dynamics of cortico-subcortical cross-modal operations involved in audio-visual object detection in humans. Cerebral Cortex, 12(10), 1031–1039.PubMedCrossRefGoogle Scholar
  75. Fothergill, S. (2010). Examining the effect of real-time visual feedback on the quality of rowing technique. In Procedia Engineering (Vol. 2, pp. 3083–3088). Elsevier.Google Scholar
  76. Fox, J., & Carlile, J. (2005). SoniMime: Movement sonification for real-time timbre shaping. In Proceedings of the 2005 Conference on New Interfaces for Musical Expression (pp. 242–243). Singapore: National University of Singapore.Google Scholar
  77. Freides, D. (1974). Human information processing and sensory modality: Cross-modal functions, information complexity, memory, and deficit. Psychological Bulletin, 81(5), 284–310.PubMedCrossRefGoogle Scholar
  78. Frisoli, A., Ruffaldi, E., Bagnoli, L., Filippeschi, A., Avizzano, C. A., Vanni, F., & Bergamasco, M. (2008). Preliminary design of rowing simulator for in-door skill training. In Proceedings of the 2008 Ambi-Sys Workshop on Haptic User Interfaces in Ambient Media Systems (pp. 1–8). Quebec City, Canada: ICST Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering.Google Scholar
  79. Friston, K. (2011). What is optimal about motor control? Neuron, 72(3), 488–498.PubMedCrossRefGoogle Scholar
  80. Garcia-Hernandez, N., & Parra-Vega, V. (2009). Active and efficient motor skill learning method used in a haptic teleoperated system. In The 18th IEEE International Symposium on Robot and Human Interactive Communication, 2009. RO-MAN 2009 (pp. 915–920). IEEE.Google Scholar
  81. Gauthier, G. (1985). Visually and acoustically augmented performance feedback as an aid in motor control learning: A study of selected components of the rowing action. Journal of Sports Sciences, 3(1), 3–26.PubMedCrossRefGoogle Scholar
  82. Ghez, C., Rikakis, T., DuBois, R., & Cook, P. (2000). An auditory display system for aiding interjoint coordination. In Proceedings of the International Conference on Auditory Displays, Atlanta, Georgia.Google Scholar
  83. Giard, M. H., & Peronnet, F. (1999). Auditory-visual integration during multimodal object recognition in humans: A behavioral and electrophysiological study. Journal of Cognitive Neuroscience, 11(5), 473–490.PubMedCrossRefGoogle Scholar
  84. Giese, M. A., & Poggio, T. (2003). Neural mechanisms for the recognition of biological movements. Nature Reviews Neuroscience, 4(3), 179–192.PubMedCrossRefGoogle Scholar
  85. Gillespie, R., O’Modhrain, M., Tang, P., Zaretzky, D., & Pham, C. (1998). The virtual teacher. In Proceedings of the ASME Dynamic Systems and Control Division (Vol. 64, pp. 171–178).Google Scholar
  86. Göbel, S., Geiger, C., Heinze, C., & Marinos, D. (2010). Creating a virtual archery experience. In Proceedings of the International Conference on Advanced Visual Interfaces (pp. 337–340), Rome, Italy. ACM.Google Scholar
  87. Godbout, A., & Boyd, J. (2010). Corrective sonic feedback for speed skating: A case study. In Proceedings of the 16th International Conference on Auditory Display.Google Scholar
  88. Goodbody, S., & Wolpert, D. (1998). Temporal and amplitude generalization in motor learning. Journal of Neurophysiology, 79(4), 1825.PubMedGoogle Scholar
  89. Grey, J. M. (1975). An Exploration of Musical Timbre. PhD thesis, CCRMA Dept. of Music, Stanford University.Google Scholar
  90. Grond, F., Hermann, T., Verfaille, V., & Wanderley, M. (2010). Gesture in embodied communication and human computer interaction, LNAI 5934, chapter Methods for Effective Sonification of Clarinetists’ Ancillary Gestures (pp. 171–181). Springer.Google Scholar
  91. Guadagnoli, M., & Kohl, R. (2001). Knowledge of results for motor learning: Relationship between error estimation and knowledge of results frequency. Journal of Motor Behavior, 33(2), 217–224.PubMedCrossRefGoogle Scholar
  92. Guadagnoli, M. A., & Lee, T. D. (2004). Challenge point: A framework for conceptualizing the effects of various practice conditions in motor learning. Journal of Motor Behavior, 36(2), 212–224.PubMedCrossRefGoogle Scholar
  93. Hale, K., & Stanney, K. (2004). Deriving haptic design guidelines from human physiological, psychophysical, and neurological foundations. IEEE Computer Graphics and Applications, 24(2), 39.CrossRefGoogle Scholar
  94. Han, D.-W., & Shea, C. H. (2008). Auditory model: Effects on learning under blocked and random practice schedules. Research Quarterly for Exercise and Sport, 79(4), 476–486.PubMedCrossRefGoogle Scholar
  95. Haruno, M., Wolpert, D., & Kawato, M. (2001). Mosaic model for sensorimotor learning and control. Neural Computation, 13(10), 2201–2220.PubMedCrossRefGoogle Scholar
  96. Hecht, D., & Reiner, M. (2009). Sensory dominance in combinations of audio, visual and haptic stimuli. Experimental Brain Research, 193(2), 307–314.CrossRefGoogle Scholar
  97. Hecht, D., Reiner, M., & Karni, A. (2008). Enhancement of response times to bi-and tri-modal sensory stimuli during active movements. Experimental Brain Research, 185(4), 655–665.CrossRefGoogle Scholar
  98. Helmer, R., Farrow, D., Ball, K., Phillips, E., Farouil, A., & Blanchonette, I. (2011). A pilot evaluation of an electronic textile for lower limb monitoring and interactive biofeedback. Procedia Engineering, 13, 513–518.CrossRefGoogle Scholar
  99. Helmer, R., Farrow, D., Lucas, S., Higgerson, G., & Blanchonette, I. (2010). Can interactive textiles influence a novice’s throwing technique? Procedia Engineering, 2(2), 2985–2990.CrossRefGoogle Scholar
  100. Hermann, T., Honer, O., & Ritter, H. (2006). Acoumotion-an interactive sonification system for acoustic motion control. Lecture Notes in Computer Science, 3881, 312–323.CrossRefGoogle Scholar
  101. Hermann, T., & Hunt, A. (2005). Guest editors’ introduction: An introduction to interactive sonification. IEEE Multimedia, 12(2), 20–24.CrossRefGoogle Scholar
  102. Heuer, H., & Hegele, M. (2008). Constraints on visuo-motor adaptation depend on the type of visual feedback during practice. Experimental Brain Research, 185(1), 101–110.CrossRefGoogle Scholar
  103. Hidler, J., Nichols, D., Pelliccio, M., Brady, K., Campbell, D. D., Kahn, J. H., & Hornby, T. G. (2009). Multicenter randomized clinical trial evaluating the effectiveness of the lokomat in subacute stroke. Neurorehabilitation and Neural Repair, 23(1), 5.PubMedCrossRefGoogle Scholar
  104. Hinder, M. R., Tresilian, J. R., Riek, S., & Carson, R. G. (2008). The contribution of visual feedback to visuomotor adaptation: How much and when? Brain Research, 1197, 123–134.PubMedCrossRefGoogle Scholar
  105. Hinterberger, T., & Baier, G. (2005). Parametric orchestral sonification of eeg in real time. IEEE Multimedia, 12(2), 70–79.CrossRefGoogle Scholar
  106. Ho, C., Basdogan, C., Slater, M., Durlach, N., & Srinivasan, M. (1998). An experiment on the influence of haptic communication on the sense of being together. In BT Presence Workshop. Citeseer.Google Scholar
  107. Hogan, N. (1985). Impedance control: An approach to manipulation: Part III—Applications. Journal of Dynamic Systems, Measurement, and Control, 107, 17–24.CrossRefGoogle Scholar
  108. Holden, M. K. (2005). Virtual environments for motor rehabilitation: Review. Cyberpsychology & Behavior, 8, 187–211.
  109. Holden, M. K., & Dyar, T. (2002). Virtual environment training: A new tool for neurorehabilitation. Journal of Neurologic Physical Therapy, 26(2), 62–71.
  110. Holden, M., Todorov, E., Callahan, J., & Bizzi, E. (1999). Virtual environment training improves motor performance in two patients with stroke: Case report. Journal of Neurologic Physical Therapy, 23(2), 57–67.
  111. Hornby, T. G., Campbell, D. D., Kahn, J. H., Demott, T., Moore, J. L., & Roth, H. R. (2008). Enhanced gait-related improvements after therapist- versus robotic-assisted locomotor training in subjects with chronic stroke: A randomized controlled study. Stroke, 39(6), 1786–1792.
  112. Höver, R., Kósa, G., Székely, G., & Harders, M. (2009). Data-driven haptic rendering—from viscous fluids to visco-elastic solids. IEEE Transactions on Haptics, 2(1), 15–27.
  113. Huang, H., Ingalls, T., Olson, L., Ganley, K., Rikakis, T., & He, J. (2005). Interactive multimodal biofeedback for task-oriented neural rehabilitation. In 27th Annual International Conference of the Engineering in Medicine and Biology Society (IEEE-EMBS 2005) (pp. 2547–2550), Shanghai, China.
  114. Huang, H., Wolf, S. L., & He, J. (2006). Recent developments in biofeedback for neuromotor rehabilitation. Journal of Neuroengineering and Rehabilitation, 3(1), 1–12.
  115. Huegel, J. C., Celik, O., Israr, A., & O’Malley, M. K. (2009). Expertise-based performance measures in a virtual training environment. Presence: Teleoperators and Virtual Environments, 18(6), 449–467.
  116. Huegel, J., & O’Malley, M. K. (2010). Progressive haptic and visual guidance for training in a virtual dynamic task. In Haptics Symposium, 2010 IEEE (pp. 343–350). IEEE.
  117. Huet, M., Camachon, C., Fernandez, L., Jacobs, D. M., & Montagne, G. (2009). Self-controlled concurrent feedback and the education of attention towards perceptual invariants. Human Movement Science, 28(4), 450–467.
  118. Hummel, J., Hermann, T., Frauenberger, C., & Stockman, T. (2010). Interactive sonification of German wheel sports movement. In Proceedings of ISon 2010, 3rd Interactive Sonification Workshop.
  119. Hurley, S. R., & Lee, T. D. (2006). The influence of augmented feedback and prior learning on the acquisition of a new bimanual coordination pattern. Human Movement Science, 25(3), 339–348.
  120. Husemann, B., Muller, F., Krewer, C., Heller, S., & Koenig, E. (2007). Effects of locomotion training with assistance of a robot-driven gait orthosis in hemiparetic patients after stroke: A randomized controlled pilot study. Stroke, 38(2), 349–354.
  121. Israel, J., Campbell, D., Kahn, J., & Hornby, T. (2006). Metabolic costs and muscle activity patterns during robotic- and therapist-assisted treadmill walking in individuals with incomplete spinal cord injury. Physical Therapy, 86(11), 1466–1478.
  122. Janelle, C. M., Barba, D. A., Frehlich, S. G., Tennant, L. K., & Cauraugh, H. (1997). Maximizing performance feedback effectiveness through videotape replay and a self-controlled learning environment. Research Quarterly for Exercise and Sport, 68(4), 269–279.
  123. Janelle, C. M., Kim, J., & Singer, R. N. (1995). Subject-controlled performance feedback and learning of a closed motor skill. Perceptual and Motor Skills, 81(2), 627–634.
  124. Jansen, C., Oving, A., & Van Veen, H. (2004). Vibrotactile movement initiation. In Proceedings of Eurohaptics (pp. 110–117).
  125. Judkins, T., Oleynikov, D., & Stergiou, N. (2006). Real-time augmented feedback benefits robotic laparoscopic training. Studies in Health Technology and Informatics, 119, 243.
  126. Kapur, A., Tzanetakis, G., Virji-Babul, N., Wang, G., & Cook, P. R. (2005). A framework for sonification of Vicon motion capture data. In Proceedings of the 8th Conference on Digital Audio Effects, Madrid, Spain.
  127. Kelly, A., & Hubbard, M. (2000). Design and construction of a bobsled driver training simulator. Sports Engineering, 3(1), 13–24.
  128. Khatib, O. (1986). Real-time obstacle avoidance for manipulators and mobile robots. International Journal of Robotics Research, 5(1), 90.
  129. Kim, R. S., Seitz, A. R., & Shams, L. (2008). Benefits of stimulus congruency for multisensory facilitation of visual learning. PLoS One, 3(1), e1532.
  130. Kirby, R. (2009). Development of a real-time performance measurement and feedback system for alpine skiers. Sports Technology, 2(1–2), 43–52.
  131. Kleimann-Weiner, J., & Berger, J. (2006). The sound of one arm swinging: A model for multidimensional auditory display of physical motion. In Proceedings of the 12th International Conference on Auditory Display (ICAD) (pp. 278–280).
  132. Kockler, H., Scheef, L., Tepest, R., David, N., Bewernick, B. H., Newen, A., Schild, H. H., May, M., & Vogeley, K. (2010). Visuospatial perspective taking in a dynamic environment: Perceiving moving objects from a first-person perspective induces a disposition to act. Consciousness and Cognition, 19(3), 690–701.
  133. Kolb, B. (1995). Brain plasticity and behavior. Lawrence Erlbaum Associates.
  134. Konttinen, N., Mononen, K., Viitasalo, J., & Mets, T. (2004). The effects of augmented auditory feedback on psychomotor skill learning in precision shooting. Journal of Sport & Exercise Psychology, 26(2), 306–316.
  135. Koritnik, T., Bajd, T., & Munih, M. (2008). Virtual environment for lower-extremities training. Gait & Posture, 27(2), 323–330.
  136. Koritnik, T., Koenig, A., Bajd, T., Riener, R., & Munih, M. (2010). Comparison of visual and haptic feedback during training of lower extremities. Gait & Posture.
  137. Kotranza, A., Lind, D., Pugh, C., & Lok, B. (2009). Real-time in-situ visual feedback of task performance in mixed environments for learning joint psychomotor-cognitive tasks. In Proceedings of the 2009 8th IEEE International Symposium on Mixed and Augmented Reality (pp. 125–134). IEEE Computer Society.
  138. Kousidou, S., Tsagarakis, N. G., Smith, C., & Caldwell, D. G. (2007). Task-orientated biofeedback system for the rehabilitation of the upper limb. In IEEE 10th International Conference on Rehabilitation Robotics (ICORR) (pp. 376–384), Noordwijk, Netherlands.
  139. Kovacs, A. J., Boyle, J., Grutmatcher, N., & Shea, C. H. (2010). Coding of on-line and pre-planned movement sequences. Acta Psychologica, 133(2), 119–126.
  140. Kovacs, A., Buchanan, J., & Shea, C. (2008). Perceptual influences on Fitts’ law. Experimental Brain Research, 190(1), 99–103.
  141. Kovacs, A. J., & Shea, C. H. (2011). The learning of 90° continuous relative phase with and without Lissajous feedback: External and internally generated bimanual coordination. Acta Psychologica, 136(3), 311–320.
  142. Krakauer, J., & Mazzoni, P. (2011). Human sensorimotor learning: Adaptation, skill, and beyond. Current Opinion in Neurobiology, 21(4), 636–644.
  143. Kramer, G. (1994). Auditory display: Sonification, audification, and auditory interfaces. Reading, MA: Addison-Wesley.
  144. Krebs, H. I., Palazzolo, J. J., Dipietro, L., Ferraro, M., Krol, J., Rannekleiv, K., Volpe, B. T., & Hogan, N. (2003). Rehabilitation robotics: Performance-based progressive robot-assisted therapy. Autonomous Robots, 15(1), 7–20.
  145. Kruber, D. (1984). Untersuchung zur visuellen Gestaltung von Arbeitskarten im Sport [A study on the visual design of work cards in sport]. In R. Daugs (Ed.), 1. Berliner Workshop Medien im Sport: Visualisation sensomotorischer Lehrmedien (pp. 67–79). Berlin, Germany: Akademieschrift der FVA des DSB.
  146. Lai, Q., Shea, C., & Little, M. (2000). Effects of modeled auditory information on a sequential timing task. Research Quarterly for Exercise and Sport, 71(4), 349.
  147. Lambercy, O., Dovat, L., Gassert, R., Burdet, E., Teo, C., & Milner, T. (2007). A haptic knob for rehabilitation of hand function. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 15(3), 356–366.
  148. Lambercy, O., Dovat, L., Yun, H., Wee, S., Kuah, C., Chua, K., Gassert, R., Milner, T., Teo, C., & Burdet, E. (2011). Effects of a robot-assisted training of grasp and pronation/supination in chronic stroke: A pilot study. Journal of Neuroengineering and Rehabilitation, 8(63), 1–11.
  149. Lee, J., & Choi, S. (2010). Effects of haptic guidance and disturbance on motor learning: Potential advantage of haptic disturbance. In Haptics Symposium, 2010 IEEE (pp. 335–342). IEEE.
  150. Lee, M., Moseley, A., & Refshauge, K. (1990). Effect of feedback on learning a vertebral joint mobilization skill. Physical Therapy, 70(2), 97–102.
  151. Lee, T., Swinnen, S., & Verschueren, S. (1995). Relative phase alterations during bimanual skill acquisition. Journal of Motor Behavior, 27(3), 263–274.
  152. Lewiston, C. (2009). MaGKeyS: A haptic guidance keyboard system for facilitating sensorimotor training and rehabilitation. PhD thesis, Massachusetts Institute of Technology.
  153. Li, Y., Patoglu, V., & O’Malley, M. (2009). Negative efficacy of fixed gain error reducing shared control for training in virtual environments. ACM Transactions on Applied Perception, 6(1), 1–21.
  154. Lieberman, J., & Breazeal, C. (2007). TIKL: Development of a wearable vibrotactile feedback suit for improved human motor learning. IEEE Transactions on Robotics, 23, 919–926.
  155. Liebermann, D. G., Katz, L., Hughes, M. D., Bartlett, R. M., McClements, J., & Franks, I. M. (2002). Advances in the application of information technology to sport performance. Journal of Sports Sciences, 20(10), 755–769.
  156. Liu, Y. C. (2001). Comparative study of the effects of auditory, visual and multimodality displays on drivers’ performance in advanced traveller information systems. Ergonomics, 44(4), 425–442.
  157. Liu, J., Cramer, S., & Reinkensmeyer, D. (2006). Learning to perform a new movement with robotic assistance: Comparison of haptic guidance and visual demonstration. Journal of Neuroengineering and Rehabilitation, 3(1), 20.
  158. Liu, D., & Todorov, E. (2007). Evidence for the flexible sensorimotor strategies predicted by optimal feedback control. The Journal of Neuroscience, 27(35), 9354–9368.
  159. Liu, J., & Wrisberg, C. A. (1997). The effect of knowledge of results delay and the subjective estimation of movement form on the acquisition and retention of a motor skill. Research Quarterly for Exercise and Sport, 68(2), 145–151.
  160. Lo, A. C., Guarino, P. D., Richards, L. G., Haselkorn, J. K., Wittenberg, G. F., Federman, D. G., . . . Volpe, B. T. (2010). Robot-assisted therapy for long-term upper-limb impairment after stroke. The New England Journal of Medicine.
  161. Loureiro, R., Amirabdollahian, F., Coote, S., Stokes, E., & Harwin, W. (2001). Using haptics technology to deliver motivational therapies in stroke patients: Concepts and initial pilot studies. Master’s thesis.
  162. Loureiro, R., Amirabdollahian, F., Topping, M., Driessen, B., & Harwin, W. (2003). Upper limb robot mediated stroke therapy—GENTLE/s approach. Autonomous Robots, 15(1), 35–51.
  163. Loureiro, R. C. V., & Harwin, W. S. (2007). Reach & grasp therapy: Design and control of a 9-DOF robotic neuro-rehabilitation system. In IEEE 10th International Conference on Rehabilitation Robotics (ICORR) (pp. 757–763), Noordwijk, Netherlands.
  164. Marchal-Crespo, L., Furumasu, J., & Reinkensmeyer, D. J. (2010). A robotic wheelchair trainer: Design overview and a feasibility study. Journal of Neuroengineering and Rehabilitation, 7(1), 40–51.
  165. Marchal-Crespo, L., McHughen, S., Cramer, S. C., & Reinkensmeyer, D. J. (2009). The effect of haptic guidance, aging, and initial skill level on motor learning of a steering task. Experimental Brain Research, 201(2), 209–220.
  166. Marchal-Crespo, L., Rauter, G., Wyss, D., von Zitzewitz, J., & Riener, R. (2012). Synthesis and control of a parallel tendon-based robotic tennis trainer. In 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2012 (pp. 355–360), Rome, Italy. IEEE.
  167. Marchal-Crespo, L., & Reinkensmeyer, D. (2008a). Effect of robotic guidance on motor learning of a timing task. In 2nd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2008) (pp. 199–204). IEEE.
  168. Marchal-Crespo, L., & Reinkensmeyer, D. (2008b). Haptic guidance can enhance motor learning of a steering task. Journal of Motor Behavior, 40(6), 545–557.
  169. Marchal-Crespo, L., & Reinkensmeyer, D. J. (2009). Review of control strategies for robotic movement training after neurologic injury. Journal of Neuroengineering and Rehabilitation, 6(1), 20.
  170. Marschall, F., Bund, A., & Wiemeyer, J. (2007). Does frequent feedback really degrade learning? A meta-analysis. E-Journal Bewegung und Training, 1, 75–86.
  171. Maslovat, D., Brunke, K. M., Chua, R., & Franks, I. M. (2009). Feedback effects on learning a novel bimanual coordination pattern: Support for the guidance hypothesis. Journal of Motor Behavior, 41(1), 45–54.
  172. Maslovat, D., Chua, R., Lee, T., & Franks, I. (2006). Anchoring strategies for learning a bimanual coordination pattern. Journal of Motor Behavior, 38(2), 101–117.
  173. Maslovat, D., Chua, R., Lee, T., & Franks, I. (2004). Contextual interference: Single task versus multi-task learning. Motor Control, 8(2), 213.
  174. Mauney, L. M., & Walker, B. N. (2007). Individual differences and the field of auditory display: Past research, a present study, and an agenda for the future. In 13th International Conference on Auditory Display, Montreal, Canada.
  175. Mayr, A., Kofler, M., Quirbach, E., Matzak, H., Frohlich, K., & Saltuari, L. (2007). Prospective, blinded, randomized crossover study of gait rehabilitation in stroke patients using the Lokomat gait orthosis. Neurorehabilitation and Neural Repair, 21(4), 307–314.
  176. McNeely, W. A., Puterbaugh, K. D., & Troy, J. J. (2005). Six degree-of-freedom haptic rendering using voxel sampling. In ACM SIGGRAPH 2005 Courses (pp. 401–408), New York, NY, USA. ACM.
  177. McNeely, W. A., Puterbaugh, K. D., & Troy, J. J. (2006). Voxel-based 6-DOF haptic rendering improvements. Haptics-e, 3(7).
  178. Mestre, D., Maïano, C., Dagonneau, V., & Mercier, C. (2011). Does virtual reality enhance exercise performance, enjoyment, and dissociation? An exploratory study on a stationary bike apparatus. Presence: Teleoperators and Virtual Environments, 20(1), 1–14.
  179. Metzger, J., Lambercy, O., & Gassert, R. (2012). High-fidelity rendering of virtual objects with the ReHapticKnob—novel avenues in robot-assisted rehabilitation of hand function. In Haptics Symposium (HAPTICS), 2012 IEEE (pp. 51–56). IEEE.
  180. Mihelj, M., Nef, T., & Riener, R. (2007). A novel paradigm for patient-cooperative control of upper-limb rehabilitation robots. Advanced Robotics, 21(8), 843–867.
  181. Miles, H. C., Pop, S. R., Watt, S. J., Lawrence, G. P., & John, N. W. (2012). A review of virtual environments for training in ball sports. Computers and Graphics, 36(6), 714–726.
  182. Milot, M. H., Marchal-Crespo, L., Green, C. S., Cramer, S. C., & Reinkensmeyer, D. J. (2010). Comparison of error-amplification and haptic-guidance training techniques for learning of a timing-based motor task by healthy individuals. Experimental Brain Research, 201(2), 119–131.
  183. Mima, T., Sadato, N., Yazawa, S., Hanakawa, T., Fukuyama, H., Yonekura, Y., & Shibasaki, H. (1999). Brain structures related to active and passive finger movements in man. Brain, 122(10), 1989–1997.
  184. Minogue, J., & Jones, M. G. (2006). Haptics in education: Exploring an untapped sensory modality. Review of Educational Research, 76(3), 3–17.
  185. Molier, B., Van Asseldonk, E., Hermens, H., & Jannink, M. (2010). Nature, timing, frequency and type of augmented feedback: Does it influence motor relearning of the hemiparetic arm after stroke? A systematic review. Disability and Rehabilitation, 32(22), 1799–1809.
  186. Mononen, K. (2007). The effect of augmented feedback on motor skill learning in shooting. PhD thesis, University of Jyväskylä, Finland.
  187. Morris, D., Tan, H., Barbagli, F., Chang, T., & Salisbury, K. (2007). Haptic feedback enhances force skill learning. In Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (World Haptics 2007) (pp. 21–26).
  188. Multon, F., Hoyet, L., Komura, T., & Kulpa, R. (2007). Interactive control of physically-valid aerial motion: Application to VR training system for gymnasts. In Proceedings of the 2007 ACM Symposium on Virtual Reality Software and Technology (pp. 77–80). ACM.
  189. Munhall, K., Gribble, P., Sacco, L., & Ward, M. (1996). Temporal constraints on the McGurk effect. Attention, Perception, & Psychophysics, 58, 351–362.
  190. Mussa-Ivaldi, F. A., Hogan, N., & Bizzi, E. (1985). Neural, mechanical, and geometric factors subserving arm posture in humans. Journal of Neuroscience, 5(10), 2732–2745.
  191. Nakamura, A., Tabata, S., Ueda, T., Kiyofuji, S., & Kuno, Y. (2005). Multimodal presentation method for a dance training system. In CHI ’05 Extended Abstracts on Human Factors in Computing Systems (pp. 1685–1688). ACM.
  192. Nef, T., Mihelj, M., & Riener, R. (2007). ARMin: A robot for patient-cooperative arm therapy. Medical and Biological Engineering and Computing, 45(9), 887–900.
  193. Nesbitt, K. (2003). Designing multi-sensory displays for abstract data. PhD thesis, School of Information Technologies, University of Sydney, Australia.
  194. Neuhoff, J. G., Kramer, G., & Wayand, J. (2002). Pitch and loudness interact in auditory displays: Can the data get lost in the map? Journal of Experimental Psychology: Applied, 8(1), 17–25.
  195. Neuhoff, J. G., & Wayand, J. (2002). Pitch change, sonification, and musical expertise: Which way is up? In International Conference on Auditory Display, Kyoto, Japan.
  196. O’Malley, M. K., & Gupta, A. (2008). Haptic interfaces (pp. 25–74). Morgan Kaufmann.
  197. Oakley, I., Brewster, S., & Gray, P. (2001). Can you feel the force? An investigation of haptic collaboration in shared editors. In Proceedings of EuroHaptics (pp. 54–59).
  198. Oakley, I., McGee, M., Brewster, S., & Gray, P. (2000). Putting the feel in ‘look and feel’. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 415–422). New York, NY, USA: ACM.
  199. Oakley, I., & O’Modhrain, S. (2005). Tilt to scroll: Evaluating a motion based vibrotactile mobile interface. In First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC ’05) (pp. 40–49). Los Alamitos, CA, USA: IEEE Computer Society.
  200. Ogden, R., & Franz, S. (1917). On cerebral motor control: The recovery from experimentally produced hemiplegia. Psychobiology, 1(1), 33.
  201. Ohta, K., Umegaki, K., Murofushi, K., A., K., & Sakurai, S. (2009). Training aid system for hammer throw based on accelerometry. In Proceedings of the XXIInd Congress of the International Society of Biomechanics.
  202. Ostendorf, C. G., & Wolf, S. L. (1981). Effect of forced use of the upper extremity of a hemiplegic patient on changes in function. Physical Therapy, 61(7), 1022–1028.
  203. Ouellette, M., LeBrasseur, N., Bean, J., Phillips, E., Stein, J., Frontera, W., & Fielding, R. (2004). High-intensity resistance training improves muscle strength, self-reported function, and disability in long-term stroke survivors. Stroke, 35(6), 1404.
  204. Oviatt, S., Coulston, R., & Lunsford, R. (2004). When do we interact multimodally? Cognitive load and multimodal communication patterns. In Proceedings of the 6th International Conference on Multimodal Interfaces (pp. 129–136). ACM.
  205. Palluel-Germain, R., Bara, F., de Boisferon, A. H., Hennion, B., Gouagout, P., & Gentaz, E. (2007). A visuo-haptic device—Telemaque—increases kindergarten children’s handwriting acquisition. In World Haptics Conference (pp. 72–77), Los Alamitos, CA, USA. IEEE Computer Society.
  206. Park, J. H., Shea, C. H., & Wright, D. L. (2000). Reduced-frequency concurrent and terminal feedback: A test of the guidance hypothesis. Journal of Motor Behavior, 32(3), 287–296.
  207. Patel, K., Bailenson, J., Hack-Jung, S., Diankov, R., & Bajcsy, R. (2006). The effects of fully immersive virtual reality on the learning of physical tasks. In Proceedings of the 9th Annual International Workshop on Presence (pp. 87–94), Ohio, USA.
  208. Patoglu, V., Li, Y., & O’Malley, M. (2009). On the efficacy of haptic guidance schemes for human motor learning. In World Congress on Medical Physics and Biomedical Engineering (pp. 203–206), Munich, Germany. Springer.
  209. Patton, J., Stoykov, M., Kovic, M., & Mussa-Ivaldi, F. (2006). Evaluation of robotic training forces that either enhance or reduce error in chronic hemiparetic stroke survivors. Experimental Brain Research, 168(3), 368–383.
  210. Pauletto, S., & Hunt, A. (2006). The sonification of EMG data. In Proceedings of the 12th International Conference on Auditory Display, London, UK.
  211. Petrofsky, J. (2001). The use of electromyogram biofeedback to reduce Trendelenburg gait. European Journal of Applied Physiology, 85(5), 491–495.
  212. Petzold, B., Zaeh, M. F., Faerber, B., Deml, B., Egermeier, H., Schilp, J., & Clarke, S. (2004). A study on visual, auditory, and haptic feedback for assembly tasks. Presence: Teleoperators and Virtual Environments, 13(1), 16–21.
  213. Powell, D., & O’Malley, M. (2011). Efficacy of shared-control guidance paradigms for robot-mediated training. In World Haptics Conference (WHC), 2011 IEEE (pp. 427–432). IEEE.
  214. Powell, D., & O’Malley, M. K. (2012). The task-dependent efficacy of shared-control haptic guidance paradigms. IEEE Transactions on Haptics, 5(3), 208–219.
  215. Prange, G., Jannink, M., Groothuis-Oudshoorn, C., Hermens, H., & Ijzerman, M. (2006). Systematic review of the effect of robot-aided therapy on recovery of the hemiparetic arm after stroke. Journal of Rehabilitation Research and Development, 43(2), 171.
  216. Proteau, L. (1992). On the specificity of learning and the role of visual information for movement control. In Vision and motor control (Vol. 85, Chap. 4, pp. 67–103). Amsterdam: North-Holland.
  217. Proteau, L. (2005). Visual afferent information dominates other sources of afferent information during mixed practice of a video-aiming task. Experimental Brain Research, 161, 441–456.
  218. Proteau, L., & Isabelle, G. (2002). On the role of visual afferent information for the control of aiming movements toward targets of different sizes. Journal of Motor Behavior, 34(4), 367–384.
  219. Ranganathan, R., & Newell, K. M. (2009). Influence of augmented feedback on coordination strategies. Journal of Motor Behavior, 41(4), 317–330.
  220. Rath, M., & Rohs, M. (2006). Explorations in sound for tilting-based interfaces. In Proceedings of the 8th International Conference on Multimodal Interfaces (pp. 295–301). New York, NY, USA: ACM.
  221. Rauter, G., Baur, K., Sigrist, R., Riener, R., & Wolf, P. (2010). Robotergestütztes Bewegungslernen mit dem M3-Trainer — Vorstellung des Konzepts [Robot-assisted motor learning with the M3 trainer: Presentation of the concept]. In Sportinformatik trifft Sporttechnologie: Tagung der dvs-Sektion Sportinformatik in Kooperation mit der deutschen interdisziplinären Vereinigung für Sporttechnologie, Darmstadt, Germany.
  222. Rauter, G., Brunschweiler, A., Wellner, M., von Zitzewitz, J., Riener, R., & Wolf, P. (2009). Rowing novices can only partly profit from acoustic and visual display of a reference movement of an oar blade. In Progress in Motor Control VII, Marseille, France.
  223. Rauter, G., Sigrist, R., Marchal-Crespo, L., Vallery, H., Riener, R., & Wolf, P. (2011). Assistance or challenge? Filling a gap in user-cooperative control. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3068–3073).
  224. Rauter, G., von Zitzewitz, J., Duschau-Wicke, A., Vallery, H., & Riener, R. (2010). A tendon based parallel robot applied to motor learning in sports. In 3rd IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2010 (pp. 82–87), Tokyo, Japan.
  225. Reinkensmeyer, D., Akoner, O., Ferris, D., & Gordon, K. (2009). Slacking by the human motor system: Computational models and implications for robotic orthoses. In Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2009) (pp. 2129–2132). IEEE.
  226. Reinkensmeyer, D. J., Emken, J. L., & Cramer, S. C. (2004). Robotics, motor learning, and neurologic recovery. Annual Review of Biomedical Engineering, 6, 497–525.
  227. Reinkensmeyer, D. J., & Patton, J. L. (2009). Can robots help the learning of skilled actions? Exercise and Sport Sciences Reviews, 37(1), 43.
  228. Reisman, D. S., Wityk, R., Silver, K., & Bastian, A. J. (2007). Locomotor adaptation on a split-belt treadmill can improve walking symmetry post-stroke. Brain, 130(7), 1861–1872.
  229. Ribeiro, D. C., Sole, G., Abbott, J. H., & Milosavljevic, S. (2011). Extrinsic feedback and management of low back pain: A critical review of the literature. Manual Therapy, 16(3), 231–239.
  230. Riskowski, J. L., Mikesky, A. E., Bahamonde, R. E., & Burr, D. B. (2009). Design and validation of a knee brace with feedback to reduce the rate of loading. Journal of Biomechanical Engineering, 131(8), 084503.
  231. Robin, C., Toussaint, L., Blandin, Y., & Proteau, L. (2005). Specificity of learning in a video-aiming task: Modifying the salience of dynamic visual cues. Journal of Motor Behavior, 37(5), 367–376.
  232. Rochat, P., & Senders, S. (1991). Active touch in infancy: Action systems in development. In Infant attention: Biological constraints and the influence of experience (pp. 412–442).
  233. Ronsse, R., Koopman, B., Vitiello, N., Lenzi, T., De Rossi, S. M. M., van den Kieboom, J., . . . Ijspeert, A. J. (2011a). Oscillator-based walking assistance: A model-free approach. In IEEE International Conference on Rehabilitation Robotics (ICORR) (pp. 1–6). IEEE.
  234. Ronsse, R., Puttemans, V., Coxon, J. P., Goble, D. J., Wagemans, J., Wenderoth, N., & Swinnen, S. P. (2011b). Motor learning with augmented feedback: Modality-dependent behavioral and neural consequences. Cerebral Cortex, 21(6), 1283–1294.
  235. Ronsse, R., Vitiello, N., Lenzi, T., van den Kieboom, J., Carrozza, M. C., & Ijspeert, A. J. (2010). Adaptive oscillators with human-in-the-loop: Proof of concept for assistance and rehabilitation. In 3rd IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2010 (pp. 668–674). IEEE.
  236. Ronsse, R., Vitiello, N., Lenzi, T., van den Kieboom, J., Carrozza, M. C., & Ijspeert, A. J. (2011c). Human-robot synchrony: Flexible assistance using adaptive oscillators. IEEE Transactions on Bio-Medical Engineering, 58(4), 1001.
  237. Rosenthal, J., Edwards, N., Villanueva, D., Krishna, S., McDaniel, T., & Panchanathan, S. (2011). Design, implementation, and case study of a pragmatic vibrotactile belt. IEEE Transactions on Instrumentation and Measurement, 60(1), 114–125.
  238. Royet, J. (1991). Stereology: A method for analyzing images. Progress in Neurobiology, 37(5), 433–474.
  239. Ruffaldi, E., Filippeschi, A., Avizzano, C., Bardy, B., Gopher, D., & Bergamasco, M. (2011). Feedback, affordances, and accelerators for training sports in virtual environments. Presence: Teleoperators and Virtual Environments, 20(1), 33–46.
  240. Ruffaldi, E., Filippeschi, A., Frisoli, A., Sandoval, O., Avizzano, C., & Bergamasco, M. (2009). Vibrotactile perception assessment for a rowing training system. In Third Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (World Haptics 2009) (pp. 350–355). Salt Lake City, UT: IEEE Computer Society.
  241. Ruffaldi, E., Gonzales, O., Filippeschi, A., Frisoli, A., Avizzano, C., & Bergamasco, M. (2009). Integration of multimodal technologies for a rowing platform. In Proceedings of the 5th IEEE International Conference on Mechatronics, Malaga, Spain.
  242. Sainburg, R. L., & Ghez, C. (1995). Limitations in the learning and generalization of multijoint dynamics. 21, 686.
  243. Salamin, P., Tadi, T., Blanke, O., Vexo, F., & Thalmann, D. (2010). Quantifying effects of exposure to the third and first-person perspectives in virtual-reality-based training. IEEE Transactions on Learning Technologies, 3(3), 272–276.
  244. Salisbury, J., & Srinivasan, M. (1997). Phantom-based haptic interaction with virtual objects. IEEE Computer Graphics and Applications, 17(5), 6–10.
  245. Salmoni, A. W., Schmidt, R. A., & Walter, C. B. (1984). Knowledge of results and motor learning: A review and critical reappraisal. Psychological Bulletin, 95(3), 355–386.
  246. Sann, C., & Streri, A. (2007). Perception of object shape and texture in human newborns: Evidence from cross-modal transfer tasks. Developmental Science, 10(3), 399–410.
  247. Schack, T., Bockemühl, T., Schütz, C., & Ritter, H. (2008). Augmented Reality im Techniktraining — experimentelle Implementation einer neuen Technologie in den Leistungssport [Augmented reality in technique training: Experimental implementation of a new technology in elite sports]. Technical report, BISp-Jahrbuch Forschungsförderung.
  248. Schack, T., & Heinen, T. (2007). Integriertes Online-Feedback im Spitzensport — Neue Wege eines medienbasierten Techniktrainings (Int-O-Feed) [Integrated online feedback in elite sports: New approaches to media-based technique training (Int-O-Feed)]. Technical report, BISp-Jahrbuch Forschungsförderung.
  249. Schaffert, N., Barrass, K., & Effenberg, A. (2009). Exploring function and aesthetics in sonification for elite sports. In Proceedings of the Second International Conference on Music Communication Science.
  250. Schaffert, N., Mattes, K., & Effenberg, A. (2009). Sound design for the purposes of movement optimisation in elite sport (using the example of rowing). In Proceedings of the 15th International Conference on Auditory Display, Copenhagen, Denmark.
  251. Scheidt, R., Reinkensmeyer, D., Conditt, M., Rymer, W., & Mussa-Ivaldi, F. (2000). Persistence of motor adaptation during constrained, multi-joint, arm movements. Journal of Neurophysiology, 84(2), 853.
  252. Schmidt, R. A. (1991). Frequent augmented feedback can degrade learning: Evidence and interpretations. Tutorials in Motor Neuroscience, 62, 59–75.
  253. Schmidt, R., & Wrisberg, C. (2008). Motor learning and performance: A situation-based learning approach. Human Kinetics.
  254. Schmidt, R. A., & Wulf, G. (1997). Continuous concurrent feedback degrades skill learning: Implications for training and simulation. Human Factors, 39(4), 509–525.
  255. Schmidt, R. A., Young, D. E., Swinnen, S., & Shapiro, D. C. (1989). Summary knowledge of results for skill acquisition: Support for the guidance hypothesis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(2), 352–359.
  256. Secoli, R., Milot, M., Rosati, G., & Reinkensmeyer, D. (2011). Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke. Journal of Neuroengineering and Rehabilitation, 8(1), 1–10.
  257. Seitz, A. R., & Dinse, H. R. (2007). A common framework for perceptual learning. Current Opinion in Neurobiology, 17(2), 148–153.
  258. Seitz, A. R., Kim, R., & Shams, R. (2006). Sound facilitates visual learning. Current Biology, 16(14), 1422–1427.PubMedCrossRefGoogle Scholar
  259. Shadmehr, R., & Moussavi, Z. M. K. (2000). Spatial generalization from learning dynamics of reaching movements. Journal of Neuroscience, 20(20), 7807–7815.
  260. Shadmehr, R., & Mussa-Ivaldi, F. (1994). Adaptive representation of dynamics during learning of a motor task. Journal of Neuroscience, 14(5), 3208–3224.
  261. Shams, L., & Seitz, A. R. (2008). Benefits of multisensory learning. Trends in Cognitive Sciences, 12(11), 411–417.
  262. Shea, C. H., & Wulf, G. (1999). Enhancing motor learning through external-focus instructions and feedback. Human Movement Science, 18(4), 553–571.
  263. Shea, C., Wulf, G., Park, J., & Gaunt, B. (2001). Effects of an auditory model on the learning of relative and absolute timing. Journal of Motor Behavior, 33(2), 127–138.
  264. Shmuelof, L., Krakauer, J. W., & Mazzoni, P. (2012). How is a motor skill learned? Change and invariance at the levels of task success and trajectory control. Journal of Neurophysiology.
  265. Sielhorst, T., Feuerstein, M., & Navab, N. (2008). Advanced medical displays: A literature review of augmented reality. Journal of Display Technology, 4(4), 451–467.
  266. Sigrist, R., Rauter, G., Riener, R., & Wolf, P. (2011a). Self-controlled feedback for a complex motor task. In BIO Web of Conferences: The International Conference SKILLS (Vol. 1), Montpellier, France.
  267. Sigrist, R., Schellenberg, J., Rauter, G., Broggi, S., Riener, R., & Wolf, P. (2011b). Visual and auditory augmented concurrent feedback in a complex motor task. Presence: Teleoperators and Virtual Environments, 20(1), 15–32.
  268. Smethurst, C. J., & Carson, R. G. (2001). The acquisition of movement skills: Practice enhances the dynamic stability of bimanual coordination. Human Movement Science, 20(4–5), 499–529.
  269. Smith, R. M., & Loschner, C. (2002). Biomechanics feedback for rowing. Journal of Sports Sciences, 20(10), 783–791.
  270. Smith, D. R., & Walker, B. N. (2005). Effects of auditory context cues and training on performance of a point estimation sonification task. Applied Cognitive Psychology, 19(8), 1065–1087.
  271. Snodgrass, S. J., Rivett, D. A., Robertson, V. J., & Stojanovski, E. (2010). Real-time feedback improves accuracy of manually applied forces during cervical spine mobilisation. Manual Therapy, 15, 19–25.
  272. Spelmezan, D., Hilgers, A., & Borchers, J. (2009). A language of tactile motion instructions. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI ’09 (pp. 29:1–29:5). New York, NY, USA: ACM.
  273. Spelmezan, D., Jacobs, M., Hilgers, A., & Borchers, J. (2009). Tactile motion instructions for physical activities. In Proceedings of the 27th International Conference on Human Factors in Computing Systems, CHI ’09 (pp. 2243–2252). New York, NY, USA: ACM.
  274. Spinks, W. L., & Smith, R. M. (1994). The effects of kinetic information feedback on maximal rowing performance. Journal of Human Movement Studies, 27(1), 17–36.
  275. Stepp, C., & Matsuoka, Y. (2011). Object manipulation improvements due to single session training outweigh the differences among stimulation sites during vibrotactile feedback. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 19(6), 677–685.
  276. Stepp, C., & Matsuoka, Y. (2012). Vibrotactile sensory substitution for object manipulation: Amplitude versus pulse train frequency modulation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 20(1), 31–37.
  277. Sterr, A., Elbert, T., Berthold, I., Kölbel, S., Rockstroh, B., & Taub, E. (2002). Longer versus shorter daily constraint-induced movement therapy of chronic hemiparesis: An exploratory study. Archives of Physical Medicine and Rehabilitation, 83(10), 1374–1377.
  278. Sülzenbrück, S., & Heuer, H. (2011). Type of visual feedback during practice influences the precision of the acquired internal model of a complex visuo-motor transformation. Ergonomics, 54(1), 34–46.
  279. Sun, M., Ren, X., & Cao, X. (2011). Effects of multimodal error feedback on human performance in steering tasks. Information and Media Technologies, 6(1), 193–201.
  280. Swindells, C., Unden, A., & Sang, T. (2003). Torquebar: An ungrounded haptic feedback device. In Proceedings of the 5th International Conference on Multimodal Interfaces (pp. 52–59). New York, NY, USA: ACM.
  281. Swinnen, S. P., Lee, T. D., Verschueren, S., Serrien, D. J., & Bogaerds, H. (1997). Interlimb coordination: Learning and transfer under different feedback conditions. Human Movement Science, 16(6), 749–785.
  282. Swinnen, S. P., Schmidt, R. A., Nicholson, D. E., & Shapiro, D. C. (1990). Information feedback for skill acquisition: Instantaneous knowledge of results degrades learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(4), 706–716.
  283. Swinnen, S., Verschueren, S., Bogaerts, H., Dounskaia, N., Lee, T., Stelmach, G., & Serrien, D. (1998). Age-related deficits in motor learning and differences in feedback processing during the production of a bimanual coordination pattern. Cognitive Neuropsychology, 15(5), 439–466.
  284. Takahashi, C. D., Nemet, D., Rose-Gottron, C. M., Larson, J. K., Cooper, D. M., & Reinkensmeyer, D. J. (2003). Neuromotor noise limits motor performance, but not motor adaptation, in children. Journal of Neurophysiology, 90(2), 703–711.
  285. Takahashi, C. D., Scheidt, R. A., & Reinkensmeyer, D. J. (2001). Impedance control and internal model formation when reaching in a randomly varying dynamical environment. Journal of Neurophysiology, 86(2), 1047–1051.
  286. Takeuchi, T. (1993). Auditory information in playing tennis. Perceptual and Motor Skills, 76, 1323–1328.
  287. Tang, J., Carignan, C., & Olsson, P. (2006). Tandem canoeing over the internet using haptic feedback. In Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (pp. 281–285). Alexandria, Virginia, USA.
  288. Tannen, R. S., Nelson, W. T., Bolia, R. S., Warm, J., & Dember, W. N. (2004). Evaluating adaptive multisensory displays for target localization in a flight task. The International Journal of Aviation Psychology, 14(3), 297–312.
  289. Thoroughman, K., & Shadmehr, R. (2000). Learning of action through adaptive combination of motor primitives. Nature, 407(6805), 742.
  290. Timmermans, A. A. A., Seelen, H. A. M., Willmann, R. D., & Kingma, H. (2009). Technology-assisted training of arm-hand skills in stroke: Concepts on reacquisition of motor control and therapist guidelines for rehabilitation technology design. Journal of Neuroengineering and Rehabilitation, 6, 1.
  291. Todorov, E. (2004). Optimality principles in sensorimotor control. Nature Neuroscience, 7, 907–915.
  292. Todorov, E., & Jordan, M. I. (2002). Optimal feedback control as a theory of motor coordination. Nature Neuroscience, 5, 1226–1235.
  293. Todorov, E., Shadmehr, R., & Bizzi, E. (1997). Augmented feedback presented in a virtual environment accelerates learning of a difficult motor task. Journal of Motor Behavior, 29(2), 147–158.
  294. Tzetzis, G., Votsis, E., & Kourtessis, T. (2008). The effect of different corrective feedback methods on the outcome and self confidence of young athletes. Journal of Sports Science and Medicine, 7, 371–378.
  295. Underwood, S. M. (2009). Effects of augmented real-time auditory feedback on top-level precision shooting performance. Master’s thesis, University of Kentucky.
  296. Utley, A., & Astill, S. (2008). Motor control, learning and development. Bios Instant Notes. Taylor & Francis.
  297. Vallery, H., Duschau-Wicke, A., & Riener, R. (2009a). Generalized elasticities improve patient-cooperative control of rehabilitation robots. In IEEE International Conference on Rehabilitation Robotics, ICORR 2009 (pp. 535–541).
  298. Vallery, H., Guidali, M., Duschau-Wicke, A., & Riener, R. (2009b). Patient-cooperative control: Providing safe support without restricting movement. In World Congress on Medical Physics and Biomedical Engineering (pp. 166–169). Munich, Germany: Springer.
  299. van Beers, R. J. (2009). Motor learning is optimally tuned to the properties of motor noise. Neuron, 63(3), 406–417.
  300. van Beers, R. J., Sittig, A. C., & Gon, J. J. (1999). Integration of proprioceptive and visual position-information: An experimentally supported model. Journal of Neurophysiology, 81(3), 1355.
  301. Van der Linde, R. Q., Lammertse, P., Frederiksen, E., & Ruiter, B. (2002). The HapticMaster, a new high-performance haptic interface. In Proceedings of Eurohaptics (pp. 1–5). Citeseer.
  302. Van der Linden, D. W., Cauraugh, J. H., & Greene, T. A. (1993). The effect of frequency of kinetic feedback on learning an isometric force production task in nondisabled subjects. Physical Therapy, 73(2), 79–87.
  303. van Erp, J. B. F., Saturday, I., & Jansen, C. (2006). Application of tactile displays in sports: Where to, how and when to move. In Proceedings of the Eurohaptics International Conference.
  304. van Erp, J., & van Veen, H. (2004). Vibrotactile in-vehicle navigation system. Transportation Research Part F: Traffic Psychology and Behaviour, 7(4–5), 247–256.
  305. van Vliet, P. M., & Wulf, G. (2006). Extrinsic feedback for motor learning after stroke: What is the evidence? Disability and Rehabilitation, 28(13), 831–840.
  306. Varni, G., Dubus, G., Oksanen, S., Volpe, G., Fabiani, M., Bresin, R., … Camurri, A. (2011). Interactive sonification of synchronisation of motoric behaviour in social active listening to music with mobile devices. Journal on Multimodal User Interfaces, 1–17.
  307. Viviani, P., & Flash, T. (1995). Minimum-jerk, two-thirds power law, and isochrony: Converging approaches to movement planning. Journal of Experimental Psychology: Human Perception and Performance, 21(1), 32–53.
  308. Vogeley, K., May, M., Ritzl, A., Falkai, P., Zilles, K., & Fink, G. (2004). Neural correlates of first-person perspective as one constituent of human self-consciousness. Journal of Cognitive Neuroscience, 16(5), 817–827.
  309. Vogt, K. (2008). Sonification in computational physics — QCD-audio. In Proceedings of SysMus, 1st International Conference of Students of Systematic Musicology, Graz, Austria.
  310. Vogt, K., Pirró, D., Kobenz, I., Höldrich, R., & Eckel, G. (2009). Physiosonic — movement sonification as auditory feedback. In Proceedings of the 15th International Conference on Auditory Display, Copenhagen, Denmark.
  311. Vogt, K., Pirró, D., Kobenz, I., Höldrich, R., & Eckel, G. (2010). Physiosonic — evaluated movement sonification as auditory feedback in physiotherapy. In S. Ystad, M. Aramaki, R. Kronland-Martinet, & K. Jensen (Eds.), Auditory display (pp. 103–120). Berlin/Heidelberg: Springer.
  312. von Zitzewitz, J., Wolf, P., Novakovic, V., Wellner, M., Rauter, G., Brunschweiler, A., & Riener, R. (2008). Real-time rowing simulator with multimodal feedback. Sports Technology, 1(6), 257–266.
  313. Walker, B. N., & Nees, M. A. (2011). Theory of sonification. In The Sonification Handbook. New York: Academic Press.
  314. Wallis, I., Ingalls, T., Rikakis, T., Olson, L., Chen, Y., Xu, W., & Sundaram, H. (2007). Real-time sonification of movement for an immersive stroke rehabilitation environment. In Proceedings of the 13th International Conference on Auditory Display, Montréal, Canada.
  315. Wei, K., & Körding, K. (2009). Relevance of error: What drives motor adaptation? Journal of Neurophysiology, 101(2), 655–664.
  316. Weiller, C., Jüptner, M., Fellows, S., Rijntjes, M., Leonhardt, G., Kiebel, S., Müller, S., Diener, H., Thilmann, A., et al. (1996). Brain representation of active and passive movements. NeuroImage, 4(2), 105–110.
  317. Welch, R. B., & Warren, D. H. (1980). Immediate perceptual response to intersensory discrepancy. Psychological Bulletin, 88(3), 638–667.
  318. Wellner, M., Schaufelberger, A., von Zitzewitz, J., & Riener, R. (2008). Evaluation of visual and auditory feedback in virtual obstacle walking. Presence: Teleoperators and Virtual Environments, 17(5), 512–524.
  319. Wickens, C. D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2), 159–177.
  320. Wierinck, E., Puttemans, V., Swinnen, S., & van Steenberghe, D. (2005). Effect of augmented visual feedback from a virtual reality simulation system on manual dexterity training. European Journal of Dental Education, 9(1), 10.
  321. Winstein, C. J. (1991). Knowledge of results and motor learning — implications for physical therapy. Physical Therapy, 71(2), 140–149.
  322. Winstein, C. J., Pohl, P. S., Cardinale, C., Green, A., Scholtz, L., & Waters, C. S. (1996). Learning a partial-weight-bearing skill: Effectiveness of two forms of feedback. Physical Therapy, 76(9), 985–993.
  323. Wishart, L. R., Lee, T. D., Cunningham, S. J., & Murdoch, J. E. (2002). Age-related differences and the role of augmented visual feedback in learning a bimanual coordination pattern. Acta Psychologica, 110(2–3), 247–263.
  324. Wolf, S., Lecraw, D., Barton, L., & Jann, B. (1989). Forced use of hemiplegic upper extremities to reverse the effect of learned nonuse among chronic stroke and head-injured patients. Experimental Neurology, 104(2), 125–132.
  325. Wolpert, D. M., Diedrichsen, J., & Flanagan, J. R. (2011). Principles of sensorimotor learning. Nature Reviews Neuroscience, 12, 739–749.
  326. Wolpert, D., & Flanagan, J. (2010). Motor learning. Current Biology, 20(11), R467–R472.
  327. Wolpert, D., Ghahramani, Z., & Flanagan, J. (2001). Perspectives and problems in motor learning. Trends in Cognitive Sciences, 5(11), 487–494.
  328. Wulf, G. (2007a). Attentional focus and motor learning: A review of 10 years of research. E-Journal Bewegung und Training, 1, 4–14.
  329. Wulf, G. (2007b). Self-controlled practice enhances motor learning: Implications for physiotherapy. Physiotherapy, 93(2), 96–101.
  330. Wulf, G., Hörger, M., & Shea, C. H. (1999). Benefits of blocked over serial feedback on complex motor skill learning. Journal of Motor Behavior, 31(1), 95–103.
  331. Wulf, G., & Shea, C. H. (2002). Principles derived from the study of simple skills do not generalize to complex skill learning. Psychonomic Bulletin & Review, 9(2), 185–211.
  332. Wulf, G., Shea, C., & Lewthwaite, R. (2010). Motor skill learning and performance: A review of influential factors. Medical Education, 44(1), 75–84.
  333. Wulf, G., Shea, C. H., & Matschiner, S. (1998). Frequent feedback enhances complex motor skill learning. Journal of Motor Behavior, 30(2), 180–192.
  334. Yamamoto, G., Shiraki, K., Takahata, M., Sakane, Y., & Takebayashi, Y. (2004). Multimodal knowledge for designing new sound environments. In The International Conference on Human Computer Interaction with Mobile Devices and Services. Citeseer.
  335. Yang, X.-D., Bischof, W. F., & Boulanger, P. (2008). Validating the performance of haptic motor skill training. In Proceedings of the Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Haptics 2008 (pp. 129–135), Reno, NV.
  336. Yang, U., & Kim, G. (2002). Implementation and evaluation of “just follow me”: An immersive, VR-based, motion-training system. Presence: Teleoperators and Virtual Environments, 11(3), 304–323.

Copyright information

© Psychonomic Society, Inc. 2012

Authors and Affiliations

  • Roland Sigrist (1, 2)
  • Georg Rauter (1)
  • Robert Riener (1)
  • Peter Wolf (1)

  1. Sensory-Motor Systems Lab, ETH Zurich & Spinal Cord Injury Center, University Hospital Balgrist, Zurich, Switzerland
  2. ETH Zurich, Sensory-Motor Systems Lab, Zürich, Switzerland
