Attention, Perception, & Psychophysics, Volume 78, Issue 8, pp 2558–2568

Multisensory integration of redundant trisensory stimulation



Integration of sensory information across modalities can confer behavioral advantages by decreasing perceptual ambiguity, reducing reaction time, and increasing detection accuracy relative to unisensory stimuli. We asked how combinations of auditory, visual, and somatosensory events alter response time. Participants detected stimulation on one side of space (right or left) while ignoring stimulation on the other side of space. There were seven types of suprathreshold stimuli: auditory (tones from speakers), visual (sinusoidal contrast gratings), somatosensory (fingertip vibrations), audio-visual, somato-visual, audio-somatosensory, and audio-somato-visual. Response enhancement and race model analyses confirmed that bisensory and trisensory trials speeded responses relative to unisensory trials. Exploratory analysis of individual differences in intersensory facilitation revealed that participants fit into one of two groups: those who benefitted from trisensory information and those who did not.


Keywords: Multisensory, Focused attention, Auditory, Visual, Somatosensory

Research on multisensory integration has emerged as a critical area of neuroscience in the past decade (Calvert, Spence, & Stein, 2004; Foxe & Molholm, 2009; Ghazanfar & Schroeder, 2006; Spence & Driver, 2004). Because humans operate in a multisensory environment, it is vital to assess how perception and cognition are affected by contributions from multiple sensory modalities, such as auditory, visual, and somatosensory stimulation. Recently, some applications of trisensory enhancement have been investigated in technology. For instance, multisensory processing of smartphone vibrations, sounds, and flashes facilitates taking a phone call (Pomper, Brincker, Harwood, Prikhodko, & Senkowski, 2014) and multisensory warnings can enhance risk communication (van Erp, Toet, & Janssen, 2015). This nascent line of research indicates that when milliseconds matter, trisensory cues may be critical to enhancing behavioral responses to stimuli in a process called redundancy gain.

Early research (Raab, 1962) proposed that bisensory stimulation in redundant-target experiments results in parallel, separate activations of unisensory channels. Given two overlapping response time distributions, on any trial the faster of the two detection processes reaches the response threshold first, so the expected response time is shorter than under unisensory stimulation. This “race” between parallel sensory inputs results in faster response times, on average, than either individual input allows, in a process called statistical facilitation. If response times are faster than separate activation in the race model can statistically explain, coactivation models must be considered (Miller, 1982), which inherently require multisensory integration (MSI).

Biological support for coactivation originally came from evidence of multisensory neurons in the superior colliculus (SC) of cats (Meredith, Nemitz, & Stein, 1987; Meredith & Stein, 1983). The SC receives afferent projections from unisensory sources and integrates them via multisensory neurons (Stein & Stanford, 2008). In cats and monkeys, inputs from multiple sensory modalities presented within a small temporal window (the temporal rule) (King & Palmer, 1985; Meredith et al., 1987) and close spatial proximity (the spatial rule) (Meredith & Stein, 1986) resulted in a firing rate in the SC greater than expected by summing the signals of two separately activated neurons (Stein, Meredith, & Wallace, 1993; Wallace, Wilkinson, & Stein, 1996). Subsequent work at multiple levels suggests that there is evidence for MSI in behavioral response time (Forster, Cavina-Pratesi, Aglioti, & Berlucchi, 2002; Mordkoff & Miller, 1993), single neuron activity (Stein, Stanford, Ramachandran, Perrault, & Rowland, 2009), neurophysiological responses (Besle, Fort, & Giard, 2004; Brandwein et al., 2011; Molholm et al., 2002; Russo et al., 2010), and functional neural activation in humans (Foxe et al., 2002; Hertz & Amedi, 2010; Kayser, Petkov, Remedios, & Logothetis, 2012; Laurienti, Perrault, Stanford, Wallace, & Stein, 2005).

Multisensory research in humans has generally been conducted using redundant target (Forster et al., 2002), focused attention (Colonius & Diederich, 2011), or selective attention paradigms (Gomez-Ramirez et al., 2007; Spence & Driver, 1997; Spence, Ranson, & Driver, 2000). In redundant target studies, participants detect and respond to stimulation in any modality. In focused attention studies, participants detect and respond to a target modality, whereas in selective attention experiments, participants detect and respond to stimuli occurring in a particular feature dimension. Evidence shows that MSI occurs automatically even in the absence of attention but can be manipulated based on attentional set (Spence & Driver, 2004). Although the studies described have advanced our understanding of how the brain responds to bisensory stimulation, very few studies have examined basic processes underlying human trisensory integration. Understanding the mechanisms that allow for the perception of information in three distinct modalities will help determine how the brain manages more than two inputs.

Todd (1912) conducted the first experimental assessment of trisensory processing by measuring reaction times (RTs) to combinations of light, tone, and electric shock in three participants using a focused attention paradigm. Todd observed reduced RTs to pairs of stimuli relative to individual stimuli, and even shorter RTs to the simultaneous combination of all three stimuli relative to pairs, regardless of the modality to which participants were instructed to react. With this research, Todd found evidence of the redundant signals effect (RSE): combining stimuli reduces response time. At the time, there were no established algorithms for evaluating coactivation. Since then, however, a small number of experiments on trisensory processing have been conducted, utilizing varying methodologies and analysis techniques and usually finding response time facilitation. These studies generally assessed the potential for trisensory integration of successive event sequences with a range of differing stimulus onset asynchronies (SOAs) (Bresciani, Dammeier, & Ernst, 2008; Diederich & Colonius, 2004; Wozny, Beierholm, & Shams, 2008) or how dynamic trisensory events enhance responses (Sella, Reiner, & Pratt, 2014), rather than the effects of the simplest case of simultaneous, redundant presentation. Although it is valuable to test different SOAs, we sought to simplify the procedure in the present experiment by presenting only synchronized stimuli.

Diederich and Colonius (2004) conducted a study of trisensory integration that is particularly relevant to the analysis techniques reported in the current paper. They primarily assessed the effects of sequential presentation of combinations of auditory, tactile, and visual inputs on behavior. The authors cleverly extended statistical analysis of bisensory stimuli to the trisensory domain. They tested four participants and analyzed their data with two distinct methods: multisensory response enhancement (MRE) and race model inequality (RMI). MRE is a coarse descriptive measure of mean RTs in multisensory conditions relative to mean RTs in unisensory conditions. It allows for an examination of percent enhancement in RT but does not provide support for or against coactivation. In other words, MRE is a measure of the amount of RT facilitation but does not address whether MSI has actually occurred. The race model is a finer-grained test that evaluates whether separate activation is a sufficient explanation for facilitated RTs across their full distribution. When separate activation cannot statistically explain the RTs, coactivation, and thus MSI, can be invoked. With both approaches, Diederich and Colonius found a trisensory enhancement effect on reaction time over and above the bisensory enhancement effects. Maximum enhancement occurred when the time between events in the three modalities was shortest, but a fully synchronized condition was not tested. Decreasing the intensity of the auditory or the tactile stimulus also increased multisensory enhancement in the bisensory combinations, supporting the inverse effectiveness hypothesis, which states that greater perceptual difficulty causes greater reliance on multisensory information. Thus, if the best unisensory response is weak, such as due to low intensity, multisensory stimuli are more robustly integrated (Holmes, 2007, 2009; Meredith & Stein, 1983).

MSI has previously been considered a universal and automatic process present in all people (Calvert et al., 2004; Ghazanfar & Schroeder, 2006). In experiments with small numbers of participants (Diederich & Colonius, 2004; Todd, 1912), it is difficult to detect potential individual differences. Previous analysis of individual differences in multisensory research is scant, but it is beginning to be recognized as an important component of this line of inquiry. Spence and Squire (2003) note that “the underlying causes of the large individual differences in the perception of multisensory synchrony” (p. R521) have been insufficiently investigated. In one example, individual differences were noted for the point at which auditory and visual stimuli are perceived as occurring simultaneously (Stone et al., 2001). Further, Mollon and Perkins (1996) determined that judgments of stellar transit in 1796 differed between observers due to individual differences in audio-visual perception. There is no compelling reason to believe that simultaneity judgments vary between individuals while other aspects of sensory integration, such as the degree of coactivation, do not. Indeed, Stevenson, Zemtsov, and Wallace (2012) found evidence for individual differences in temporal binding windows using the McGurk effect and the sound-induced flash illusion (Shams, Kamitani, & Shimojo, 2000). Their results indicate that wider binding windows are associated with stronger MSI. Other studies have found an association between video game playing and precision of MSI (Donohue, Woldorff, & Mitroff, 2010) and evidence that activation of the left superior temporal sulcus, an area associated with auditory categorization, dictates susceptibility to the McGurk effect (Nath & Beauchamp, 2012).

Given the precedent set by previous studies of trisensory integration and of individual differences in multisensory integration processes, the present study was motivated by 1) a need to understand the simplest, most straightforward case of trisensory integration during redundant stimulation on one side of space, and 2) a need to determine whether individuals who integrate multisensory inputs do so across all combinations of modalities or whether there are certain combinations of modalities that are more likely to be integrated by some individuals than others. We asked 30 untrained participants to respond to the presence of non-aversive, simultaneously presented suprathreshold stimuli on either the left or right side. Tactile stimulation was delivered to either the left or the right hand in the form of vibration in the same general location as the visual and auditory stimuli. Catch trials containing no stimulation were included to ascertain that participants were following instructions. The different combinations of modalities and side of stimulation varied from trial to trial to prevent participants from being able to anticipate the stimulus. We asked participants to detect any sensory stimulation on one side of space and to ignore stimulation on the contralateral side. We employed two analytic approaches previously used by Diederich and Colonius (2004) in their redundant target paradigm for group analysis. For individual difference analysis, we divided our participants into two groups: those who did and those who did not integrate trisensory inputs. We believe that this was the most conservative approach and the one most relevant to the current study because the trisensory condition was expected to produce the greatest RT facilitation relative to the unisensory conditions.



The Psychology Research Participation Pool at Syracuse University was utilized to obtain data from 30 participants tested in the Care Lab. These participants were completing research studies for credit for entry-level psychology courses. The mean age was 20.3 years (SD = 2.7 years), and all participants were right-handed. There were 11 males and 19 females.

Apparatus and Stimuli

Stimuli were presented using MATLAB on a 22.5” VIEWPixx monitor (VPixx Technologies, Inc., 1920 x 1200 resolution).


Auditory stimulation was a 1000-Hz tone (705 kbps, 16-bit, 44.1-kHz sampling rate) presented on the right or left side for 240 ms at 37.34 dB from two Bose Companion2 Series II Multimedia Speaker System speakers adjacent to the left and right sides of the screen, 24 inches from the participant. Brown noise, a filtered signal concentrating energy at low frequencies with a spectral density inversely proportional to frequency squared (decreasing by 6 dB per octave), was generated by the myNoise BVBA application and played at 21 dB from an iPad mini through two adjacent Bose Companion2 Series III speakers.


The visual stimuli were Gabor patches (300 × 200 pixels, 881 units of Michelson contrast) presented 300 pixels to the right or left of fixation for 240 ms. Each stimulus subtended a horizontal visual angle of 19.46 degrees to the left or right of fixation and a vertical visual angle of 13.55 degrees.


Somatosensory stimulation was delivered to the left or right index fingers using two CM-5 somatosensory stimulators (Cortical Metrics) for 240 ms. Intensity was set at 125 microns.


After obtaining consent, participants were instructed to detect stimulation on one side of space (either right or left) while ignoring stimulation on the other side. This side is referred to as the attended side. The assigned side alternated across participants.

There were seven trial types: auditory (A), visual (V), somatosensory (S), audio-visual (AV), somato-visual (SV), audio-somato (AS), and audio-somato-visual (ASV). Each trial type was presented six times per side per block. There also were six blank or catch trials, which were included to evaluate false alarm rates, resulting in 90 trials per block. Six self-initiated blocks were presented, with breaks in between. Participants were instructed to respond when they perceived stimulation on their attended side by pressing a foot pedal (Savant Elite/USB, Kinesis Corp.) with the same foot as their attended side in the first block. Prior to every subsequent block, they were instructed to switch responding foot but continue attending to their designated side.

Each trial began with a fixation cross in the center of the screen for 100 ms. The cross disappeared and was replaced by two circles on the right and left side of the screen after 500 ms of a blank screen. The target stimuli were presented between 500 and 1250 ms (randomly chosen from a uniform distribution) after the appearance of the circles and lasted for 240 ms. Upon response, or after 2.6 s, the next trial began.

Analysis approach

Blocks were binned into pairs so that each bin contained equal numbers of trials requiring left and right foot responses. An Attended side x Block pair x Response foot ANOVA of mean reaction times (RTs) on correct trials over the six blocks revealed an effect of block pair, F(2,56) = 13.96, p < 0.001, ηG2 = 0.052, and a marginal effect of foot, F(1,28) = 3.77, p = 0.06, ηG2 = 0.004. Pairwise t tests confirmed that the block effect was a result of significantly higher mean RTs in the first bin than in the second and third, ts(59) > 4.7, ps < 0.001. This indicates that first encountering a new response foot in the first and second blocks caused participants to respond more slowly than in later blocks. To correct for this training artifact, we eliminated the first two blocks from analysis, treating them as practice blocks. An Attended side x Block pair x Response foot ANOVA of RTs in the second and third pairs of blocks produced no significant effects or interactions of foot, block, or attended side. The general pattern of results from the analyses reported below did not change whether we included all blocks or just the last four. To avoid effects of block and foot, all subsequent analyses exclude data from the first two blocks.

Following the analyses of Diederich and Colonius (2004), we used multisensory response enhancement (MRE) and the race model to determine the conditions that facilitated responses.

MRE calculations

MRE for each bisensory or trisensory trial type was calculated by finding the fastest mean RT from among the component trial types, subtracting the mean RT of the multisensory trial type, and dividing by the fastest mean RT of the component trial types. Multiplying that value by 100 gives a percent enhancement in RT provided by a combination of stimuli. Two types of MRE were calculated: 1) trisensory and bisensory RTs relative to unisensory RTs, and 2) trisensory RTs relative to bisensory RTs. This resulted in five MRE measures: one for each of the three bisensory conditions, one for the trisensory relative to unisensory conditions (Tri/Uni), and one for the trisensory relative to bisensory conditions (Tri/Bi).
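The calculation just described can be sketched in a few lines of Python. The RT means below are made up for illustration and are not data from this study:

```python
def mre(multi_mean, component_means):
    """Percent RT enhancement of a multisensory condition over its
    fastest (smallest mean RT) component condition."""
    fastest = min(component_means)
    return 100.0 * (fastest - multi_mean) / fastest

# Hypothetical condition means in ms (illustrative only).
uni = {"A": 560.0, "V": 540.0, "S": 545.0}
bi = {"AV": 522.0, "AS": 518.0, "SV": 525.0}
tri = 515.0

tri_uni = mre(tri, uni.values())  # trisensory relative to fastest unisensory
tri_bi = mre(tri, bi.values())    # trisensory relative to fastest bisensory
```

With these made-up means, the Tri/Uni enhancement is 100 × (540 − 515)/540 ≈ 4.6 %, and the Tri/Bi enhancement is much smaller because the fastest bisensory mean is already close to the trisensory mean.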

Race Model calculations

Multisensory stimuli produce separate activations in each sensory channel. The behavioral response is elicited by the fastest signal. There is evidence that response times can be influenced by response competition (Fournier & Eriksen, 1990); however, we assume context invariance for race model analysis such that response time distributions for an input are the same in unisensory and multisensory conditions. In addition, we assume the processes required to respond to signals from different channels are not independent, as recommended by Miller (2016).

A hypothetical sum of the cumulative distribution functions (CDFs) of the component modalities is calculated at each decile of the response time distribution. This predicted value is what we would expect under separate activation. For each participant's RTs in the bisensory conditions, we applied Ulrich, Miller, and Schröter's (2007) race model inequality (RMI) algorithm in Matlab and followed the guidelines of Gondan and Heckel (2008) and Gondan and Minakata (2015) for proper race model analysis, including the kill-the-twin procedure and a permutation test over multiple time points, which controls Type I error across correlated significance tests. For trisensory measures relative to unisensory and bisensory conditions, we adapted Ulrich et al.'s code in accordance with the guidelines detailed by Diederich and Colonius (2004) for race model analysis applied to a trisensory condition.

The race model difference (RMD) was determined by comparing RT distributions for the trisensory condition to bisensory and unisensory conditions separately, and for the bisensory conditions to their component unisensory conditions. This resulted in five measures that mirrored those used in the MREs: AV, SV, AS, Tri/Uni, and Tri/Bi. Race model differences greater than zero are taken as statistical evidence against separate activation and in support of coactivation.

To evaluate the limit of response enhancement with two sensory modalities x and y under separate activation conditions, we evaluated the Miller inequality based on the CDFs of the response time distribution partitioned into deciles (Miller, 1982). The upper bound resulting from the Miller inequality is considered the benchmark for tests of separate activation with two stimuli (Townsend & Nozawa, 1995). The difference between this upper bound and the observed multisensory CDF is the race model difference (RMD). Because we assumed that the processes required for responding to stimuli are not independent, the asymptote of the bisensory race model difference is −1.
$$ F(t) = P(T \le t) $$
$$ F_{xy}(t) \le F_x(t) + F_y(t) \tag{1} $$
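The bisensory bound can be sketched with empirical CDFs over hypothetical RT samples. This is an illustration of the inequality, not the authors' Matlab implementation, and the RT values are invented:

```python
import math

def ecdf(rts, t):
    """Empirical CDF: proportion of RTs at or below time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_difference(rt_multi, rt_x, rt_y, t):
    """Observed multisensory CDF minus Miller's bound Fx(t) + Fy(t),
    capped at 1. Positive values violate the race model."""
    bound = min(1.0, ecdf(rt_x, t) + ecdf(rt_y, t))
    return ecdf(rt_multi, t) - bound

def decile_points(rts):
    """Nearest-rank deciles (10%, 20%, ..., 100%) of pooled RTs."""
    s = sorted(rts)
    return [s[math.ceil(len(s) * q / 10) - 1] for q in range(1, 11)]

# Hypothetical RTs (ms): the bisensory condition is faster than either
# component, so early deciles yield positive differences (violations).
rt_av = [250, 300, 350, 380]
rt_a = [300, 400, 500, 520]
rt_v = [350, 450, 550, 560]
rmd = [race_model_difference(rt_av, rt_a, rt_v, t)
       for t in decile_points(rt_av + rt_a + rt_v)]
```

Race model violations, if any, typically appear in the early deciles, where the multisensory CDF can exceed the summed unisensory CDFs before the bound saturates at 1.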
To evaluate response time enhancement in the trisensory condition relative to unisensory conditions, we calculated the upper bound as follows, with x, y, and z, representing each of the three unisensory conditions:
$$ \text{Tri/Uni:}\quad F_{xyz}(t) \le F_x(t) + F_y(t) + F_z(t) \tag{2} $$
This formulation, also called summed unisensory by Diederich and Colonius (2004), is the direct extension of the race model inequality, but because three random variables cannot all be pairwise negatively dependent to a maximal degree, the right-hand side does not constitute a distribution function for min(RTx, RTy, RTz) (Colonius & Diederich, 2006). The Tri/Uni bound can therefore be violated even when only bisensory coactivation is present. A method that allows an approximate assessment of trisensory RTs relative to bisensory RTs is to sum two bisensory CDFs and subtract the CDF of the unisensory condition they share, as in the following inequality:
$$ \text{Tri/Bi:}\quad F_{xyz}(t) \le F_{xy}(t) + F_{xz}(t) - F_x(t) \tag{3} $$

Interchanging x, y, and z among the three bisensory conditions AV, AS, and SV yields three inequalities of the form of Inequality 3 for the Tri/Bi measure, which is also referred to as summed bisensory minus unisensory. The Tri/Bi upper bound for separate activation is the minimum of the three values at each quantile.
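Both trisensory bounds can be sketched as follows, treating each argument as a CDF value evaluated at some time t. The numbers in the test are hypothetical; this is an illustration of Inequalities 2 and 3, not the authors' code:

```python
def tri_uni_bound(fx, fy, fz):
    """Inequality 2 (summed unisensory): upper bound on the trisensory
    CDF under separate activation, capped at 1."""
    return min(1.0, fx + fy + fz)

def tri_bi_bound(fxy, fxz, fyz, fx, fy, fz):
    """Inequality 3 in its three forms (each pair of bisensory CDFs
    minus their shared unisensory CDF); the Tri/Bi upper bound is the
    minimum of the three, capped at 1."""
    return min(1.0,
               fxy + fxz - fx,   # x is the shared modality
               fxy + fyz - fy,   # y is the shared modality
               fxz + fyz - fz)   # z is the shared modality
```

Because the Tri/Bi bound subtracts a shared unisensory CDF, it is tighter than the summed-unisensory bound, which is why the trisensory CDF can fall between the two (as in Fig. 4, right).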

In performing the race model analysis, we applied 10 divisions to the CDF across all subjects and groups instead of allowing this parameter to vary based on each subject’s minimum number of trials across conditions (Ulrich et al., 2007). This approach allowed us to retain all but one subject in race model analysis. Whether we used 10 divisions or each subject’s minimum trial count did not impact the general pattern of results.


Reaction Time

Reaction time was measured from the offset of the stimulus to the first foot pedal press. To account for fast guesses, we applied the kill-the-twin correction to the fastest RTs of correct responses (Eriksen, 1988; Gondan & Minakata, 2015). Reported results include the kill-the-twin correction. Overall, the trisensory condition resulted in the fastest mean reaction times (M = 515 ms, SE = 23), followed closely by the three bisensory conditions (Fig. 1). Trisensory RTs were significantly faster than all other conditions, ts(29) > 2.5, ps < 0.010. Bisensory RTs were in turn significantly faster than all unisensory conditions, ts(29) > 4.19, ps < 0.002, and no different from each other. Among unisensory conditions, auditory RTs were significantly slower than visual and somatosensory RTs, ts(29) > 3.53, ps < 0.001.
Fig. 1

Boxplot distributions of reaction time in each of the seven conditions. Thick bars indicate the median of mean RTs, boxes span the 25th to 75th percentiles, whiskers represent 1.5 times the interquartile range, and circles are individual means outside that range

Accuracy was measured as the proportion of correct responses (hits) out of all trials containing stimuli on the attended side. Accuracy was very high across all conditions (>93 %) but was particularly high in the bisensory and trisensory conditions (Fig. 2). Trisensory accuracy was significantly higher than in all unisensory conditions, ts(29) > 2.7, ps < 0.020, but no different from bisensory conditions. Accuracy did not differ within the unisensory (ts(29) < 1.58, ps > 0.120) or bisensory (ts(29) < 1.2, ps > 0.230) conditions. False alarms occurred at a rate of 2.6 % overall, and pairwise comparisons showed no differences between conditions (ps > 0.120). False alarms on catch trials were extremely rare, occurring on only 13 of 360 trials across all participants (3.6 %), but some very early false alarms may have been missed because RT was measured from stimulus offset. In the unisensory conditions, false alarms occurred on 1.4 % (V), 3.3 % (A), and 2.5 % (S) of trials. In the bisensory conditions, false-alarm rates were 1.7 % (AV), 2.3 % (AS), and 4.0 % (SV). The trisensory false alarm rate was 2.0 %.
Fig. 2

Boxplot distributions of accuracy in each of the seven conditions. Thick bars indicate the median of mean accuracy for each participant, boxes span the 25th to 75th percentiles, whiskers represent 1.5 times the interquartile range, and circles are individual means outside that range

Multisensory Response Enhancement (MRE)

Each condition's mean MRE is shown in Fig. 3. The overall bisensory mean RTs in Fig. 1 suggest that the AS condition may have determined MRE, but each participant's own fastest bisensory response was used, which could have been the AV condition for one participant and the AS condition for another. Mean RTs were used in this analysis to replicate directly the procedures of Diederich and Colonius (2004). The three bisensory response enhancements were all significantly greater than 0, ts(29) > 2.81, ps < 0.008. None of the bisensory MREs differed significantly from each other, but the trisensory-over-unisensory enhancement was significantly greater than the bisensory enhancements, ts(29) > 3.01, ps < 0.005. All three bisensory enhancements were significantly greater than the Tri/Bi MRE, ts(29) > 2.4, ps < 0.030. Mean enhancement of trisensory over unisensory stimuli was significantly above 0, averaging 10.4 %, t(29) = 7.39, p < 0.001, whereas mean enhancement of trisensory over bisensory stimuli was 0.01 % (n.s., t(29) = 0.01).
Fig. 3

Multisensory response enhancement of trisensory over unisensory stimuli (left bar), bisensory over unisensory stimuli (middle three bars), and trisensory over bisensory stimuli (right bar). Error bars are SEM

Race Model

The race model differences of the bisensory conditions are shown in the left panel of Fig. 4. The right panel of Fig. 4 shows the CDFs of the trisensory response times. Over the first five deciles, at the group level, the response times to trisensory stimuli fall between the bounds dictated by Inequalities 2 and 3. Thus, we have evidence that trisensory stimuli provided a reaction time benefit relative to unisensory (squares), but not bisensory (circles), combinations of stimuli in the race model.
Fig. 4

Left: Race model differences for each of the three bisensory conditions relative to component unisensory conditions across the vincentized response time distribution, calculated assuming non-independent processing channels. Values greater than zero indicate violation of the race model. Right: CDFs for the trisensory condition relative to summed unimodal (Inequality 2: Tri/Uni, squares) and summed bimodal-unimodal (Inequality 3: Tri/Bi, black circles)

A permutation test (Gondan, 2010) was used to evaluate whether the race model difference was significantly greater than zero in the first five deciles, that is, whether there was group-level race model violation over the fastest half of response times. In this permutation test, the null distribution of the test statistic is produced via simulation, and Type I error is controlled by using the second-greatest t value as the statistic, so that the race model is rejected only when more than one of the multiple t tests is significant. Under these conditions, all three measures comparing bisensory conditions to their unisensory components were significant, as was the trisensory condition relative to its unisensory components (Table 1). The trisensory condition relative to its bisensory components was not significantly different from zero.
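The logic of a sign-flipping permutation test with a second-largest-t statistic can be sketched as follows. This is a simplified illustration with made-up race model differences; it is not Gondan's (2010) published procedure, which includes additional refinements:

```python
import random

def t_stat(values):
    """One-sample t statistic against zero."""
    n = len(values)
    m = sum(values) / n
    var = sum((v - m) ** 2 for v in values) / (n - 1)
    return m / (var / n) ** 0.5

def second_largest(xs):
    return sorted(xs)[-2]

def permutation_test(rmd, n_perm=500, seed=0):
    """rmd: one list of race model differences (deciles 1-5) per subject.
    Statistic: second-largest t across the five deciles. The null
    distribution is built by randomly flipping the sign of each
    subject's whole decile profile."""
    rng = random.Random(seed)
    observed = second_largest([t_stat(d) for d in zip(*rmd)])
    exceed = 0
    for _ in range(n_perm):
        flipped = []
        for subj in rmd:
            sign = rng.choice((-1, 1))      # one flip per subject
            flipped.append([sign * v for v in subj])
        if second_largest([t_stat(d) for d in zip(*flipped)]) >= observed:
            exceed += 1
    return observed, exceed / n_perm        # statistic and p value
```

Using the second-largest rather than the largest t means a single aberrant decile cannot by itself drive a rejection, matching the requirement that more than one t test be significant.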
Table 1

Results of permutation test of race model violation across deciles 1 to 5


Individual Differences

The findings from multisensory response enhancement and race model inequalities were generally concordant, supporting their combined use in multisensory integration analysis. However, both algorithms indicated very small group-level RT benefits in comparisons between trisensory and bisensory stimuli (Tri/Bi), which prompted us to explore the individual data. We examined whether there was a systematic pattern of RT enhancement within individuals and across conditions. The Tri/Bi RMDs were used as the basis for our analysis because some individuals appeared to benefit from the trisensory inputs while others clearly did not. This can be seen by comparing the many light-colored, race model-violating deciles in the top half of the Tri/Bi panel of Fig. 5 with the bottom half of the same panel. We selected this as our grouping variable to better understand how trisensory enhancement relates to bisensory integration.
Fig. 5

Heatmap showing race model differences across deciles (columns) for each subject (rows). Positive race model differences (violations) are light-colored and negative differences (nonviolations) are dark colored. Subjects are ordered from top to bottom by maximum race model difference in the Tri/Bi condition, and this order was preserved for the other four panels

Participants' data were rank ordered by their maximum race model difference in the Tri/Bi measure across deciles. Figure 5 depicts each individual's race model differences across all ten deciles, with violating deciles shaded in light brown and negative (nonviolating) deciles shaded in dark brown. Although the problem of combining three stochastic variables prevents us from formally rejecting the race model with the trisensory data (Joe, 1997), comparing the degree of enhancement across deciles is highly informative. It is clear from Fig. 5 that some individuals showed more evidence for response enhancement than others. Even in the measure of trisensory relative to bisensory RT distributions (Tri/Bi, right panel), several participants exhibited trisensory enhancement in multiple deciles across much of the RT distribution. Such individual enhancements were masked by the group CDFs shown in Fig. 4b.

To ascertain whether the same participants that showed trisensory enhancement also showed bisensory enhancement, we calculated each individual’s mean RMD across all deciles in the Tri/Bi measure and conducted a two-means cluster analysis using squared Euclidean distances of that value. Cluster 1 contained 12 participants, and Cluster 2 had 17 participants.
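A two-means split on a single scalar per participant can be sketched as below. The mean Tri/Bi RMD values are invented, and `two_means_1d` is a hypothetical helper, not the software used in the study:

```python
def two_means_1d(values, n_iter=100):
    """Lloyd's algorithm with k = 2 and squared Euclidean distance on
    scalar values; returns the two resulting groups."""
    c1, c2 = min(values), max(values)       # well-separated starting centers
    g1, g2 = [], []
    for _ in range(n_iter):
        g1 = [v for v in values if (v - c1) ** 2 <= (v - c2) ** 2]
        g2 = [v for v in values if (v - c1) ** 2 > (v - c2) ** 2]
        new1, new2 = sum(g1) / len(g1), sum(g2) / len(g2)
        if (new1, new2) == (c1, c2):        # centers stable: converged
            break
        c1, c2 = new1, new2
    return g1, g2

# Hypothetical mean Tri/Bi RMDs: negative values suggest no trisensory
# benefit; positive values suggest race-model-violating benefit.
rmds = [-0.20, -0.15, -0.10, 0.10, 0.12, 0.20]
no_benefit, benefit = two_means_1d(rmds)
```

On one-dimensional data such as a single mean RMD per participant, this reduces to finding the threshold that best separates low from high values.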

Mean RMD based on the Tri/Bi clustering over the violating deciles in each measure is shown in Fig. 6. A Cluster x Measure ANOVA of RMD revealed significant effects of Cluster, F(1,27) = 19.01, p < 0.001, ηG2 = 0.116, and Measure, F(4,108) = 6.04, p < 0.001, ηG2 = 0.154, as well as a Cluster x Measure interaction, F(4,108) = 3.8, p = 0.006, ηG2 = 0.102. Two-sample t tests revealed that Cluster 1 produced higher RMDs than Cluster 2 in the AV, t(27) = 2.6, p = 0.01, Tri/Uni, t(27) = 6.2, p < 0.001, and Tri/Bi measures, t(27) = 2.1, p = 0.05. There was no association between age, sex, or computer use and MRE, RMI, or cluster membership.
Fig. 6

Mean race model differences derived from Miller’s Inequality for each cluster across bisensory and trisensory measures in the violating deciles. Error bars are SEM. Horizontal reference line indicates zero, the threshold for race model violation

To determine whether cluster differences were due to a ceiling effect on RT, mean RTs in the unisensory, bisensory, and trisensory conditions were correlated with cluster assignment. No correlation approached significance.


The present experiment was designed to assess integration of suprathreshold unisensory, bisensory, and trisensory events presented simultaneously on the same side of space. We used two established methods for evaluating the extent of response time enhancement due to multisensory integration: multisensory response enhancement and the race model inequality (Diederich & Colonius, 2004; Miller, 1982; Raab, 1962). Both analyses showed RT facilitation for trisensory relative to unisensory stimuli, but no facilitation for trisensory relative to bisensory stimulation at the group level. There was evidence of statistical facilitation in all bisensory conditions in the race model analysis, allowing rejection of the race model in favor of coactivation in those conditions. The statistical problem of combining three stochastic variables prevents us from making the same claim about trisensory stimuli (Joe, 1997). However, we can confirm that trisensory stimuli enhance response times, although perhaps not over and above the enhancement provided by bisensory stimuli.

One drawback of using a single suprathreshold intensity for each modality is that we cannot be sure that perceived intensities were equated across modalities, nor could we assess whether trisensory advantages would emerge if the component stimuli were difficult to detect. That is, was the somatosensory vibration perceived as being at an intensity comparable to the auditory tone or the visual Gabor patch? Because accuracy and RT were fairly similar across trial types, the intensities appear to have been reasonably well matched. Nevertheless, future research should employ multiple intensities to test whether the principle of inverse effectiveness holds across individuals belonging to different clusters.

After further exploration of the data, we determined that not all participants integrated multisensory stimuli in the same way. A two-means cluster analysis revealed that one group of participants (Cluster 1; N = 12) clearly benefited from the trisensory combination, while the other group (Cluster 2; N = 17) did not. Cluster 1 participants produced larger RMDs than Cluster 2 participants, most notably in the AV and trisensory conditions. These findings suggest that group-level data demonstrating multisensory effects might not reveal the whole picture of MSI (Stevenson et al., 2012). It therefore appears that only a minority of individuals may automatically benefit from redundant stimulation, at least when the stimuli involved are static, suprathreshold, and easy to detect. These findings have implications for our understanding of MSI and suggest that future research should attempt to prospectively replicate and extend them beyond reaction time, examining whether graded integration is also observed in human and animal neurophysiology.

Although our cluster analysis was exploratory, we can speculate about the factors that allow some people, but not others, to benefit from trisensory stimulation. One possibility comes from Hecht and Reiner (2009), who had participants detect uni-, bi-, and trisensory events that occurred simultaneously in the same location in space. There were no catch trials, so stimulation occurred on every trial, and trial type was blocked, such that all trisensory trials were contained within one block, along with ~80 % unisensory trials. Participants were instructed to press one, two, or three modality-specific buttons in response to the stimulation on each trial. On trisensory trials, the most common error (5.5 % of all trials) was to respond with only two of the three response keys. Among these errors, AV was numerically the most common choice, though not significantly so. In the present study, the two trial types that produced the greatest difference between Clusters 1 and 2 were AV and AVS. Given the (slight) tendency to respond to trisensory trials with AV in Hecht and Reiner’s study, it is possible our integrating participants were responding to the AV components of both trial types as if they were a unified sensation. Could the participants in Cluster 1 simply have more AV-specific neurons, allowing for enhancement in both the AV and AVS conditions? The participants who showed the most trisensory response enhancement also showed the least AS enhancement, perhaps because they needed the visual stimulus to activate their AV-specific neurons. This idea corresponds with findings that “trisensory” neurons in the SC of cats reflect certain neurons responding to different pairs of bisensory stimulation rather than being simultaneously trisensory (Stein & Meredith, 1993).

Individual differences might be most pronounced in children, whose sensory systems and the connections between them are still developing, making development an important focus of study for understanding MSI. In kittens, sensory-responsive neurons start out unimodal, but over time, and with experience and environmental input, multisensory neurons develop until they account for over 60 % of sensory neurons (Wallace & Stein, 1997). Given a similar developmental trajectory in humans that relies on behavioral and environmental experience (Bahrick & Lickliter, 2012; Foxe et al., 2015), it would make sense that different individuals’ multisensory connections are at different stages of connectivity, yielding a range of multisensory enhancement. Thus, future studies of multisensory integration in children should carefully examine individual differences.

The present research contributes to our understanding of multisensory integration in individuals and at the group level. We hope that future studies will further assess the possible bipartite clustering of individuals into those who benefit from trisensory stimulation, and those who do not. Given the preliminary results presented, future multisensory research must consider that multisensory integration may not be as universal as was once thought.



We are indebted to an anonymous reviewer for carefully evaluating and improving our manuscript through multiple revisions. This project was made possible by grant support from the NIH (1R01MH101536-01 to NR). The authors declare no conflict of interest.


  1. Bahrick, L. E., & Lickliter, R. (2012). The role of intersensory redundancy in early perceptual, cognitive, and social development. In A. J. Bremner, D. J. Lewkowicz, & C. Spence (Eds.), Multisensory development (pp. 183–205). Oxford, England: Oxford University Press.
  2. Besle, J., Fort, A., & Giard, M.-H. (2004). Interest and validity of the additive model in electrophysiological studies of multisensory interactions. Cognitive Processing, 5(3), 189–192.
  3. Brandwein, A. B., Foxe, J. J., Russo, N. N., Altschuler, T. S., Gomes, H., & Molholm, S. (2011). The development of audiovisual multisensory integration across childhood and early adolescence: A high-density electrical mapping study. Cerebral Cortex, 21(5), 1042–1055. doi: 10.1093/cercor/bhq170
  4. Bresciani, J.-P., Dammeier, F., & Ernst, M. O. (2008). Tri-modal integration of visual, tactile and auditory signals for the perception of sequences of events. Brain Research Bulletin, 75(6), 753–760.
  5. Calvert, G., Spence, C., & Stein, B. E. (2004). The handbook of multisensory processes. Cambridge, MA: MIT Press.
  6. Colonius, H., & Diederich, A. (2006). The race model inequality: Interpreting a geometric measure of the amount of violation. Psychological Review, 113(1), 148.
  7. Colonius, H., & Diederich, A. (2011). Computing an optimal time window of audiovisual integration in focused attention tasks: Illustrated by studies on effect of age and prior knowledge. Experimental Brain Research, 212(3), 327–337. doi: 10.1007/s00221-011-2732-x
  8. Diederich, A., & Colonius, H. (2004). Bimodal and trimodal multisensory enhancement: Effects of stimulus onset and intensity on reaction time. Perception & Psychophysics, 66(8), 1388–1404.
  9. Donohue, S. E., Woldorff, M. G., & Mitroff, S. R. (2010). Video game players show more precise multisensory temporal processing abilities. Attention, Perception, & Psychophysics, 72(4), 1120–1129.
  10. Eriksen, C. W. (1988). A source of error in attempts to distinguish coactivation from separate activation in the perception of redundant targets. Attention, Perception, & Psychophysics, 44(2), 191–193.
  11. Forster, B., Cavina-Pratesi, C., Aglioti, S. M., & Berlucchi, G. (2002). Redundant target effect and intersensory facilitation from visual-tactile interactions in simple reaction time. Experimental Brain Research, 143(4), 480–487.
  12. Fournier, L. R., & Eriksen, C. W. (1990). Coactivation in the perception of redundant targets. Journal of Experimental Psychology: Human Perception and Performance, 16(3), 538.
  13. Foxe, J. J., & Molholm, S. (2009). Ten years at the Multisensory Forum: Musings on the evolution of a field. Brain Topography, 21(3–4), 149–154. doi: 10.1007/s10548-009-0102-9
  14. Foxe, J. J., Molholm, S., Del Bene, V. A., Frey, H.-P., Russo, N. N., … Ross, L. A. (2015). Severe multisensory speech integration deficits in high-functioning school-aged children with autism spectrum disorder (ASD) and their resolution during early adolescence. Cerebral Cortex, 25(2), 298–312.
  15. Foxe, J. J., Wylie, G. R., Martinez, A., Schroeder, C. E., Javitt, D. C., … Murray, M. M. (2002). Auditory-somatosensory multisensory processing in auditory association cortex: An fMRI study. Journal of Neurophysiology, 88(1), 540–543.
  16. Ghazanfar, A. A., & Schroeder, C. E. (2006). Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10(6), 278–285. doi: 10.1016/j.tics.2006.04.008
  17. Gomez-Ramirez, M., Higgins, B. A., Rycroft, J. A., Owen, G. N., Mahoney, J., Shpaner, M., & Foxe, J. J. (2007). The deployment of intersensory selective attention: A high-density electrical mapping study of the effects of theanine. Clinical Neuropharmacology, 30(1), 25–38. doi: 10.1097/01.WNF.0000240940.13876.17
  18. Gondan, M. (2010). A permutation test for the race model inequality. Behavior Research Methods, 42(1), 23–28.
  19. Gondan, M., & Heckel, A. (2008). Testing the race inequality: A simple correction procedure for fast guesses. Journal of Mathematical Psychology, 52(5), 322–325.
  20. Gondan, M., & Minakata, K. (2015). A tutorial on testing the race model inequality. Attention, Perception, & Psychophysics, 1–13.
  21. Hecht, D., & Reiner, M. (2009). Sensory dominance in combinations of audio, visual and haptic stimuli. Experimental Brain Research, 193(2), 307–314.
  22. Hertz, U., & Amedi, A. (2010). Disentangling unisensory and multisensory components in audiovisual integration using a novel multifrequency fMRI spectral analysis. NeuroImage, 52(2), 617–632. doi: 10.1016/j.neuroimage.2010.04.186
  23. Holmes, N. P. (2007). The law of inverse effectiveness in neurons and behaviour: Multisensory integration versus normal variability. Neuropsychologia, 45(14), 3340–3345. doi: 10.1016/j.neuropsychologia.2007.05.025
  24. Holmes, N. P. (2009). The principle of inverse effectiveness in multisensory integration: Some statistical considerations. Brain Topography, 21(3–4), 168–176. doi: 10.1007/s10548-009-0097-2
  25. Joe, H. (1997). Multivariate models and multivariate dependence concepts. CRC Press.
  26. Kayser, C., Petkov, C. I., Remedios, R., & Logothetis, N. K. (2012). Multisensory influences on auditory processing: Perspectives from fMRI and electrophysiology.
  27. King, A., & Palmer, A. (1985). Integration of visual and auditory information in bimodal neurones in the guinea-pig superior colliculus. Experimental Brain Research, 60(3), 492–500.
  28. Laurienti, P. J., Perrault, T. J., Stanford, T. R., Wallace, M. T., & Stein, B. E. (2005). On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies. Experimental Brain Research, 166(3–4), 289–297. doi: 10.1007/s00221-005-2370-2
  29. Meredith, M. A., Nemitz, J. W., & Stein, B. E. (1987). Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. The Journal of Neuroscience, 7(10), 3215–3229.
  30. Meredith, M. A., & Stein, B. E. (1983). Interactions among converging sensory inputs in the superior colliculus. Science, 221(4608), 389–391.
  31. Meredith, M. A., & Stein, B. E. (1986). Spatial factors determine the activity of multisensory neurons in cat superior colliculus. Brain Research, 365(2), 350–354.
  32. Miller, J. (1982). Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology, 14(2), 247–279.
  33. Miller, J. (2016). Statistical facilitation and the redundant signals effect: What are race and coactivation models? Attention, Perception, & Psychophysics, 78, 516–519.
  34. Molholm, S., Ritter, W., Murray, M. M., Javitt, D. C., Schroeder, C. E., & Foxe, J. J. (2002). Multisensory auditory-visual interactions during early sensory processing in humans: A high-density electrical mapping study. Cognitive Brain Research, 14(1), 115–128.
  35. Mollon, J. D., & Perkins, A. J. (1996). Errors of judgement at Greenwich in 1796. Nature.
  36. Mordkoff, J. T., & Miller, J. (1993). Redundancy gains and coactivation with two different targets: The problem of target preferences and the effects of display frequency. Perception & Psychophysics, 53(5), 527–535.
  37. Nath, A. R., & Beauchamp, M. S. (2012). A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion. NeuroImage, 59(1), 781–787. doi: 10.1016/j.neuroimage.2011.07.024
  38. Pomper, U., Brincker, J., Harwood, J., Prikhodko, I., & Senkowski, D. (2014). Taking a call is facilitated by the multisensory processing of smartphone vibrations, sounds, and flashes. PLoS One, 9(8), e103238. doi: 10.1371/journal.pone.0103238
  39. Raab, D. H. (1962). Statistical facilitation of simple reaction times. Transactions of the New York Academy of Sciences, 24(5 Series II), 574–590.
  40. Russo, N., Foxe, J. J., Brandwein, A. B., Altschuler, T., Gomes, H., & Molholm, S. (2010). Multisensory processing in children with autism: High-density electrical mapping of auditory-somatosensory integration. Autism Research, 3(5), 253–267. doi: 10.1002/aur.152
  41. Sella, I., Reiner, M., & Pratt, H. (2014). Natural stimuli from three coherent modalities enhance behavioral responses and electrophysiological cortical activity in humans. International Journal of Psychophysiology, 93(1), 45–55.
  42. Shams, L., Kamitani, Y., & Shimojo, S. (2000). Illusions: What you see is what you hear. Nature, 408(6814), 788.
  43. Spence, C., & Driver, J. (1997). On measuring selective attention to an expected sensory modality. Perception & Psychophysics, 59(3), 389–403.
  44. Spence, C., & Driver, J. (2004). Crossmodal space and crossmodal attention. Oxford University Press.
  45. Spence, C., Ranson, J., & Driver, J. (2000). Cross-modal selective attention: On the difficulty of ignoring sounds at the locus of visual attention. Perception & Psychophysics, 62(2), 410–424.
  46. Spence, C., & Squire, S. (2003). Multisensory integration: Maintaining the perception of synchrony. Current Biology, 13(13), R519–R521.
  47. Stein, B. E., Meredith, M. A., & Wallace, M. T. (1993). The visually responsive neuron and beyond: Multisensory integration in cat and monkey. Progress in Brain Research, 95, 79–90.
  48. Stein, B. E., & Stanford, T. R. (2008). Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews Neuroscience, 9(4), 255–266. doi: 10.1038/nrn2331
  49. Stein, B. E., Stanford, T. R., Ramachandran, R., Perrault, T. J., Jr., & Rowland, B. A. (2009). Challenges in quantifying multisensory integration: Alternative criteria, models, and inverse effectiveness. Experimental Brain Research, 198(2–3), 113–126. doi: 10.1007/s00221-009-1880-8
  50. Stevenson, R. A., Zemtsov, R. K., & Wallace, M. T. (2012). Individual differences in the multisensory temporal binding window predict susceptibility to audiovisual illusions. Journal of Experimental Psychology: Human Perception and Performance. doi: 10.1037/a0027339
  51. Stone, J., Hunkin, N., Porrill, J., Wood, R., Keeler, V., … Porter, N. (2001). When is now? Perception of simultaneity. Proceedings of the Royal Society of London B: Biological Sciences, 268(1462), 31–38.
  52. Todd, J. W. (1912). Reaction to multiple stimuli. Science Press.
  53. Townsend, J. T., & Nozawa, G. (1995). Spatio-temporal properties of elementary perception: An investigation of parallel, serial, and coactive theories. Journal of Mathematical Psychology, 39(4), 321–359.
  54. Ulrich, R., Miller, J., & Schroter, H. (2007). Testing the race model inequality: An algorithm and computer programs. Behavior Research Methods, 39(2), 291–302.
  55. van Erp, J. B., Toet, A., & Janssen, J. B. (2015). Uni-, bi- and tri-modal warning signals: Effects of temporal parameters and sensory modality on perceived urgency. Safety Science, 72, 1–8.
  56. Wallace, M. T., & Stein, B. E. (1997). Development of multisensory neurons and multisensory integration in cat superior colliculus. Journal of Neuroscience, 17(7), 2429–2444.
  57. Wallace, M. T., Wilkinson, L. K., & Stein, B. E. (1996). Representation and integration of multiple sensory inputs in primate superior colliculus. Journal of Neurophysiology, 76(2), 1246–1266.
  58. Wozny, D. R., Beierholm, U. R., & Shams, L. (2008). Human trimodal perception follows optimal statistical inference. Journal of Vision, 8(3), 24.

Copyright information

© The Psychonomic Society, Inc. 2016

Authors and Affiliations

  1. Department of Psychology, Syracuse University, Syracuse, USA
