Multisensory integration of redundant trisensory stimulation
Integration of sensory information across modalities can confer behavioral advantages by decreasing perceptual ambiguity, speeding reaction time, and increasing detection accuracy relative to unisensory stimuli. We asked how combinations of auditory, visual, and somatosensory events alter response time. Participants detected stimulation on one side of space (right or left) while ignoring stimulation on the other side of space. There were seven types of suprathreshold stimuli: auditory (tones from speakers), visual (sinusoidal contrast gratings), somatosensory (fingertip vibrations), audio-visual, somato-visual, audio-somatosensory, and audio-somato-visual. Response enhancement and race model analysis confirmed that bisensory and trisensory trials enhanced response time relative to unisensory trials. Exploratory analysis of individual differences in intersensory facilitation revealed that participants fit into one of two groups: those who benefitted from trisensory information and those who did not.
Keywords: Multisensory · Focused attention · Auditory · Visual · Somatosensory
Research on multisensory integration has emerged as a critical area of neuroscience in the past decade (Calvert, Spence, & Stein, 2004; Foxe & Molholm, 2009; Ghazanfar & Schroeder, 2006; Spence & Driver, 2004). Because humans operate in a multisensory environment, it is vital to assess how perception and cognition are affected by contributions from multiple sensory modalities, such as auditory, visual, and somatosensory stimulation. Recently, some applications of trisensory enhancement have been investigated in technology. For instance, multisensory processing of smartphone vibrations, sounds, and flashes facilitates taking a phone call (Pomper, Brincker, Harwood, Prikhodko, & Senkowski, 2014) and multisensory warnings can enhance risk communication (van Erp, Toet, & Janssen, 2015). This nascent line of research indicates that when milliseconds matter, trisensory cues may be critical to enhancing behavioral responses to stimuli in a process called redundancy gain.
Early research (Raab, 1962) proposed that bisensory stimulation in redundant-target experiments results in parallel, separate activations of unisensory channels. Given two overlapping response time distributions, the faster of the two detection processes on any given trial determines the response, so responses to redundant stimulation will, on average, be faster than responses to unisensory stimulation. This "race" between parallel sensory inputs produces faster mean response times than either individual input allows, in a process called statistical facilitation. If response times are faster than separate activation in the race model can statistically explain, coactivation models must be considered (Miller, 1982), inherently requiring multisensory integration (MSI).
Biological support for coactivation originally came from evidence of multisensory neurons in the superior colliculus (SC) of cats (Meredith, Nemitz, & Stein, 1987; Meredith & Stein, 1983). The SC receives afferent projections from unisensory sources and integrates them via multisensory neurons (Stein & Stanford, 2008). In cats and monkeys, inputs from multiple sensory modalities presented within a small temporal window (the temporal rule) (King & Palmer, 1985; Meredith et al., 1987) and close spatial proximity (the spatial rule) (Meredith & Stein, 1986) resulted in a firing rate in the SC greater than expected by summing the signals of two separately activated neurons (Stein, Meredith, & Wallace, 1993; Wallace, Wilkinson, & Stein, 1996). Subsequent work at multiple levels suggests that there is evidence for MSI in behavioral response time (Forster, Cavina-Pratesi, Aglioti, & Berlucchi, 2002; Mordkoff & Miller, 1993), single neuron activity (Stein, Stanford, Ramachandran, Perrault, & Rowland, 2009), neurophysiological responses (Besle, Fort, & Giard, 2004; Brandwein et al., 2011; Molholm et al., 2002; Russo et al., 2010), and functional neural activation in humans (Foxe et al., 2002; Hertz & Amedi, 2010; Kayser, Petkov, Remedios, & Logothetis, 2012; Laurienti, Perrault, Stanford, Wallace, & Stein, 2005).
Multisensory research in humans has generally been conducted using redundant target (Forster et al., 2002), focused attention (Colonius & Diederich, 2011), or selective attention paradigms (Gomez-Ramirez et al., 2007; Spence & Driver, 1997; Spence, Ranson, & Driver, 2000). In redundant target studies, participants detect and respond to stimulation in any modality. In focused attention studies, participants detect and respond to a target modality, whereas in selective attention experiments, participants detect and respond to stimuli occurring in a particular feature dimension. Evidence shows that MSI occurs automatically even in the absence of attention but can be manipulated based on attentional set (Spence & Driver, 2004). Although the studies described have advanced our understanding of how the brain responds to bisensory stimulation, very few studies have examined basic processes underlying human trisensory integration. Understanding the mechanisms that allow for the perception of information in three distinct modalities will help determine how the brain manages more than two inputs.
Todd (1912) conducted the first experimental assessment of trisensory processing by measuring reaction times (RTs) to combinations of light, tone, and electric shock in three participants using a focused attention paradigm. Todd observed reduced RTs to pairs of stimuli relative to individual stimuli, and even shorter RTs to the simultaneous combination of all three stimuli relative to pairs, regardless of the modality to which participants were instructed to react. With this research, Todd found evidence of the redundant signals effect (RSE), which demonstrates that combining stimuli reduces response time. At the time, there were no established algorithms for evaluating coactivation. Since then, however, a small number of experiments on trisensory processing have been conducted, using varying methodologies and analysis techniques and usually finding response time facilitation. These studies generally assessed the potential for trisensory integration of successive event sequences with a range of stimulus onset asynchronies (SOAs) (Bresciani, Dammeier, & Ernst, 2008; Diederich & Colonius, 2004; Wozny, Beierholm, & Shams, 2008) or how dynamic trisensory events enhance responses (Sella, Reiner, & Pratt, 2014), rather than the effects of the simplest case of simultaneous, redundant presentation. Although it is valuable to test different SOAs, we sought to simplify the procedure in the present experiment by presenting only synchronized stimuli.
Diederich and Colonius (2004) conducted a study of trisensory integration that is particularly relevant to the analysis techniques reported in the current paper. They assessed the effects of mostly sequential presentation of combinations of auditory, tactile, and visual inputs on behavior, cleverly extending the statistical analysis of bisensory stimuli to the trisensory domain. They tested four participants and analyzed their data with two distinct methods: multisensory response enhancement (MRE) and the race model inequality (RMI). MRE is a coarse, descriptive measure of mean RTs in multisensory conditions relative to mean RTs in unisensory conditions. It allows an examination of percent enhancement in RT but provides no evidence for or against coactivation; in other words, MRE measures the amount of RT facilitation but does not address whether MSI has actually occurred. The race model is a finer-grained test that evaluates whether separate activation is a sufficient explanation for facilitated RTs across their full distribution. When separate activation cannot statistically explain the RTs, coactivation, and thus MSI, can be invoked. With both approaches, Diederich and Colonius found a trisensory enhancement effect on reaction time over and above the bisensory enhancement effects. Maximum enhancement occurred when the time between events in the three modalities was shortest, but a fully synchronized condition was not tested. Decreasing the intensity of the auditory or the tactile stimulus also increased multisensory enhancement in the bisensory combinations, supporting the inverse effectiveness hypothesis, which states that greater perceptual difficulty causes greater reliance on multisensory information. Thus, if the best unisensory response is weak, for example due to low intensity, multisensory stimuli are more robustly integrated (Holmes, 2007, 2009; Meredith & Stein, 1983).
MSI has previously been considered a universal and automatic process in all people (Calvert et al., 2004; Ghazanfar & Schroeder, 2006). In experiments with small numbers of participants (Diederich & Colonius, 2004; Todd, 1912), it is difficult to detect potential individual differences. Previous analysis of individual differences in multisensory research is scant, but it is beginning to be recognized as an important component of this line of inquiry. Spence and Squire (2003) note that "the underlying causes of the large individual differences in the perception of multisensory synchrony" (p. R521) have been insufficiently investigated. In one example, individual differences were noted in the point at which auditory and visual stimuli are perceived as occurring simultaneously (Stone et al., 2001). Further, Mollon and Perkins (1996) determined that judgments of stellar transit in 1796 differed between observers because of individual differences in audio-visual perception. There is no compelling reason to believe that simultaneity judgments would vary between individuals while other aspects of sensory integration, such as the degree of coactivation, would not. Consistent with this, Stevenson, Zemtsov, and Wallace (2012) found evidence for individual differences in temporal binding windows using the McGurk effect and the sound-induced flash illusion (Shams, Kamitani, & Shimojo, 2000). Their results indicate that wider binding windows are associated with stronger MSI. Other studies have found an association between video game playing and the precision of MSI (Donohue, Woldorff, & Mitroff, 2010) and evidence that activation of the left superior temporal sulcus, an area associated with auditory categorization, dictates susceptibility to the McGurk effect (Nath & Beauchamp, 2012).
Given the precedent set by previous studies of trisensory integration and of individual differences in multisensory integration processes, the present study was motivated by 1) a need to understand the simplest, most straightforward case of trisensory integration during redundant stimulation on one side of space, and 2) a need to determine whether individuals who integrate multisensory inputs do so across all combinations of modalities or whether there are certain combinations of modalities that are more likely to be integrated by some individuals than others. We asked 30 untrained participants to respond to the presence of non-aversive, simultaneously presented suprathreshold stimuli on either the left or right side. Tactile stimulation was delivered to either the left or the right hand in the form of vibration in the same general location as the visual and auditory stimuli. Catch trials containing no stimulation were included to ascertain that participants were following instructions. The different combinations of modalities and side of stimulation varied from trial to trial to prevent participants from being able to anticipate the stimulus. We asked participants to detect any sensory stimulation on one side of space and to ignore stimulation on the contralateral side. We employed two analytic approaches previously used by Diederich and Colonius (2004) in their redundant target paradigm for group analysis. For individual difference analysis, we divided our participants into two groups: those who did and those who did not integrate trisensory inputs. We believe that this was the most conservative approach and the one most relevant to the current study because the trisensory condition was expected to produce the greatest RT facilitation relative to the unisensory conditions.
Thirty participants were recruited through the Psychology Research Participation Pool at Syracuse University and tested in the Care Lab. Participants completed the study for credit in entry-level psychology courses. The mean age was 20.3 years (SD = 2.7 years), all participants were right-handed, and there were 11 males and 19 females.
Apparatus and Stimuli
Stimuli were presented using MATLAB on a 22.5” VIEWPixx monitor (VPixx Technologies, Inc., 1920 x 1200 resolution).
Auditory stimulation was a 1000-Hz tone (705 kbps, 16-bit, 44.1-kHz sampling rate) presented on the right or left side for 240 ms at 37.34 dB from two Bose Companion2 Series II Multimedia Speaker System speakers positioned adjacent to the left and right sides of the screen, 24 inches from the participant. Brown noise, a filtered signal that concentrates energy at low frequencies with a spectral density inversely proportional to frequency squared (a decrease of 6 dB per octave), was generated by the myNoise BVBA application on an iPad mini and played at 21 dB through two adjacent Bose Companion2 Series III speakers.
The visual stimuli were Gabor patches (300 x 200 pixels; Michelson contrast of .881) presented 300 pixels to the right or left of fixation for 240 ms. The stimuli subtended a horizontal visual angle of 19.46 degrees to the left or right of fixation and a vertical visual angle of 13.55 degrees.
Somatosensory stimulation was delivered to the left or right index fingers using two CM-5 somatosensory stimulators (Cortical Metrics) for 240 ms. Intensity was set at 125 microns.
After obtaining consent, participants were instructed to detect stimulation on one side of space (either right or left) while ignoring stimulation on the other side of space. This side is referred to as the attended side. Side assignment alternated with every participant.
There were seven trial types: auditory (A), visual (V), somatosensory (S), audio-visual (AV), somato-visual (SV), audio-somato (AS), and audio-somato-visual (ASV). Each trial type was presented six times per side per block. There also were six blank or catch trials, which were included to evaluate false alarm rates, resulting in 90 trials per block. Six self-initiated blocks were presented, with breaks in between. Participants were instructed to respond when they perceived stimulation on their attended side by pressing a foot pedal (Savant Elite/USB, Kinesis Corp.) with the same foot as their attended side in the first block. Prior to every subsequent block, they were instructed to switch responding foot but continue attending to their designated side.
Each trial began with a fixation cross in the center of the screen for 100 ms. The cross disappeared and was replaced by two circles on the right and left side of the screen after 500 ms of a blank screen. The target stimuli were presented between 500 and 1250 ms (randomly chosen from a uniform distribution) after the appearance of the circles and lasted for 240 ms. Upon response, or after 2.6 s, the next trial began.
Blocks were binned into pairs so that each bin contained an equal number of trials requiring left- and right-foot responses. Analysis of mean reaction times (RTs) on correct trials over the six blocks in an Attended side x Block pair x Response foot ANOVA revealed an effect of block pair, F(2,56) = 13.96, p < 0.001, ηG2 = 0.052, and a marginal effect of foot, F(1,28) = 3.77, p = 0.06, ηG2 = 0.004. Pairwise t tests confirmed that the block effect resulted from significantly higher mean RTs in the first bin than in the second and third, ts(59) > 4.7, ps < 0.001. This indicated that the first encounter with a new foot in the first and second blocks caused participants to respond more slowly relative to later blocks. To correct for this training artifact, we eliminated the first two blocks from analysis, treating them as practice blocks. An Attended side x Block pair x Response foot ANOVA of RTs in the second and third pairs of blocks produced no significant effects or interactions of foot, block, or attended side. The general pattern of results from the analysis approaches reported below did not change whether we included all blocks or just the last four; to avoid effects of block and foot, all subsequent analyses exclude data from the first two blocks.
Following the analyses of Diederich and Colonius (2004), we used multisensory response enhancement (MRE) and the race model to determine the conditions that facilitated responses.
MRE for each bisensory or trisensory trial type was calculated by finding the fastest mean RT from among the component trial types, subtracting the mean RT of the multisensory trial type, and dividing by the fastest mean RT of the component trial types. Multiplying that value by 100 gives a percent enhancement in RT provided by a combination of stimuli. Two types of MRE were calculated: 1) trisensory and bisensory RTs relative to unisensory RTs, and 2) trisensory RTs relative to bisensory RTs. This resulted in five MRE measures: one for each of the three bisensory conditions, one for the trisensory relative to unisensory conditions (Tri/Uni), and one for the trisensory relative to bisensory conditions (Tri/Bi).
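As a concrete illustration, the MRE calculation reduces to one line per condition: subtract the multisensory mean RT from the fastest component mean RT and express the result as a percentage of that fastest component mean. The sketch below (in Python with NumPy; all function and variable names are our own, not taken from the original analysis code) implements this formula:

```python
import numpy as np

def mre(multi_rts, *component_rts):
    """Multisensory response enhancement, in percent.

    multi_rts: reaction times from a bisensory or trisensory condition.
    component_rts: one array of RTs per component condition.
    Positive values mean the multisensory mean RT was faster than the
    fastest component mean RT; negative values mean it was slower.
    """
    fastest = min(np.mean(rts) for rts in component_rts)
    return 100.0 * (fastest - np.mean(multi_rts)) / fastest

# For example, the Tri/Uni measure would compare ASV against the three
# unisensory conditions: mre(asv_rts, a_rts, v_rts, s_rts)
# and Tri/Bi against the three bisensory conditions:
# mre(asv_rts, av_rts, as_rts, sv_rts)
```

Note that because MRE operates only on condition means, it says nothing about the shape of the RT distributions; that is what the race model analysis below addresses.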
Race Model calculations
Multisensory stimuli produce separate activations in each sensory channel, and the behavioral response is elicited by the fastest signal. There is evidence that response times can be influenced by response competition (Fournier & Eriksen, 1990); however, for race model analysis we assume context invariance, such that the response time distribution for a given input is the same in unisensory and multisensory conditions. We do not, however, assume that the processes responding to signals from different channels are stochastically independent, in line with the recommendations of Miller (2016).
A hypothetical sum of the cumulative distribution functions (CDFs) of the component modalities is calculated at each decile of the response time distribution. This predicted value is what we would expect under separate activation. For each participant’s RTs in the bisensory conditions, we applied Ulrich, Miller, and Schröter’s (2007) race model inequality (RMI) algorithm in MATLAB and followed the guidelines of Gondan and Heckel (2008) and Gondan and Minakata (2015) for proper race model analysis, including the kill-the-twin procedure and a permutation test across multiple time points, which controls Type I error in correlated significance tests. For the trisensory measures relative to the unisensory and bisensory conditions, we adapted Ulrich et al.’s code in accordance with the guidelines detailed by Diederich and Colonius (2004) for race model analysis applied to a trisensory condition.
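A minimal sketch of the core bisensory comparison follows, assuming the simplified procedure described above: empirical CDFs evaluated at decile-spaced quantiles of the pooled RTs, with the multisensory CDF compared against the capped sum of the component CDFs. It deliberately omits the kill-the-twin correction and the permutation test, and all names are illustrative rather than drawn from Ulrich et al.'s published code:

```python
import numpy as np

def ecdf_at(rts, times):
    """Empirical CDF of `rts` evaluated at each value in `times`."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, times, side="right") / len(rts)

def race_model_difference(multi_rts, comp1_rts, comp2_rts, n_bins=10):
    """Race model difference at n_bins quantiles of the pooled RTs.

    Separate activation predicts F_multi(t) <= F_comp1(t) + F_comp2(t)
    (capped at 1). Positive differences, where the multisensory CDF
    exceeds that bound, are evidence against the race model.
    """
    pooled = np.concatenate([multi_rts, comp1_rts, comp2_rts])
    # evaluate at the centers of n_bins probability bins
    probs = (np.arange(n_bins) + 0.5) / n_bins
    times = np.quantile(pooled, probs)
    bound = np.minimum(ecdf_at(comp1_rts, times)
                       + ecdf_at(comp2_rts, times), 1.0)
    return ecdf_at(multi_rts, times) - bound
```

In a full analysis, the positive differences at each quantile would then be submitted to the permutation test described above rather than interpreted individually.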
The race model difference (RMD) was determined by comparing RT distributions for the trisensory condition to bisensory and unisensory conditions separately, and for the bisensory conditions to their component unisensory conditions. This resulted in five measures that mirrored those used in the MREs: AV, SV, AS, Tri/Uni, and Tri/Bi. Race model differences greater than zero are taken as statistical evidence against separate activation and in support of coactivation.
Interchanging x, y, and z for the three bisensory conditions AV, AS, and SV yields three inequalities in the form of Inequality 3 for the Tri/Bi measure, which also is referred to as sum bisensory - unisensory. The Tri/Bi upper bound for separate activation is the minimum of the three values at each quantile.
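The Tri/Bi bound can be sketched the same way. In the illustration below (again using our own names, not the study's code), each of the three inequalities sums one bisensory CDF with the CDF of the remaining unisensory condition, and separate activation predicts that the trisensory CDF stays below the minimum of the three at every quantile:

```python
import numpy as np

def ecdf_at(rts, times):
    """Empirical CDF of `rts` evaluated at each value in `times`."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, times, side="right") / len(rts)

def tri_bi_difference(asv, av, as_, sv, a, v, s, n_bins=10):
    """Trisensory race model difference against the Tri/Bi bound.

    Each bisensory-plus-remaining-unisensory sum gives one upper bound
    (the three forms of Inequality 3 with x, y, and z interchanged);
    the Tri/Bi bound is their minimum at each quantile.
    """
    pooled = np.concatenate([asv, av, as_, sv, a, v, s])
    probs = (np.arange(n_bins) + 0.5) / n_bins
    times = np.quantile(pooled, probs)
    bounds = np.minimum.reduce([
        ecdf_at(av, times) + ecdf_at(s, times),   # F_AV + F_S
        ecdf_at(as_, times) + ecdf_at(v, times),  # F_AS + F_V
        ecdf_at(sv, times) + ecdf_at(a, times),   # F_SV + F_A
    ])
    return ecdf_at(asv, times) - np.minimum(bounds, 1.0)
```

Positive values of this difference indicate that the trisensory distribution is faster than any combination of bisensory and unisensory racers can jointly explain.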
In performing the race model analysis, we applied 10 divisions to the CDF across all subjects and groups instead of allowing this parameter to vary based on each subject’s minimum number of trials across conditions (Ulrich et al., 2007). This approach allowed us to retain all but one subject in race model analysis. Whether we used 10 divisions or each subject’s minimum trial count did not impact the general pattern of results.
Multisensory Response Enhancement (MRE)
Results of permutation test of race model violation across deciles 1 to 5
Participants’ data were rank ordered based on their maximum race model difference in the Tri/Bi measure across deciles. Figure 5 depicts each individual’s race model differences across all ten deciles, with violating deciles shaded in light brown and negative (nonviolating) deciles shaded in dark brown. Although we cannot formally reject the race model with the trisensory data, owing to the problem of combining three stochastic variables (Joe, 1997), comparing the degree of enhancement across deciles is highly informative. It is clear from Fig. 5 that some individuals showed more evidence of response enhancement than others. Even in the measure of trisensory relative to bisensory RT distributions (Tri/Bi, right panel), several participants exhibited trisensory enhancement in multiple deciles across much of the RT distribution. Such individual enhancements had been masked by the group CDFs shown in Fig. 4b.
To ascertain whether the same participants that showed trisensory enhancement also showed bisensory enhancement, we calculated each individual’s mean RMD across all deciles in the Tri/Bi measure and conducted a two-means cluster analysis using squared Euclidean distances of that value. Cluster 1 contained 12 participants, and Cluster 2 had 17 participants.
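Because each participant contributes a single scalar (the mean RMD across deciles), two-means clustering with squared Euclidean distance amounts to iteratively splitting participants around two running means. The following is a self-contained one-dimensional sketch of that procedure (our own implementation for illustration, not the statistical software used in the study):

```python
import numpy as np

def two_means_1d(values, n_iter=100):
    """Two-cluster k-means on one scalar value per participant.

    values: each participant's mean race model difference (Tri/Bi).
    Returns (labels, centers). In one dimension, squared Euclidean
    distance is simply the squared difference from each cluster mean.
    """
    values = np.asarray(values, dtype=float)
    # initialize the two centers at the extremes of the data
    centers = np.array([values.min(), values.max()])
    for _ in range(n_iter):
        # assign each participant to the nearest center
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        # recompute centers; keep the old center if a cluster empties
        new = np.array([values[labels == k].mean()
                        if np.any(labels == k) else centers[k]
                        for k in (0, 1)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

With well-separated values, this converges in a few iterations; for example, participants with small mean RMDs fall into one cluster and those with large mean RMDs into the other.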
To determine whether cluster differences were due to a ceiling effect on RT, mean RTs in the unisensory, bisensory, and trisensory conditions were correlated with cluster assignment. No correlation approached significance.
The present experiment was designed to assess integration of suprathreshold, unisensory, bisensory, and trisensory events simultaneously presented on the same side of space. We used two established methods for evaluating the extent of response time enhancement due to multisensory integration: multisensory response enhancement and the race model inequality (Diederich & Colonius, 2004; Miller, 1982; Raab, 1962). Both analyses showed RT facilitation for trisensory relative to unisensory stimuli, but no facilitation for trisensory relative to bisensory stimulation at the group level. There was evidence of statistical facilitation in all bisensory conditions in the race model analysis, allowing rejection of the race model in favor of coactivation in those conditions. Statistical limitations of the three-body problem prevent us from making the same claim about trisensory stimuli (Joe, 1997). However, we can confirm that trisensory stimuli enhance response times, although perhaps not over and above the enhancement provided by bisensory stimuli.
One drawback of using a single suprathreshold intensity for each modality is that we cannot be sure that response times were equated across modalities, nor could we assess whether trisensory advantages would emerge if the component stimuli were difficult to detect. That is, was the somatosensory vibration perceived as comparable in intensity to the auditory tone or the visual Gabor patch? Because accuracy and RT were fairly similar across trial types, it appears the intensities were reasonably well matched. Nevertheless, future research should employ multiple intensities to test whether the principle of inverse effectiveness holds across individuals belonging to different clusters.
After further exploration of the data, we determined that not all participants integrated multisensory stimuli the same way. A two-means cluster analysis revealed that one group of participants (Cluster 1; N = 12) clearly benefited from the trisensory combination while the other group (Cluster 2; N = 17) did not. Cluster 1 participants’ RTs produced larger RMDs than in Cluster 2, most notably in AV and trisensory conditions. These findings suggest that group level data demonstrating multisensory effects might not reveal the whole picture of MSI (Stevenson et al., 2012). It therefore appears that only a minority of individuals may automatically benefit from redundant stimulation, at least when the stimuli involved are static, suprathreshold, and easy to detect. These findings have implications for our understanding of MSI and suggest that future research should attempt to replicate prospectively and to extend these findings beyond reaction time to examine whether gradated integration also is observed in human and animal neurophysiology.
Although our cluster analysis was exploratory, we can speculate about the factors that allow some people, but not others, to benefit from trisensory stimulation. One possibility comes from Hecht and Reiner (2009), who had participants detect uni-, bi-, and trisensory events that occurred simultaneously in the same location in space. There were no catch trials, so stimulation occurred on every trial, and trial type was blocked, such that all trisensory trials were contained within one block, along with ~80 % unisensory trials. Participants were instructed to press one, two, or three modality-specific buttons in response to stimulation on each trial. On trisensory trials, the most common error (5.5 % of all trials) was to respond with only two of the three response keys. Of these errors, the numerically most common choice was AV, though not significantly so. In the present study, the two trial types that produced the greatest difference between Clusters 1 and 2 were AV and ASV. Given the (slight) tendency to respond to trisensory trials with AV in Hecht and Reiner’s study, it is possible that our integrating participants were responding to AV in both trial types as if it were a unified sensation. Could the participants in Cluster 1 simply have more AV-specific neurons, allowing for enhancement in both the AV and ASV conditions? The participants who showed the most trisensory response enhancement also showed the least AS enhancement, perhaps because they needed the visual stimulus to activate their AV-specific neurons. This idea corresponds with findings that “trisensory” neurons in the SC of cats reflect certain neurons being responsive to different pairs of bisensory stimulation rather than being simultaneously trisensory (Stein & Meredith, 1993).
Individual differences might be most commonly expressed in children where systems and connections between them are still developing, making development an important focus of study in understanding MSI. In kittens, sensory-responsive neurons start out unimodal, but over time and with experience and environmental input, multisensory neurons develop until they account for over 60 % of sensory neurons (Wallace & Stein, 1997). Given a similar developmental trajectory in humans that relies on behavioral and environmental experience (Bahrick & Lickliter, 2012; Foxe et al., 2015), it would make sense that different individuals’ multisensory connections are at different stages of connectivity, yielding a range of multisensory enhancement. Thus, future studies of multisensory integration in children should carefully examine individual differences.
The present research contributes to our understanding of multisensory integration in individuals and at the group level. We hope that future studies will further assess the possible bipartite clustering of individuals into those who benefit from trisensory stimulation, and those who do not. Given the preliminary results presented, future multisensory research must consider that multisensory integration may not be as universal as was once thought.
We are indebted to an anonymous reviewer for carefully evaluating and improving our manuscript through multiple revisions. This project was made possible by grant support from the NIH (1R01MH101536–01 to NR). The authors declare no conflict of interest.
- Brandwein, A. B., Foxe, J. J., Russo, N. N., Altschuler, T. S., Gomes, H., & Molholm, S. (2011). The development of audiovisual multisensory integration across childhood and early adolescence: A high-density electrical mapping study. Cerebral Cortex, 21(5), 1042–1055. doi:10.1093/cercor/bhq170
- Calvert, G., Spence, C., & Stein, B. E. (2004). The handbook of multisensory processes. Cambridge: MIT Press.
- Colonius, H., & Diederich, A. (2006). The race model inequality: Interpreting a geometric measure of the amount of violation. Psychological Review, 113(1), 148.
- Foxe, J. J., Molholm, S., Del Bene, V. A., Frey, H.-P., Russo, N. N., … & Ross, L. A. (2015). Severe multisensory speech integration deficits in high-functioning school-aged children with autism spectrum disorder (ASD) and their resolution during early adolescence. Cerebral Cortex, 25(2), 298–312.
- Foxe, J. J., Wylie, G. R., Martinez, A., Schroeder, C. E., Javitt, D. C., … & Murray, M. M. (2002). Auditory-somatosensory multisensory processing in auditory association cortex: An fMRI study. Journal of Neurophysiology, 88(1), 540–543.
- Gomez-Ramirez, M., Higgins, B. A., Rycroft, J. A., Owen, G. N., Mahoney, J., Shpaner, M., & Foxe, J. J. (2007). The deployment of intersensory selective attention: A high-density electrical mapping study of the effects of theanine. Clinical Neuropharmacology, 30(1), 25–38. doi:10.1097/01.WNF.0000240940.13876.17
- Gondan, M., & Minakata, K. (2015). A tutorial on testing the race model inequality. Attention, Perception, & Psychophysics, 1–13.
- Joe, H. (1997). Multivariate models and multivariate dependence concepts. CRC Press.
- Kayser, C., Petkov, C. I., Remedios, R., & Logothetis, N. K. (2012). Multisensory influences on auditory processing: Perspectives from fMRI and electrophysiology.
- Laurienti, P. J., Perrault, T. J., Stanford, T. R., Wallace, M. T., & Stein, B. E. (2005). On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies. Experimental Brain Research, 166(3–4), 289–297. doi:10.1007/s00221-005-2370-2
- Mollon, J. D., & Perkins, A. J. (1996). Errors of judgement at Greenwich in 1796. Nature.
- Spence, C., & Driver, J. (2004). Crossmodal space and crossmodal attention. Oxford University Press.
- Stein, B. E., Stanford, T. R., Ramachandran, R., Perrault, T. J., Jr., & Rowland, B. A. (2009). Challenges in quantifying multisensory integration: Alternative criteria, models, and inverse effectiveness. Experimental Brain Research, 198(2–3), 113–126. doi:10.1007/s00221-009-1880-8
- Stone, J., Hunkin, N., Porrill, J., Wood, R., Keeler, V., … & Porter, N. (2001). When is now? Perception of simultaneity. Proceedings of the Royal Society of London B: Biological Sciences, 268(1462), 31–38.
- Todd, J. W. (1912). Reaction to multiple stimuli. Science Press.