Abstract
This paper reviews theoretical research and projects in data representation that use different sensory modalities, embodiment, physical objects, and immersive environments. Other topics include the impact of cross-modal perception on data representation and the role audiovisual aesthetics play in the interpretation of data. Research has shown that cross-modal perception enhances the processing of sensory stimuli. Sound, touch, gesture, and movement engage the user and create holistic environments that provide multi-dimensional representations of complex data relationships. These data representations include data sculptures, ambient displays, and multisensory environments that use our intuitive abilities to process information from different sensory modalities. By using multiple senses, it is possible to increase the number of variables and relationships that can be represented simultaneously in complex data sets.
1 Introduction
There are numerous visualization tools that make it possible to represent complex data sets using two-dimensional diagrams, animations, and virtual models. Interactive functions allow users to filter, sort, and compare different sets of variables to highlight specific relationships. Microsoft Excel, Tableau, and Google Chart Tools and Fusion Tables are just a few of the tools users can access to visualize and share data. Programming environments for data representation include Processing (http://processing.org/), which designers and artists favor for creating animated visualizations that exist outside the browser, and D3.js (http://d3js.org/), a JavaScript library for creating web-based interactive data visualizations that was launched in 2011 by the Stanford Visualization Group.
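At the core of tools such as D3.js is the idea of a "scale": a function that maps data values into visual coordinates. The following sketch (plain Python, not D3 itself; the domain and range values are illustrative assumptions) shows this core idea in a minimal form.

```python
# Minimal sketch of a linear scale, the basic building block that
# visualization libraries use to map data values to pixel coordinates.
# The domain and range values below are illustrative, not from any tool.

def linear_scale(domain, range_):
    """Return a function mapping values from a data domain to a pixel range."""
    d0, d1 = domain
    r0, r1 = range_
    def scale(value):
        t = (value - d0) / (d1 - d0)  # normalize the value to [0, 1]
        return r0 + t * (r1 - r0)     # interpolate into the pixel range
    return scale

# Map data values in [0, 100] onto a 500-pixel-wide chart.
x = linear_scale((0, 100), (0, 500))
print(x(50))  # the midpoint of the data maps to the midpoint of the chart
```

Interactive filtering and sorting, as described above, then amount to re-running such mappings over different subsets of the data.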
However, three-dimensional data visualizations that incorporate tactile objects, physical spaces, and blended spaces (that integrate virtual and physical data representations) can enhance our understanding of data relationships by tapping into our intuitive abilities to process data by using multiple senses. These representations use symbolic, iconic, and indexical references to data which may be defined by different sensory modalities [1]. Three-dimensional models incorporate interaction, kinesthetic design, embodiment, cross-modal perception, and multimodal semantic structures that define a new type of information aesthetic.
2 Three-Dimensional Data Representation
There are several forms of three-dimensional data representations that incorporate physical objects or physical space. Data sculptures are data-based, physical objects that signify data relationships [2]. They can range from three-dimensional extensions of two-dimensional graphs to unique abstract forms and metaphorical representations.
Research has shown that external representations can enhance our understanding of numerical tasks [3]. Vande Moere and Patel [4] demonstrated that physical data sculptures create dynamic narratives that illustrate process as well as outcomes. Data sculptures can represent quantitative relationships and qualitative information such as emotion and context. Physical materials or objects can represent literal connections with the data variables. For example, one of the data sculptures cited by Vande Moere and Patel [4] uses different types of cables (electric, electronic, headphone, phone, coaxial, and network cables) to construct a physical timeline that represents an individual’s daily activities that use cables (pp. 10–11).
In interaction design, interfaces that use tangible connections to the physical world engage the senses and augment the learning experience [5, 6]. Dourish [6] noted that interaction with physical objects enhances cognition because tangible computing “is a physical realization of a symbolic reality, and the symbolic reality is, often, the world being manipulated.” [p. 207]. Tangible interface designs can be applied to three-dimensional models and metaphorical references for data representation. For example, haptic interfaces can use inertia, force, torque, vibration, texture, and temperature to represent data variables and relationships in the physical world. Haptic interfaces enable users to interpret spatial relationships through the sense of touch. Palmerius [7] pointed out that “our sense of touch and kinesthetics is capable of supplying large amounts of intuitive information about the location, structure, stiffness and other material properties of objects” (p. 154).
Three-dimensional virtual models can be integrated into the surrounding physical space, allowing users to move in and around data representations projected into the environment. These environments may include ambient displays that turn elements in the surrounding architectural space, including physical objects, gases, and liquids, into “interfaces” that represent data [8]. Ambient displays communicate specific details as well as general information about the data variables and relationships. Data can be represented by different forms of sensory stimuli, creating multiple levels of perception that lead to alternative perspectives and a holistic understanding of the information. Visual data representations can be augmented by auditory displays. For example, weather data might be enhanced by ambient sounds of rain or wind that reflect the force and velocity of these elements. The temperature of the room can mirror the actual outside temperature. With ambient displays, users can employ multiple senses to analyze relationships that might otherwise be missed [8].
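The weather scenario above can be sketched as a simple routing of data variables to ambient sensory channels. The channel names, scaling thresholds, and sample values below are illustrative assumptions, not drawn from any cited system:

```python
# Hypothetical sketch: routing weather variables to ambient sensory
# channels (wind -> sound volume, rain -> soundscape density, outside
# temperature -> room temperature). All names and thresholds are
# illustrative assumptions.

def map_to_ambient_channels(weather):
    """Map weather data to ambient-display channel levels."""
    channels = {}
    # Wind speed drives the volume of an ambient wind sound (0.0-1.0),
    # saturating at an assumed maximum of 30 m/s.
    channels["wind_sound_volume"] = min(weather["wind_mps"] / 30.0, 1.0)
    # Rain intensity drives the density of a rain soundscape,
    # saturating at an assumed 20 mm/h.
    channels["rain_sound_density"] = min(weather["rain_mm_per_h"] / 20.0, 1.0)
    # The room temperature simply mirrors the outside temperature.
    channels["room_temp_c"] = weather["outside_temp_c"]
    return channels

display = map_to_ambient_channels(
    {"wind_mps": 15.0, "rain_mm_per_h": 5.0, "outside_temp_c": 18.0})
print(display)
```

The threshold question raised below, where ambient background data transitions into foreground data, would in such a sketch correspond to deciding when a channel level is salient enough to demand attention.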
However, the use of many different media and types of data representation can be distracting and overload the user with too much information. Current research is investigating the thresholds for ambient data designs to determine when there are too many media and data representations and how these thresholds transition from background (ambient) data to foreground data during different tasks [8].
Physical and virtual three-dimensional representations of data also provide another axis for mapping relationships, including dynamic changes over space and time. Three-dimensional models generate alternative perspectives and angles for viewing information. These different perspectives can highlight unexpected data relationships that might not be visible with two-dimensional representations.
Ameres and Clement [9], researchers at Rensselaer Polytechnic Institute (Troy, NY), have developed a unique three-dimensional computing interface called Campfire that allows a small group of users to collaborate on information analysis. The platform is a three-dimensional projection device, about six feet in diameter and two feet high, that allows participants to view data projected onto the walls and flat circular floor of the device (Fig. 1). Additional information can also be projected onto the walls in the room that houses the device. The goal is to expand the power of computers in collaborative decision-making by allowing users to intuitively share and manipulate data. Ameres [9] feels Campfire has the potential to enable users to “look inside the data” (para. 8) and expand data exploration beyond three-dimensional representations and traditional “one-to-one correlations between dimensionality and presentation” (para. 7).
3 Kinesthetic Design and Embodiment
Three-dimensional models invite interaction and exploration which can also lead to new insights about the data [4]. This type of interaction design, called kinesthetic design, helps the user understand the visual and cognitive relationships in the spatial representation of the information [10]. Berkeley [11] demonstrated that kinesthetic and tactile experiences shape our perception of space. Klemmer, Hartmann, and Takayama [5] noted that “our bodies play a central role in shaping human experience in the world, understanding of the world, and interactions in the world” (p. 140). When we physically interact with models or other tactile representations of data, we use reflective practice to work through ideas rather than just think about them [5].
Physical interaction is defined as an epistemic action that helps us understand relationships [12, 13]. Researchers have documented the significance of “drawing” relationships in physical space with hand and arm movement to clarify conceptual relationships and enhance memory and recall [14, 15]. Haptic interfaces and interactive hardware use physical movement to augment our understanding of information by leveraging “body-centric experiential cognition” [5, p. 144].
Vande Moere and Patel [4] used the term “embodiment” to describe the physical materialization of the data relationships in data sculptures. Embodiment also refers to the viewer’s interpretation of the data through the perception of the data in the physical world. Researchers have noted that we perceive information in relation to our orientation [16]. We intuitively learn about audio, visual, spatial, and temporal relationships by moving in physical environments and touching objects. Piaget [17] noted that logic and the cognitive processing of information are derived from physical and mental interaction, and it is the coordination of action that leads to reflective abstraction.
The cognitive semantics theory of conceptual metaphor states that logic and reasoning are founded on image schemas formed by “patterns of our bodily orientations, movements, and interaction” that we develop into abstract references [18, p. 90]. As a result, physical movement through space and interaction with tangible objects leads to symbolic representations and quantitative analyses [19, p. 2]. As we use gestures and objects, we gain new perspectives and see additional relationships based on our physical interaction with the objects. Abrahamson and Lindgren [19] noted that “we develop the skill of controlling and interpreting the world through the mediating artifact” (p. 4).
Gestures and bodily movements are also intuitive ways of learning and communicating because they constitute a universal visual language that is based on shared and tangible experiences [20]. LeBaron and Streeck [20] pointed out that gestures provide a bridge between tactile experiences and the abstract conceptualization of the experiences. They highlighted the work of the French philosopher Condillac who felt gestures “constituted the original, natural language of humankind” because they formed symbols and a social language based on common experiences [20, p. 118]. Condillac [21] called these symbols or signs sensations transformées or transformed sensations (p. 61) because they referred to “the entire complex of affect, desire, sensory perception, and motor action that makes up what nowadays we might call ‘embodied experience’” [20, p. 118].
Gestures can play an important role in kinesthetic design for multisensory data representation. Research has shown that gestures increase creativity [22], reduce cognitive overhead [23], and help us translate our experiences with objects into cognitive interpretations [24, 25]. We have already seen how interactive phones and tablets make use of our intuitive understanding of gesture to facilitate interaction with mobile devices and engage us in the communication process.
4 Cross-Modal Perception
Research has shown that we intuitively integrate stimuli from different sensory modalities. The multisensory integration of audio and visual stimuli is a physiological process that takes place within the neurons in the brain [26–28]. Researchers have identified enhanced activity in the visual cortex in congenitally blind people when they analyze speech [29], moving sounds [30], or localized sounds [31].
Research has shown that cross-modal perception heightens perceptual awareness and enhances our ability to process information from individual sensory modalities, whether the combinations of stimuli are organized or random [32–35]. Freides [36] concluded that perception that involves more than one sensory modality is more accurate than perception of information that is represented with one sense. This is especially true if the cross-modal perception involves the integration of visual or audio information with haptic and kinesthetic stimuli.
There has been extensive research on cross-modal perception that involves the integration of audio and visual stimuli. Research has shown that the perception of visual information is altered when sound is added to the visuals [37–40]. Vroomen and de Gelder [37] also demonstrated that the temporal organization of auditory stimuli impacts visual perception. A random high tone (in a sequence of low tones) improved the perception of a visual target when the tone and the visual stimuli were presented synchronously. However, there was no effect when the high tone was presented before the visual information. The effect was also reduced when there was less contrast between the high and low tones, and when the high tone was part of a melody.
Sound can enhance the detection of specific individual visual elements as well as improve the detection of motion [28, 37]. Beer and Watanabe [28] demonstrated that visual motion detection improved when sounds were paired simultaneously with the visual stimuli. Chen and Yeh [41] discovered that the addition of repetitive sounds to visuals alleviated “repetition blindness” which is the failure to perceive visuals that repeat in rapid succession.
Visual and auditory stimuli can also impact the perception of spatial location. Audio and visual stimuli that are synchronized, but exist in different spatial locations, may appear to come from the same location [42–45]. In addition, research has shown that visual and auditory stimuli that come from the same location seem to emanate from the same source if the visual stimuli precede the sound by 50 ms [46, 47]. Talsma, Senkowski, and Woldorff [48] concluded that this timing difference is due to the different velocities of light and sound, which have caused the brain to develop a higher neural transmission rate for auditory stimuli to compensate for the fact that sound reaches the auditory nerve approximately 50 ms after visual stimuli.
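As a back-of-the-envelope check on that ~50 ms figure, sound travels through air at roughly 343 m/s while light's travel time over room or field distances is negligible. The snippet below (assumed round-number values, purely illustrative) computes the audio lag as a function of source distance:

```python
# Illustrative arithmetic: how far away must a source be for its sound
# to arrive ~50 ms after its light? Assumes ~343 m/s for sound in air
# (about 20 degrees C) and treats light's travel time as zero.

SPEED_OF_SOUND_MPS = 343.0  # approximate speed of sound in air

def sound_delay_ms(distance_m):
    """Milliseconds by which sound lags light over a given distance."""
    return distance_m / SPEED_OF_SOUND_MPS * 1000.0

# A source roughly 17 m away produces about a 50 ms audio lag.
print(round(sound_delay_ms(17.15), 1))
```

This is only the physical component of the asymmetry; as Talsma, Senkowski, and Woldorff [48] argue, differences in neural transmission rates for the two modalities are what allow the brain to compensate for it.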
The different velocities of auditory and visual stimuli also impact the perception of time and whether or not sounds and visuals appear to be synchronized. There has been conflicting research in this area, with some research showing that the auditory stimuli must come first in order for sounds to appear to be simultaneous with visual stimuli [49], while other research indicated that the visual stimuli must come first [50–52]. These different findings suggested that other variables, in addition to velocity, impact how we perceive the temporal order and synchronicity of auditory and visual stimuli. Research has indicated that the relative intensities of sensory stimuli affect the perception of temporal order by showing that a stimulus with a higher intensity was perceived before a stimulus with a lower intensity [53]. Boenke, Deliano, and Ohl [54] confirmed that intensity plays a role in the temporal perception of auditory and visual stimuli. They further defined the temporal dynamics of auditory and visual stimuli by showing that the duration of a stimulus also impacts the perception of time, noting that asynchronies in the perception of multiple stimuli appear to be stabilized when the duration of the stimuli is increased [54].
Finally, Freides [36] noted that with complex spatial or temporal pattern recognition, the sensory modality used to represent the data is more critical than the contextual and parametric variables themselves because each modality processes information in a different way, and we automatically use the modality best suited to process variables that represent spatial, temporal, tactile, or kinesthetic relationships.
Research in cross-modal perception plays an important role in the design of multisensory data representations. By using multiple sensory modalities, it is possible to expand the number of data variables that can be represented simultaneously and increase the potential for discovering patterns, trends, anomalies, and outliers. Cross-modal stimuli can enhance the perception of visual and audio information, and they can impact the perception of spatial and temporal relationships. However, when different sensory modalities are used to represent multiple variables in a complex information space, the choice of media is not the only factor to consider. As indicated in the research, other important factors that impact perception include how and when the stimuli are introduced and the location, intensity, speed, and duration of the stimuli. Research has shown that random sounds can enhance the perception of visual information. However, in multisensory data design, the use of auditory stimuli to represent data may result in repetitive or recursive audio patterns, and it is not clear from the current research in cross-modal perception how repetitive or recursive patterns impact the perception of visual stimuli or the perception of temporal and spatial relationships in data sets.
5 Aesthetics of Data Representation
Aesthetics is another design element that impacts the interpretation of data representations [55, 56]. Information aesthetics refers to the way design is used to organize data and define relationships. Researchers have broadened the definition to refer to the user experience, engagement, and interaction with the data representations, as opposed to merely defining patterns and trends. This definition also highlights the narratives and underlying processes and principles represented by the data [4]. Information aesthetics is also defined by the database design and the way information is organized, filtered, and retrieved to form different associations [57].
Visual and audio designs create relationships that we perceive as “aesthetically pleasing” because they adhere to principles of design, defined by artists, designers, and musicians, that we have learned over time. Aesthetically pleasing designs define “good Gestalt” and use Gestalt principles of perception to help us simplify and organize information intuitively.
Information aesthetics, based on these design concepts, has been applied to graph theory and design [58] to improve the user’s ability to locate information, compare relationships, and complete tasks. With interactive systems, research has shown that the aesthetics of an interface design can impact user engagement, completion time, and error rate [59–61]. In these research experiments, the aesthetics of each design was defined by Gestalt laws of perception and grouping (similarity, proximity, continuation, closure, figure/ground), as well as established concepts in visual design theory that define how to use “harmonious” color palettes, contrast, focal points, balance, symmetry, and asymmetry. In some cases, an aesthetically pleasing information design or interface design did not yield the fastest time in task completion, but the visual appeal of the design encouraged the users to stay engaged and ultimately, complete the tasks [62].
However, multisensory data representation can result in unfamiliar audiovisual patterns that do not conform to established principles of design. Multisensory data representation and cross-modal perception are defining new dimensions in information aesthetics that impact the interpretation of data relationships. We have considerable experience reading linear and hierarchical charts, but as we explore new forms of data representation that combine different sensory modalities, physical and virtual spaces, ambient displays, haptic interfaces, and interaction design, we are defining new ways of using perception and cognition to analyze and interpret complex relationships. For example, with the Campfire example previously discussed, participants are presented with an open space in the center of the device that does not contain specific information. However, the space signifies connections between the data on the sides and bottom of the display. The participants can use this space to create cognitive connections between the physical and virtual representations of the data—connections that define additional dimensions that expand beyond two-dimensional data charts and the three-dimensional properties of the display itself.
Kinesthetic design in data representation is also defining new dimensions in the aesthetics of information design. In interactive sports simulators, where the participant performs specific physical motions (e.g., swinging a golf club, throwing/kicking a ball) to produce actions and events in the virtual game, the participant’s physical interaction promotes engagement and creates mental and physical connections with the information in the virtual space. We can apply these concepts from game design to interactive data representation and use embodiment, spatial movement/distance, rhythm, and time to define data relationships. Kinesthetic design adds sensory information to the user experience that augments the virtual representations of the data and creates a holistic approach to data analysis and interpretation.
In my research, I am designing interactive, multimedia art installations to explore new concepts in kinesthetic design and information aesthetics [1]. In the installations, participants interact simultaneously with two different computer programs and create dynamic visual patterns and sounds in the surrounding environment. The gestures and physical movements the participants make, as they move the interactive hardware to control the computer programs, create layers of visual patterns called “hyperplanes” that are at right angles to the virtual patterns displayed in the space (Fig. 2). Audio stimuli define additional hyperplanes as sounds penetrate the environment and immerse the viewer with sensory stimuli from different angles and directions. The hyperplanes create a counterpoint of audio, visual, and rhythmic patterns that define geometric grids of intersecting spatiotemporal planes that change as the user alters the variables in the data representations [1].
6 Future Directions
In three-dimensional, multisensory data representations, arrays of sensory stimuli and discursive patterns represent simultaneous and sequential relationships and events. Physical and virtual spaces, interactivity, and individual sensory modalities create a system of perceptual and semantic relational codes that define the data relationships.
Cross-modal perception can enhance and alter the way we interpret information that is represented with different sensory stimuli. It also impacts how we interpret spatial and temporal relationships. Research in cross-modal perception needs to expand into the field of multisensory data design and evaluate how different sensory stimuli, blended spaces, and kinesthetic design impact the interpretation of complex data relationships, including how we perceive the transformation of data relationships over time. The research needs to include studies on the perception of rhythm, which is an important element in data representation. In multisensory data design, layers of rhythms, created by the audio and visual stimuli and kinesthetic interaction, highlight the temporal dynamics in data relationships. Spence, Senkowski, and Röder [63] pointed out that current research in cross-modal perception seems to be shifting from a focus on spatial information processing to the impact of sensory modalities on the temporal processing of information. This new emphasis on the temporal dynamics of information processing will play a significant role in defining new directions for multisensory data design.
As new forms of data representation emerge, it will also be important to evaluate how new technologies and multisensory stimuli redefine information aesthetics. With interactive technologies, kinesthetic design and cross-modal perception will continue to add new dimensions to information aesthetics and expand the definition of “aesthetically pleasing” designs. These changes will, in turn, lead to even more innovative ways of representing data because we will no longer be constrained by established definitions of aesthetics and information design. We will be able to envision and develop technologies that not only leverage our intuitive abilities to process information through multiple senses, but also create interactive experiences that integrate virtual and physical objects, actions, and sensory stimuli into dynamic information spaces for data analysis.
References
Search, P.: New media perspectives for information and data design. In: Fabel, L., Spinillo, C., Tiradentes Souto, V. (eds.) Proceedings of the 7th Information Design International Conference, pp. 222–229. Brazilian Society of Information Design, Brasilia, Brazil (2015)
Zhao, J., Vande Moere, A.: Embodiment in data sculpture: a model of the physical visualization of information. In: Proceedings of the Conference on Digital Interactive Media in Entertainment and Arts (DIMEA 2008), pp. 343–350. ACM, Athens, Greece (2008)
Zhang, J., Wang, H.: The effect of external representations on numeric tasks. Q. J. Exp. Psychol. 58A(5), 817–838 (2005)
Vande Moere, A., Patel, S.: The physical visualization of information: designing data sculptures in an educational context. In: Huang, M., Nguyen, Q., Zhang, K. (eds.) Visual Information Communication (VINCI 2009), pp. 1–23. Springer, Sydney (2009)
Klemmer, S.R., Hartmann, B., Takayama, L.: How bodies matter: five themes for interaction design. In: Proceedings of Designing Interactive Systems (DIS 2006), pp. 140–148 (2006)
Dourish, P.: Where the Action Is: The Foundations of Embodied Interaction. MIT Press, Cambridge (2001)
Palmerius, K.L., Forsell, C.: The impact of feedback design in haptic volume visualization. In: Third Joint EuroHaptics Conference 2009 and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, World Haptics 2009, pp. 154–159. IEEE Press, New York (2009)
Wisneski, C., Ishii, H., Dahley, A., Gorbet, M., Brave, S., Ullmer, B., Yarin, P.: Ambient displays: turning architectural space into an interface between people and digital information. In: Yuan, F., Konomi, S., Burkhardt, H.-J. (eds.) CoBuild 1998. LNCS, vol. 1370, pp. 22–32. Springer, Heidelberg (1998)
Martialay, M.: Immersive experience: The Campfire. The Approach: Discovery, Innovation, and Imagination at Rensselaer Polytechnic Institute (November 18) (2015). http://approach.rpi.edu/2015/11/18/immersive-experience-the-campfire/
Search, P.: The metastructural dynamics of interactive electronic design. Visible Lang. Cult. Dimensions Vis. Commun. 37(2), 146–165 (2003)
Berkeley, G.: A New Theory of Vision and Other Writings. E. P. Dutton, New York (1922)
Kirsh, D., Maglio, P.: On distinguishing epistemic from pragmatic actions. Cogn. Sci. 18(4), 513–549 (1994)
Hollan, J., Hutchins, E., Kirsh, D.: Distributed cognition: toward a new foundation for human-computer interaction research. ACM Trans. Comput. Hum. Interact. 7(2), 174–196 (2000)
Fish, J., Scrivener, S.: Amplifying the mind’s eye: sketching and visual cognition. Leonardo 23(1), 117–126 (1990)
Fish, J.: The Cognitive Functions of the Sketch. Cheltenham & Gloucester College of Higher Education, CAD Centre internal Report, Cheltenham (1993)
Klatzky, R.L.: Allocentric and egocentric spatial representations: definitions, distinctions, and inter-connections. In: Freksa, C., Habel, C., Wender, K.F. (eds.) Spatial Cognition: An Interdisciplinary Approach to Representation and Processing of Spatial Knowledge. LNAI, vol. 1404, pp. 1–17. Springer, Berlin (1998)
Piaget, J.: The Origins of Intelligence in Children. International University Press, New York (1952)
Lakoff, G., Johnson, M.L.: Metaphors We Live By. The University of Chicago Press, Chicago (1980)
Abrahamson, D., Lindgren, R.: Embodiment and embodied design. In: Sawyer, R.K. (ed.) The Cambridge Handbook of the Learning Sciences, 2nd edn, pp. 357–376. Cambridge University Press, Cambridge (2014)
LeBaron, C., Streeck, J.: Gestures, knowledge, and the world. In: McNeill, D. (ed.) Language and Gesture, pp. 118–138. Cambridge University Press, Cambridge (2000)
Condillac, E.: An Essay on the Origin of Human Knowledge, Being a Supplement to Mr. Locke’s Essay on the Human Understanding. J. Noursse, London (1746)
Wang, Q., Nass, C.: Less visible and wireless: two experiments on the effects of microphone type on users’ performance and perception. In: Proceedings of the 23rd ACM SIGCHI Human Factors in Computing Systems, Portland, Oregon, pp. 809–818 (2005)
Goldin-Meadow, S., Nusbaum, H., Kelly, S.D., Wagner, S.: Explaining math: gesturing lightens the load. Psychol. Sci. 12(6), 516–522 (2001)
Goldin-Meadow, S., Beilock, S.L.: Action’s influence on thought: the case of gesture. Perspect. Psychol. Sci. 5(6), 664–674 (2010)
Kirsh, D.: Embodied cognition and the magical future of interaction design. In: Marshall, P., Antle, A.N., Hoven, E.V.D., Rogers, Y. (eds.) The Theory and Practice of Embodied Interaction HCI and Interaction Design (special issue). ACM Trans. Comput. Hum. Interact. 20(1), pp. 1–30 (2013)
Sams, M., Imada, T.: Integration of auditory and visual information in the human brain: neuromagnetic evidence. Soc. Neurosci. Abs. 23, 1305 (1997)
Driver, J., Noesselt, T.: Multisensory interplay reveals crossmodal influences on ‘sensory-specific’ brain regions, neural responses, and judgments. Neuron 57(1), 11–23 (2008)
Beer, A., Watanabe, T.: Specificity of auditory-guided visual perceptual learning suggests crossmodal plasticity in early visual cortex. Exp. Brain Res. 198(2), 353–361 (2009)
Röder, B., Stock, O., Bien, S., Neville, H., Rösler, F.: Speech processing activates visual cortex in congenitally blind humans. Eur. J. Neurosci. 16(5), 930–936 (2002)
Poirier, C., Collignon, O., Scheiber, C., Renier, L., Vanlierde, A., Tranduy, D., Veraart, C., De Volder, A.G.: Auditory motion perception activates visual motion areas in early blind subjects. Neuroimage 31(1), 279–285 (2006)
Weeks, R., Horwitz, B., Aziz-Sultan, A., Tian, B., Wessinger, C.M., Cohen, L.G., Hallett, M., Rauschecker, J.P.: A positron emission tomographic study of auditory localization in the congenitally blind. J. Neurosci. 20(4), 2664–2672 (2000)
Hershenson, M.: Reaction time as a measure of intersensory facilitation. J. Exp. Psychol. 63(3), 289–293 (1962)
Nickerson, R.S.: Intersensory facilitation of reaction time: energy summation or preparation enhancement? Psychol. Rev. 80(6), 168–173 (1973)
Posner, M.I., Nissen, M., Klein, R.M.: Visual dominance: an information-processing account of its origins and significance. Psychol. Rev. 83(2), 157–171 (1976)
Simon, J.R., Craft, J.L.: Effects of an irrelevant auditory stimulus on visual choice reaction time. J. Exp. Psychol. 86(2), 272–274 (1970)
Freides, D.: Human information processing and sensory modality: cross-modal functions, information complexity, memory, and deficit. Psychol. Bull. 81(5), 284–310 (1974)
Vroomen, J., de Gelder, B.: Sound enhances visual perception: cross-modal effects of auditory organization on vision. J. Exp. Psychol. Hum. Percept. Perform. 26(5), 1583–1590 (2000)
Mazza, V., Turatto, M., Rossi, M., Umiltà, C.: How automatic are audiovisual links in exogenous spatial attention? Neuropsychologia 45(3), 514–522 (2007)
McDonald, J.J., Teder-Sälejärvi, W.A., Hillyard, S.A.: Involuntary orienting to sound improves visual perception. Nature 407(6806), 906–908 (2000)
Spence, C., Driver, J.: Audiovisual links in endogenous covert spatial orienting. Percept. Psychophysics 59(1), 1–22 (1997)
Chen, Y., Yeh, S.: Catch the moment: multisensory enhancement of rapid visual events by sound. Exp. Brain Res. 198(2), 209–219 (2009)
Bertelson, P., Radeau, M.: Cross-modal bias and perceptual fusion with auditory-visual spatial discordance. Percept. Psychophysics 29(6), 578–584 (1981)
Vroomen, J., Bertelson, P., de Gelder, B.: A visual influence in the discrimination of auditory location. In: Proceedings of the International Conference on Auditory-Visual Speech Processing (AVSP 1998), pp. 131–135. Causal Productions, Sydney, Australia (1998)
Vroomen, J.: Ventriloquism and the nature of the unity assumption. In: Aschersleben, G., Bachmann, T., Müsseler, J. (eds.) Cognitive Contributions to the Perception of Spatial and Temporal Events, pp. 388–394. Elsevier Science, New York (1999)
Vroomen, J., Bertelson, P., de Gelder, B.: The ventriloquist effect does not depend on the direction of deliberate visual attention. Percept. Psychophysics 63(4), 651–659 (2001)
Lewald, J., Ehrenstein, W.H., Guski, R.: Spatio-temporal constraints for auditory-visual integration. Behav. Brain Res. 121(1–2), 69–79 (2001)
Lewald, J., Guski, R.: Cross-modal perceptual integration of spatially and temporally disparate auditory and visual stimuli. Cogn. Brain Res. 16(3), 468–478 (2003)
Talsma, D., Senkowski, D., Woldorff, M.: Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli. Exp. Brain Res. 198(2–3), 313–328 (2009)
Neumann, O., Niepel, M.: Timing of “perception” and perception of “time”. In: Kärnbach, C., Schröger, E., Müller, H. (eds.) Psychophysics Beyond Sensation: Laws and Invariants of Human Cognition, pp. 245–269. Erlbaum, Mahwah (2004)
Jaśkowski, P.: Simple reaction time and perception of temporal order: dissociations and hypotheses. Percept. Mot. Skills 82(3, Pt 1), 707–730 (1996)
Zampini, M., Shore, D.I., Spence, C.: Audiovisual temporal-order judgments. Exp. Brain Res. 152(2), 198–210 (2003)
Zampini, M., Shore, D.I., Spence, C.: Multisensory temporal-order judgments: the role of hemispheric redundancy. Int. J. Psychophysiol. 50(1), 165–180 (2003)
Neumann, O., Koch, R., Niepel, M., Tappe, T.: Reaction time and temporal-order judgment: correspondence or dissociation. Zeitschrift für Experimentelle und Angewandte Psychologie 39(4), 621–645 (1992)
Boenke, L.T., Deliano, M., Ohl, F.W.: Stimulus duration influences perceived simultaneity in audiovisual temporal-order judgment. Exp. Brain Res. 198(2–3), 233–244 (2009)
Lau, A., Vande Moere, A.: Towards a model of information aesthetic visualization. In: Proceedings of the IEEE International Conference on Information Visualization (IV 2007), Zurich, Switzerland, pp. 87–92 (2007)
Sack, W.: Aesthetics of information visualization. In: Lovejoy, M., Paul, C., Vesna, V. (eds.) Context Providers, pp. 123–150. Intellect, Bristol (2007)
Vesna, V.: Database Aesthetics: Art in the Age of Information Overflow. University of Minnesota Press, Minneapolis (2007)
Purchase, H., Allder, J.A., Carrington, D.: Metrics for graph drawing aesthetics. J. Vis. Lang. Comput. 13(5), 501–516 (2002)
Kurosu, M., Kashimura, K.: Apparent usability vs. inherent usability: experimental analysis on the determinants of the apparent usability. In: Conference Companion on Human Factors in Computing Systems, CHI 1995, Denver, Colorado, USA, pp. 292–293 (1995)
Ngo, D., Byrne, J.G.: Another look at a model for evaluating interface aesthetics. Int. J. Appl. Math. Comput. Sci. 11(2), 515–535 (2001)
Stasko, J., Catrambone, R., Guzdial, M., McDonald, K.: An evaluation of space-filling information visualizations for depicting hierarchical structures. Int. J. Hum.-Comput. Stud. 53(5), 663–694 (2000)
Cawthon, N., Vande Moere, A.: The effect of aesthetic on the usability of data visualization. In: Proceedings of the IEEE International Conference on Information Visualization (IV 2007), Zurich, Switzerland, pp. 637–648 (2007)
Spence, C., Senkowski, D., Röder, B.: Crossmodal processing. Exp. Brain Res. 198(2), 107–111 (2009)
© 2016 Springer International Publishing Switzerland
Search, P. (2016). Multisensory Physical Environments for Data Representation. In: Marcus, A. (ed.) Design, User Experience, and Usability: Technological Contexts. DUXU 2016. Lecture Notes in Computer Science, vol. 9748. Springer, Cham. https://doi.org/10.1007/978-3-319-40406-6_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-40405-9
Online ISBN: 978-3-319-40406-6