
1 Introduction

The human somatosensory system is a complex sensory channel composed of many different subsystems, including the skin, vestibular system, tendons, muscles, and joints, together with the cognitive processes behind the perception of and response to physical sensations. The sensations available to the body include temperature, vibration, balance, position, pain, and skin stretch, arising from the skin, organs, joints, and muscles. These sensations represent a lexicon of physical perceptions that can be considered the language of the body. Yet while the body can detect so many different types of stimuli, it remains underused as a communication channel for computer displays, even as sensory overload grows from the increasing number of devices competing for our auditory and visual attention. To address this problem, physical displays are reviewed and categorized along four axes: application, physiology, technology, and psychology. This multidisciplinary approach to understanding interfaces for the physical and somatic (touch) senses may also provide the groundwork for a framework that supports greater cohesion across the disciplines, improving the usability, accessibility, and development of display systems for the body.

2 Background

The concept of physical interactions refers to both input and output devices and interfaces that communicate through the body. In this work, the focus is on display systems that communicate to the body in the same way audio and visual displays present information to the ears and eyes.

It has been close to 100 years since the first sensory substitution system was developed to convert sounds into vibrations to help deaf children understand speech [1]; yet physical display systems remain primarily research tools. Classifications, frameworks, and toolkits can offer guidelines and help make sense of specific areas of touch interaction [4–6], but the discipline remains fragmented, dispersed across a vast network of researchers, systems, and devices that do not quite meet the requirements to reach the consumer electronics market.

The study of display interfaces for the body draws on knowledge from different fields, including physiology, psychology, computer science, and engineering. The early work of E.H. Weber and Fechner, who pioneered research on the somatosensory system and, in particular, the somatic senses, or sense of touch, revealed much of what is currently understood about mechanoreceptors, psychophysics, and the different sensations the body can detect. More recently, work by haptics researchers such as Klatzky and Lederman has moved the field forward considerably [1, 3]. However, given the complexities of the sensory system, along with the perceptual, physical, and processing demands required to develop and use physical displays, it is not surprising that so few novel systems have made their way into everyday computing displays and interactions.

2.1 Making Sense of the Sense of Touch

An important problem to address is the disconnect that exists between the different disciplines, and the need for a better understanding of these areas in order to improve the interfaces, applications, and devices that support physical interactions. While the sense of touch is the most commonly referenced sense for physical interactions, tactile sensations relate primarily to the cutaneous system and the receptors of the skin [2]. In contrast, haptic sensations are the combined sense of both the tactile and kinesthetic senses [3]. The human “feel” sense actually consists of three main senses, which are often difficult to distinguish: the tactile or somatic senses, the kinesthetic sense, and the haptic sense. This is an oversimplified representation of the physical senses; however, it outlines the sensations that can be leveraged for physical display interactions.

Touch. The tactile senses, or “touch senses,” are based on the cutaneous system and the mechanoreceptors located in the skin [2]. There are two types of skin on the human body: glabrous (non-hairy) and non-glabrous (hairy). Glabrous skin is found mainly along the ventral areas of the fingers, hands, feet, lips, and genitalia, and is more sensitive to stimuli than the non-glabrous skin covering the rest of the body [2, 3]. Tactile receptors include mechanoreceptors (pressure, vibration, stretch), thermoreceptors (temperature), and nociceptors (pain), and their sensations are primarily experienced through a ‘passive’ form of stimulation [8]. These receptors cover the skin, epithelia, skeletal muscles, bones, joints, internal organs, and cardiovascular system [2, 3]. Sensations depend on rapidly adapting and slowly adapting mechanoreceptors (RA I, SA I, RA II, SA II) embedded in the skin. The touch sense has been identified with the Pacinian corpuscles (the RA II receptors), but is in fact known to be associated with all of the glabrous skin afferents, including the Meissner corpuscles, Ruffini corpuscles, and the Merkel cell complexes.
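
As a quick reference, the sketch below summarizes these four channels in code. The receptor-to-channel mapping follows the description above; the frequency ranges are approximate values commonly cited in the haptics literature and vary between sources.

```python
# A minimal lookup of the four canonical mechanoreceptive channels of
# glabrous skin. Frequency ranges are approximate and vary between sources.
MECHANORECEPTOR_CHANNELS = {
    "SA I":  {"ending": "Merkel cell complex", "adapts": "slow",
              "senses": "pressure, fine spatial detail", "freq_hz": (0.4, 3)},
    "RA I":  {"ending": "Meissner corpuscle", "adapts": "rapid",
              "senses": "flutter, slip", "freq_hz": (3, 40)},
    "SA II": {"ending": "Ruffini corpuscle", "adapts": "slow",
              "senses": "skin stretch", "freq_hz": (15, 400)},
    "RA II": {"ending": "Pacinian corpuscle", "adapts": "rapid",
              "senses": "high-frequency vibration", "freq_hz": (40, 500)},
}

def channels_for_frequency(freq_hz: float) -> list[str]:
    """Return the channels whose nominal band contains freq_hz."""
    return [name for name, ch in MECHANORECEPTOR_CHANNELS.items()
            if ch["freq_hz"][0] <= freq_hz <= ch["freq_hz"][1]]

# e.g. a 450 Hz vibration mainly targets the Pacinian (RA II) channel
print(channels_for_frequency(450.0))  # ['RA II']
```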

Kinesthesia. Kinesthesia, or “body sense,” is the result of a combined awareness of the movement, position, and orientation of parts of the human body. The positions of the limbs and their movements are perceived as a single unified percept despite originating in diverse sensory sources such as joint loads, muscle stretch, skin stretch, and the vestibular balance and movement detection system. Kinesthesia includes the operation of mechanoreceptors and proprioceptors (position, movement, tissue compression) that are sensitive to forces in the skin, muscles, tendons, and joints, which are interpreted in conjunction with knowledge of efference (outgoing motor signals), visual feedback, and muscle stretch receptors. Kinesthesia and proprioception are both remarkable and poorly understood, as they involve considerable sensory fusion problems combined with the integration of numerous incompatible spatial and temporal coordinate systems. Hand space must somehow be mapped to end-of-arm location relative to head coordinates, which in turn must be related to view-centered visual coordinates and vestibular acceleration information. Moreover, the time course of information from the various sensors differs, making sensor fusion difficult. It is known that some visual information is required for accurate calibration of kinesthesia, and hence for localizing the position of a touch sensation.
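
As a toy illustration of this coordinate-mapping problem, the sketch below composes 2D homogeneous transforms to re-express a hand position, sensed through joint angles, in head-centered coordinates. The link lengths and angles are made-up values, not anatomical data.

```python
import numpy as np

def transform(theta_rad: float, dx: float, dy: float) -> np.ndarray:
    """2D homogeneous transform: rotate by theta, then translate by (dx, dy)."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0,  0,  1]])

# Chain: head -> shoulder -> elbow -> hand, with angles from joint receptors.
# All offsets (metres) and angles here are arbitrary illustrative values.
head_to_shoulder = transform(0.0, 0.20, -0.15)
shoulder_to_elbow = transform(np.deg2rad(30), 0.30, 0.0)   # upper arm
elbow_to_hand = transform(np.deg2rad(45), 0.25, 0.0)       # forearm

hand_origin = np.array([0.0, 0.0, 1.0])  # the hand's own origin, homogeneous
hand_in_head_frame = head_to_shoulder @ shoulder_to_elbow @ elbow_to_hand @ hand_origin
print(hand_in_head_frame[:2])  # hand location in head-centered coordinates
```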

Haptics. Haptic perception involves the active gathering of information about objects outside the body through a combination of the tactile and kinesthetic senses. Active perception requires exploratory movement control to communicate shape, texture, form, edge detection, force, or motion [2, 8, 9]. Active perception, like moving the fingertips along a surface to determine its texture, is a different sensation from passive perception, which receives information that can be detected without exploration, such as temperature or physical motion. Haptic perception incorporates sensations from the proprioceptors and mechanoreceptors, as well as the vestibular system (balance), and so represents a broader range of potential interaction opportunities.

When a receptor is activated by a device or stimulus, the resulting depolarization of sensory nerve endings initiates an action potential that travels along the spinal cord and triggers neurological processing, initially in the parietal lobe. There are also secondary processing centers to consider, such as the cerebellum, which contributes to our sense of balance, in addition to the somatosensory processes within the parietal lobe. When designing physical displays, the literature and related works should be consulted to ensure a strong grasp of the physiological factors relevant to the specific physical interface intended to support an application [25].

2.2 Somatosensory Displays and Systems

While the existing human-machine interface (HMI) paradigm relies principally on visual feedback, often supported by sound, tactile or haptic displays are typically restricted to physical interactions with specific input devices, such as feeling a mouse, a control wheel, or basic vibratory stimulators. This leaves the skin under-utilized as a potential carrier channel for information, while the lack of articulatory feedback [10] makes interaction more difficult. To clarify the different types of touch-based displays in terms of the sense of touch and the somatosensory system, a brief review of somatosensory-based displays is presented, organized according to the three main somatic senses: touch, kinesthesia, and haptics.

Touch displays are commonly designed to deliver messages to the body, ranging from simple notifications [11, 12] to complex images or sounds [13, 14]. Typically, the complexity of the information is reflected in the number and positioning of the tactile actuators. For example, a tactile-vision sensory substitution system (TVSS) uses an array of 400 transducers to map images onto the back [15], while a notification in a mobile phone may use only a single vibration motor. Sensations include vibrotactile, electrotactile, and ultrasonic stimulation, or gusts of air [16–18]. The fingers, back, arm, and tongue are the most common sites for touch-based displays, although almost any part of the body can be targeted for displaying information, given technology with the correct display characteristics.
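
As an illustration of the single-motor end of this spectrum, the sketch below encodes notifications as lists of (intensity, duration) segments. The `set_motor` call is a hypothetical stand-in for whatever driver the actual hardware exposes.

```python
import time

def set_motor(intensity: float) -> None:
    """Hypothetical driver hook: set vibration intensity in [0.0, 1.0]."""
    print(f"motor intensity -> {intensity:.1f}")  # replace with a real driver call

# Patterns as (intensity, duration_s) segments; values are illustrative.
URGENT = [(1.0, 0.15), (0.0, 0.10)] * 3   # three short, strong pulses
GENTLE = [(0.4, 0.40), (0.0, 0.30)] * 2   # two long, soft pulses

def play(pattern: list[tuple[float, float]]) -> None:
    for intensity, duration in pattern:
        set_motor(intensity)
        time.sleep(duration)
    set_motor(0.0)  # always end with the motor off

play(URGENT)
```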

Kinesthetic displays are costly and not accessible to everyone; however, they provide the user with a sense of motion and position that is essential to full-body immersion or simulation. Movement, actuation, and orientation give the user a physical experience that replicates some of the sensations one would expect from driving, flying, or riding a roller coaster. Coupled with a VR display and stereo sound, such systems can achieve a remarkable degree of realism in a virtual experience. However, size and cost limit their use, as does the need for content that can provide a realistic and effective experience for users to adopt.

Haptic displays are also used in feedback, notification, and entertainment applications, leveraging both the kinesthetic and cutaneous senses [19–21]. Haptic displays are commonly used in remote operation environments [5, 22], remote object detection and manipulation [23], and virtual and augmented reality [19]. Devices such as the Phantom Omni 6-DOF haptic device [24] create physical sensations that enable remote and virtual objects to display their physical characteristics to the user, but are primarily used as research tools to support continuous or force-varying virtual haptic displays. Disney Research has demonstrated a haptic device driven by gusts of air [18], but like many of these technologies, it is either too expensive or still in the research phase.

3 A Framework for Somatosensory-Based Display Systems

To promote clarity in the study and development of physical display systems and interactions, a framework is proposed that attempts to quantify and qualify the diverse interaction technologies, applications, and physiological and psychological factors within this field. One approach is to organize interfaces into categories that highlight critical parameters supporting the effective design and development of physical interactions and display systems. Critical parameters serve as useful tools for cross-domain comparisons and evaluations of applications, designs, performance, and technologies [26]. The categories proposed in this paper (Fig. 1) represent a high-level structure on which to build a framework that links the critical parameters from the different disciplines to the system being designed or evaluated, and provides a method for applying the parameters across different systems and applications.

Fig. 1. Example categories and critical parameters for a perspective on somatosensory-based interactions.
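
To make this high-level structure concrete, the sketch below represents the four categories and some critical parameters as a small data model. The category names follow Fig. 1; the individual parameter names and weights are illustrative placeholders, not a definitive schema.

```python
from dataclasses import dataclass, field

@dataclass
class CriticalParameter:
    name: str
    value: str = ""        # e.g. "back", "vibrotactile", "notification"
    weight: float = 1.0    # importance of this parameter to the application

@dataclass
class Category:
    name: str
    parameters: list[CriticalParameter] = field(default_factory=list)

# The four proposed categories, each holding example critical parameters.
framework = [
    Category("Application", [CriticalParameter("goal"), CriticalParameter("function")]),
    Category("Physiology",  [CriticalParameter("body location"), CriticalParameter("receptors")]),
    Category("Technology",  [CriticalParameter("actuator type"), CriticalParameter("cost")]),
    Category("Psychology",  [CriticalParameter("cognitive load"), CriticalParameter("learnability")]),
]
```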

3.1 Application

From the application perspective, the goals, functional requirements, and expected outcomes of using a physical display serve as high-level critical parameters that define the interaction scenario. Goals may include communication, simulation, or augmentation, and within each of these parameters, further requirements and expectations that define the application can be derived. The goals and specific functions can be determined, evaluated, and compared using each critical parameter, allowing systems to be assessed on the same axes. Communication may include sensory substitution, notification, or feedback as factors, with each critical parameter providing a clearer set of metrics for developing and designing applications that use physical displays (Fig. 2). Further levels can be added to drill down into a more detailed set of requirements for the interaction. All parameters can be weighted for importance and flexibility, and should capture enough of the application requirements to support decisions in the other categories that best match the goals of the application interaction.

Fig. 2. Suggested critical parameters for the application factors.

3.2 Physiology

From the physiological perspective, critical parameters describe interaction requirements, constraints, and scenarios relating to the senses and the physical mechanics of presenting information to the right place on the body using the right kind of stimulus. Parameters such as body position, sensitivity, sense, and receptors represent some of the physical constraints to consider when designing the system interaction. In addition, understanding the interaction context of the application can determine the optimal placement of sensations on the body, and can influence the form factor and other ergonomic characteristics the display system should support. For example, desktop applications imply the user will be seated, while mobile devices may not. The physical state of the user during interaction will guide the placement, sensations, and form factors required to support the application goals. Critical factors will be based on the three types of physical sense and defined by the specific effect the application aims to produce for the user; an example is presented in Fig. 3.

Fig. 3. Examples of critical parameters to describe the physiological factors of physical interfaces.

3.3 Technology

Technology represents one of the most constraining factors within the framework, since it supports the application goals and drives the physical sensations. In a pure research environment it is possible to develop new devices that create sensations for the body, but it is always easier to work with systems that already exist. Most physical displays are built from existing motors, transducers, or contactors, assembled into arrays or other configurations to provide the desired physical stimulation. From an end user’s perspective, however, very few tactile, haptic, or kinesthetic systems are easily accessible, usable, and affordable. Outside of direct force-feedback systems, rumble pads, and vibration motors, most of the systems described in the literature are a long way from being available to consumers for everyday interactions (Fig. 4).

Fig. 4. An example of potential critical parameters based on the technology category.

Even after so many systems have been designed, researched, and deemed valuable, there remains a problem in getting them to the people who want and need them. For example, pin arrays, wayfinding systems, and tactile-vision display systems that could improve the lives of people with sensory disabilities are a long way from making a mark on the consumer electronics market. Simple devices such as vibration motors in gaming controllers are common, but they are a low-end approximation of the immersion and realism that touch-based systems have been shown to deliver in the literature.

3.4 Psychology

Perceptual processing, cognitive load, learning, and the emotional effects of using touch displays represent some potential critical parameters from a psychological perspective. The end goal is to create cognitive impact through the physical sensing mechanisms that produces the effect intended by the application interaction design. Complicated signals can be understood if presented to the right location using the right signal, as can simple single-point notifications that aim only to get the user’s attention. There are additional psychological factors at play, especially when multiple sensory modalities are combined, which can potentially lead to the creation of new sensations. Sensory substitution applications that aim to create a sense of an image through vibrations applied to a user’s body may or may not result in the same percept as seeing the image. Part of the development process therefore involves evaluating novel interfaces to determine what the user actually perceives, and how they interpret, map, and process a physical-cognitive sensation. This is a new research area with considerable potential: cross-modal perceptions could even be viewed as new senses, similar to the fusion of senses in haptic perception.

Similarly, multimodal interactions may also be developing new perceptual mechanisms in the brain [27–29], which represents an exciting area of research in tactile displays that needs to be better understood. Critical factors from psychology thus include sensation identification, multimodal integration effects, perceptual cognition, interpretation requirements, comprehension, identification, and emotional and affective responses [30]. There may also be relationships to the psychological processes involved in synesthesia, the crossed perception of modalities (as when sounds, or abstract entities such as numerals or days of the week, are perceived as having colour), or the perception of touch from observing others being touched, so-called mirror-touch synesthesia. These parameters can then be evaluated, measured, and validated using psychophysical methods and techniques, which provide an empirical means of studying users’ perceptual responses to touch-based sensations, in addition to self-report data or observations. The factors are shown in Fig. 5.

Fig. 5. Psychological factors relating to the design of physical interfaces.

Evaluating Touch Displays. Usability studies are essential for assessing the overall effects and affects of any interaction, and psychophysics provides the tools and methods for empirical evaluations of stimulus and response across different applications, technologies, and information. Psychophysics is the study of the quantitative relations between sensations and the stimuli that produce them. Many texts suggest methods and approaches for conducting psychophysical experiments, often based on the early work of Weber and Fechner [2, 3]. Many studies have already evaluated interactive physical displays [31, 32], and new approaches can be developed to support the expanding range of sensations and perceptions being explored in the literature. A few examples illustrating the structure of the framework and its categories and parameters are presented next.
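
As one concrete example of such a method, the sketch below implements a bare-bones one-up/two-down adaptive staircase, a standard psychophysical procedure, for estimating a vibrotactile detection threshold. The simulated observer and all numeric values are illustrative stand-ins for real trials with a participant.

```python
import random

def detected(amp: float, true_threshold: float = 0.30) -> bool:
    """Simulated observer: detection probability rises with amplitude."""
    return random.random() < min(1.0, max(0.0, 0.5 + (amp - true_threshold) * 2))

def staircase(start_amp: float = 1.0, step: float = 0.05, reversals_needed: int = 8) -> float:
    """One-up/two-down staircase: down after two 'yes' in a row, up after any 'no'.
    Converges near the ~70.7% detection point; returns the mean of reversal amplitudes."""
    amp, consecutive_yes, last_direction = start_amp, 0, None
    reversal_amps: list[float] = []
    while len(reversal_amps) < reversals_needed:
        if detected(amp):
            consecutive_yes += 1
            if consecutive_yes == 2:           # two detections in a row: step down
                consecutive_yes = 0
                if last_direction == "up":     # direction change = reversal
                    reversal_amps.append(amp)
                amp, last_direction = max(0.0, amp - step), "down"
        else:                                  # any miss: step up
            consecutive_yes = 0
            if last_direction == "down":
                reversal_amps.append(amp)
            amp, last_direction = amp + step, "up"
    return sum(reversal_amps) / len(reversal_amps)

print(f"estimated threshold: {staircase():.3f}")
```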

4 Applying the Framework

Sensory substitution applications represent one of the earliest examples of using the skin as a display technology. In 1929, Dr. Gault developed a five-channel system that converted audio into physical vibrations. The device influenced many systems that followed over the next century, including the tactile vocoder [17] and the Emoti-Chair [14]. Bach-y-Rita was one of the most innovative researchers in the field of touch-based displays, developing several influential sensory substitution systems, including the tactile-vision sensory substitution system (TVSS) [15], a tongue display [33], and finger displays [22]. Application goals may include the communication of detailed information, with high demands on cognitive processing and comprehension. The interaction context for ‘viewing’ images on the body may be best implemented in a chair form factor; however, different requirements will apply in a mobile context.
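
The general principle behind these vocoder-style systems can be sketched briefly: split the audio into a few frequency bands and drive one vibrator per band with that band’s energy envelope. The five-band split below mirrors the channel count mentioned above, but the band edges, filter order, and framing are illustrative choices, not the parameters of Gault’s system or the tactile vocoder.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16_000                                       # audio sample rate (Hz)
BAND_EDGES = [100, 300, 700, 1500, 3000, 6000]    # five illustrative bands

def band_envelopes(audio: np.ndarray, frame: int = 160) -> np.ndarray:
    """Return an (n_frames, n_channels) array of per-band RMS envelopes.
    Each value would set the intensity of one vibrator."""
    envs = []
    for lo, hi in zip(BAND_EDGES[:-1], BAND_EDGES[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfilt(sos, audio)
        n = len(band) // frame
        rms = np.sqrt(np.mean(band[: n * frame].reshape(n, frame) ** 2, axis=1))
        envs.append(rms)
    return np.stack(envs, axis=1)

tone = np.sin(2 * np.pi * 440 * np.arange(FS) / FS)   # 1 s test tone at 440 Hz
print(band_envelopes(tone).mean(axis=0))  # energy concentrates in the 300-700 Hz band
```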

Physical factors relating to the application goals include body location (a large area to support image data), receptors (mechanoreceptors), sensory system (cutaneous), skin type (glabrous, non-glabrous), and intensity (must support cutaneous stimulation). These will vary, but ultimately, thinking along these lines can support decision matrices and other methods that allow different combinations of parameters to be compared and evaluated, supporting more informed design decisions and better displays. Tactile stimulation is likely the best option for providing detailed information to the body, as a high level of resolution is required to translate detailed image information onto the body.
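
As a minimal sketch of such a decision matrix, the example below scores three candidate body locations against weighted physical parameters. The criteria, weights, and 1–5 scores are made-up values that a designer would supply per application.

```python
# Weights express each criterion's importance to this particular application.
CRITERIA = {"spatial resolution": 0.4, "available area": 0.3,
            "social acceptability": 0.2, "ease of mounting": 0.1}

# Illustrative 1-5 scores for each candidate location.
CANDIDATES = {
    "back":      {"spatial resolution": 2, "available area": 5,
                  "social acceptability": 4, "ease of mounting": 3},
    "fingertip": {"spatial resolution": 5, "available area": 1,
                  "social acceptability": 3, "ease of mounting": 4},
    "tongue":    {"spatial resolution": 5, "available area": 2,
                  "social acceptability": 1, "ease of mounting": 2},
}

def score(candidate: dict[str, int]) -> float:
    """Weighted sum of a candidate's scores across all criteria."""
    return sum(candidate[c] * w for c, w in CRITERIA.items())

for name, vals in CANDIDATES.items():
    print(f"{name:10s} -> {score(vals):.2f}")
print("best location:", max(CANDIDATES, key=lambda n: score(CANDIDATES[n])))
```

With these particular made-up weights, the large but low-resolution back outscores the fingertip, consistent with the back-mounted TVSS example; shifting the weight toward spatial resolution would reverse the ranking.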

Changing the application parameters has led to a variety of alternative systems based on the TVSS, which explored the abdomen, thigh, and fingertips as display locations [22]; these share common physical factors that enabled the solenoids to be used on different parts of the body. In another scenario, the tongue was considered as a location for the TVSS, which led to a different device, namely electrotactile stimulators arranged in a 49-point, 1.8 × 1.8 cm array, showing results similar to those of the TVSS system for the back [33].

In summary, all of the categories used to determine critical factors for any physical display system must be considered both individually and as a whole. Each factor must reflect the goals and requirements that motivate the use of physical displays, and will vary across different scenarios and interactions. The categories can also be applied to different systems in general, facilitating research, development, design, and evaluation activities in physical displays. This applies to the current push to bring VR displays to the forefront of consumer products with the introduction of devices such as the Oculus Rift, where the need for greater immersion and realism can be addressed by viable displays that give the user a physical enhancement to the audio and visual experience.

5 Conclusions

Through the development of frameworks, different systems can be organized, understood, and compared, supporting a more cohesive community of researchers exploring the domain of touch-based display interactions. Using the same factors, we can compare different applications, technologies, and implementation techniques, enabling a science of touch-based interactions to emerge. The goal of this paper is to stimulate discussion that will contribute to an evolving framework serving researchers and developers working in the field of touch-based displays, and potentially help bring these technologies to a more prominent position in our everyday computing interactions.

Perhaps it is the expense and size of tactile devices that hold back uptake, perhaps the limited understanding of the somatosensory system, or even poor application choices. In any case, the research suggests that there is interest in, and valid application cases for, the sense of touch as an information display. From this discussion, the expectation is that more touch-based devices, systems, and interactions will make their way into the public domain, supporting a more seamless and effortless move out of the research labs and into the hands and bodies of those who need and want to leverage the body as a new medium of communication for computer interactions.