1 Introduction

Advances in mobile and ubiquitous technologies have opened countless possibilities for the growth of wearable computers. For research and the prototyping of new systems, and in contrast to off-the-shelf devices, composite wearables are assembled from multiple sensors (e.g., GPS, magnetometers), effectors (e.g., haptics), and potentially integrated devices (e.g., head-mounted displays, smartwatches, rings) to accomplish tasks and provide contextual support. However, designing such complex systems and integrating devices in a meaningful way is not an easy task for wearable designers. Designers need to select the right components for a composite wearable to provide the necessary UI inputs and feedback mechanisms, while keeping the components compatible (e.g., they fit on different body parts, can be used simultaneously when needed, and operate on the same platform). While prior research has focused on the design of purpose-built systems and frameworks that support ergonomics [13, 112] and has identified design principles for wearable UIs [31, 46, 115], it does not guide how devices can be selected, how they are worn, or how a given selection provides specific affordances and constraints to the wearer. Further, feedback devices have been studied previously (e.g., visual [65, 88], haptic [74], auditory [83]), yet input devices have not been fully explored.

To address this gap in the literature, the present research develops a design framework for input devices for composite wearable computers. The framework guides wearable system designers through the process of selecting a wearable input device, or a set of wearable input devices, upon which they can build their systems. We pose the following research questions:

  • RQ1: What wearable input devices are presently available?

  • RQ2: What capabilities can existing input devices provide?

  • RQ3: What devices are compatible with each other on the body of a single user?

To answer these questions, we use grounded theory to analyze wearable input devices, their documentation, and studies using them, creating a prescriptive design framework. The resulting framework contains four axes: type of interactivity, associated output modalities, mobility, and body location, which can be used to guide designers in making choices on how to build composite wearable computers. We develop practical guidance through a set of design scenarios, a how-to guide that walks designers through using the framework to build a composite wearable system that supports the intended task. We address key issues designers must consider when composing wearable computers, including supported interaction, impact on mobility, and human anatomy. We also shed light on issues of comfort, aesthetics, and social acceptance of wearable computers.

2 Background

We synthesize background on wearable technologies, input modalities, motion and gesture interfaces, and prior studies on wearable devices.

2.1 Wearable Technologies

Wearable computers can be equipped on various locations on a person’s body [10, 59, 97]. These devices establish constant interaction between the environment and the user and often form a network of intercommunicating effectors and sensors. Wearable input devices vary widely in terms of how they acquire input from a user. Some include mini-QWERTY keyboards or virtual keyboards and pointing devices, mimicking desktop designs, but wearables open up a range of possibilities for full-body interaction and sensor-based, context-aware designs [11]. When visual output is needed, a number of interfaces exist, including full-color, high-resolution head-mounted displays (HMDs); monochrome low-resolution HMDs; and wrist-worn displays. Auditory and haptic feedback are also possible. A key concern in wearable design centers on mobility.

Interaction with wearable computers can be explicit or implicit [84, 85]. Explicit interaction involves manipulating a UI directly, yet wearables can also recognize user actions and behaviors, and implicitly interpret them as inputs, which are integrated within the user’s primary task. This implicit interaction with the wearable system allows the integration to be natural, enhancing efficiency and mobility [84].

Wearable computers can supply context-sensitive support, which reduces the need for direct interaction [60]. These devices collect information about the physical, emotional, and environmental state of the wearer, making use of awareness of a user’s context [1]. For example, with a position sensor (e.g., Global Positioning System (GPS)), constant location information of the wearer can be collected and presented to the user via different output modalities, helping the user maintain situation awareness [19, 24, 42, 80, 116].

Seams are the ways in which wearables (and similar technologies) break due to inevitable failures of sensors, networks, effectors, etc. [14, 70]. Seamless interaction makes seams invisible for the user, and integrated into their surroundings [108]. Seamful design, on the other hand, argues for making users aware of the technology and its constraints, going as far as to integrate it into the experience [14].

2.2 Prior Wearable Devices Studies

While prior studies investigated how these wearable computers can be designed, worn, or used [31, 89, 115], the present study provides an in-depth analysis of wearable input devices and investigates how these devices can best be selected and composed to design wearable computing systems. We highlight some of this prior work here.

The wearIT@work project aimed to use wearable computing technology to support workers. The project focused on wearable computing in emergency response, healthcare, car production, and aircraft maintenance by combining multiple wearable devices [58].

Shilkrot et al. [89] conducted a comprehensive survey of existing finger-worn devices. They classified finger-worn devices along five components: input modalities, output modalities, device actions, application domain, and form factor. That study focuses on wearable computing devices worn on fingers, while the present research encompasses various wearable input devices that can be worn on different parts of the body, incorporating the prior findings.

Kosmalla et al. [51] concentrated on wearable devices for rock climbing. The authors conducted an online survey to determine the most suitable body location for wearables and evaluated notification channels. Based on the survey results, the authors developed ClimbAware to test real-time on-body notifications for climbers; it combined voice recognition and tactile feedback to enable climbers to track progress.

Along with the variety of input modalities and domains of wearable devices, their size and form are highly variable, ranging from head-mounted devices to smartwatches and smart rings. Weigel and Steimle [107] examined input techniques used in wearable devices, focusing on devices that have small input surfaces (e.g., smartwatches). The researchers developed DeformWear, a small wearable device that incorporates a set of novel input methods to support a wide range of functions.

While the present research supports designers in building composite wearable systems that combine one or more wearable input devices, prior studies have proposed design guidelines for commercial wearable device developers to design better products. Gemperle et al. [31] provided design guidelines for developers to understand how body locations can affect wearable technology. Based on those guidelines, Zeagler [115] investigated the body locations on which to place wearables and updated the guidelines. The author discussed where biometric sensors can be placed in wearable devices and in which part of the body they should be worn to maximize accuracy.

Although all of these studies provide insights into the design and development of wearable devices, they do not focus on how designers can compose a wearable system, what considerations they need to take into account, and the advantages and disadvantages of devices. The present research focuses on wearable input modalities and proposes a universal framework for guiding the selection of devices and composition of wearable systems.

3 Grounded Theory Methodology

We conducted a qualitative study of wearable input devices to identify their characteristics and propose a framework to guide the selection and composition of such devices. Grounded theory is a set of research practices for exploring and characterizing a domain [33, 34, 35]. The practices include an iterative process of collecting data, analyzing the data, reviewing relevant literature, and reporting findings [43]. The data used in a grounded theory approach can be derived from many sources, including scientific papers, video games, video recordings, and survey data [20]. Our approach started with finding and selecting wearable input devices; we then performed open coding to identify the initial concepts, categories, and their associated features, gradually building these up into axial codes that form a framework.

Grounded theory begins with an iterative process of data gathering and analysis. Open coding involves applying labels to the collected data to identify what is different or unique, forming initial concepts. Undertaking open coding on each piece of data collected provides insights into the studied phenomenon and can benefit the following round of collection and analysis [34]. Concepts are created by identifying and grouping codes that relate to a common theme [3]. Axial coding is performed by recognizing relationships among the open codes and initial concepts, which results in the initial categories. Selective coding integrates the categories to form a core category that describes the data. Through this iterative process, which repeats the above steps until no new insights are gained, a theory emerges, which can then be applied to new data.

One researcher collected data and created a preliminary coding scheme. Two researchers then developed an open coding structure for the collected data to analyze the main features of each device. We collectively developed axial codes from the open codes, followed by selective coding, which produced the final dimensions for the framework.

3.1 Data Collection Strategy

Our iterative process started with finding wearable input devices from three sources: Pacific Northwest National Laboratory (PNNL) reports, the ACM Digital Library (DL), and the IEEE Xplore Digital Library (Xplore), followed by collecting data on each of the discovered wearable devices.

Our focus is on devices that enable the user to input data directly, not those that sample activities, because our interest is in how people build interactive systems. For example, we include devices that provide air-based gestures (e.g., Logbar Ring [57]), but exclude those that focus on tracking the user’s activities and providing physiological data (e.g., Fitbit). We note that we also include devices that could primarily be considered output devices, e.g., Google Glass [36] or the HTC Vive [103]; these devices also provide input modalities that could serve as a primary input mechanism for composite wearable systems.

In the remainder of this section, we describe our method for searching and selecting wearable input devices and their data.

Selecting Wearable Input Devices. To find existing wearable input devices, addressing RQ1, we searched for specialized journals that focus on wearable devices and sought out research papers on the subject.

A starting point was Responder Technology Alert, a journal prepared for the US Department of Homeland Security Science and Technology Directorate that specializes in wearables. We used Responder Technology Alert to seed our devices list; in its nine issues (2015–2016), we found 36 devices.

To ensure a comprehensive list of input devices, we identified research papers in the wearables space. We focused on items published via the ACM Conference on Human Factors in Computing Systems (CHI), the International Conference on Ubiquitous Computing (UbiComp), and the International Symposium on Wearable Computers (ISWC); note that UbiComp and ISWC merged in 2014. To capture relevant papers, we developed search strings (note that for IEEE Xplore, a “publication number” captures conference and year):

[Figure: venue-scoped search strings used for the ACM DL and IEEE Xplore.]
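
Because the figure itself is not reproduced here, the following Python sketch is a loose, hypothetical reconstruction of the query structure only; the keywords and volume identifiers are placeholders, not the strings actually used.

    # Hypothetical sketch of how the venue-scoped queries can be generated.
    # The exact search strings appear in the original figure; none of these
    # keywords or identifiers should be read as the ones actually used.
    keywords = ["wearable", "wearable device", "wearable input"]
    acm_venues = ["CHI", "UbiComp", "ISWC"]

    # ACM DL: keyword terms paired with a venue filter.
    acm_queries = [(kw, venue) for kw in keywords for venue in acm_venues]

    # IEEE Xplore: keyword terms paired with a publication number, which
    # identifies a specific conference and year (placeholder IDs below).
    xplore_volumes = ["<ISWC-volume-id>", "<UbiComp-volume-id>"]
    xplore_queries = [(kw, vol) for kw in keywords for vol in xplore_volumes]

    print(len(acm_queries), "ACM DL query pairs;",
          len(xplore_queries), "Xplore query pairs")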

The resulting search identified 70 research papers in total: 37 from CHI (1997–2017), 18 from UbiComp (2014–2016), and 15 from ISWC (1997–2012). From these papers we identified 31 wearable input devices.

To maximize the list, we added a further 17 devices, selected based on our prior knowledge of wearables or found while looking for data on other wearable devices. The process identified 84 unique wearable input devices to include in the analysis; Fig. 1 visualizes this.

Finding Data About Selected Devices. After creating the list of 84 wearable input devices, we sought data on each device’s capabilities and uses. For each device, we collected as much data as possible from the following:

  • research papers that made use of the device (e.g., projects that built a system with it, frameworks that described it);

  • technical specifications from commercial web sites;

  • device manuals;

  • online videos explaining how to use the device (e.g., manufacturer’s instructional videos, enthusiast reviews);

  • news articles describing the device; and

  • first-hand experience of working with the device in the lab.

Each device had multiple data sources (though not necessarily every one in the above list). The combination of data sources enabled the team to build a complex profile for each device.

Using ATLAS.ti Mac, a software package for performing qualitative data analysis, each data source was linked to one or more devices in a network data structure (see Fig. 2 for an example of one device in the workspace). In addition, we maintained a shared spreadsheet to record details. This process resulted in a profile for each device.
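
As a rough illustration of the resulting structure, a device profile can be pictured as a node linked to its data sources. The sketch below is a minimal Python rendering of this idea; the source entries shown are placeholders, not the actual corpus records.

    # Minimal sketch of the device-to-source network: each device node links
    # to the data sources informing its profile. Entries are placeholders.
    device_profiles = {
        "Myo armband": {
            "sources": [
                {"kind": "research paper", "ref": "study using the device"},
                {"kind": "specification", "ref": "manufacturer web site"},
                {"kind": "video", "ref": "manufacturer instructional video"},
            ],
            "notes": "EMG + inertial sensing; worn on the forearm",
        },
    }

    for device, profile in device_profiles.items():
        print(device, "-", len(profile["sources"]), "linked sources")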

3.2 Analysis Procedure

Our analysis involved several phases (Fig. 1), which served to characterize the device space and address RQ2 and RQ3:

Fig. 1. The grounded theory process: searching for and collecting wearable devices, initial observations of device features, open coding, analysis to form the initial concepts, and creation of the framework axes. Numbers on arrows indicate the number of devices.

Fig. 2. Example network of data for the Atheer AiR Glasses technology. Other devices in the corpus have similar entries.

Phase 1: Initial Observations of Device Features. The primary goal of this phase was to identify the capabilities of each device, its range of expressiveness, and what it might be used for. For each device, we identified the following:

  • the technologies with which it was constructed (e.g., accelerometers in the Nod Ring [71] or electromyography (EMG) sensing in Myo armband [69]);

  • the form factor of the device and where it is worn; and

  • the application domain of the device.

In this step, we augmented the data we had stored in ATLAS.ti, adding detail for each device. We used the application to import all data that we collected and to create a preliminary coding system to label device features.

Phase 2: Open Coding on Collected Data. In this step, two researchers began developing codes to classify devices, independently reading through the device network in ATLAS.ti and identifying keywords in device descriptions and data sources. Through this process, an initial set of categories emerged about each device:

  • technological complexity: sensors and algorithms used to provide interactivity;

  • the accuracy of the device in recognizing human input and/or context;

  • the body location(s) occupied by wearing;

  • body part(s) needed for manipulation;

  • number of body part(s) involved in typical use;

  • application domain for which the device was designed;

  • freedom of mobility enabled;

  • input modality(-ies) provided;

  • output modality(-ies) provided; and

  • the cost of the device.

Phase 3: Axial and Selective Coding. During this phase, we engaged in multiple iterative discussion sessions to explore the relationships between the codes, the emergent concepts, and the initial categories. The process involved multiple members of the research team (more than the authors), including a group session with seven participants. At the beginning of this step, we discussed the initial categories from the previous phase. After several conversations, we decided to eliminate the categories of technological complexity, accuracy, cost, and domain for the following reasons. We eliminated technological complexity because it concerns technical details rather than supporting human-centered design. We eliminated accuracy because the relevant information was hard to find or non-existent for many devices. Cost was eliminated because of its volatility. We found that domain was less useful because the task for which a device was designed did not necessarily constrain what it could be used for.

We revised the remaining categories: input modalities became the type of interactivity axis, mobility became the mobility axis, and output modalities became the associated output modality axis. In addition, we merged certain categories, so body locations, body parts needed for manipulation, and number of body parts were combined into the body location axis. The result of this phase is a set of axes for our framework.

Fig. 3. Visual representation of the framework axes and example devices.

4 Framework Overview

To guide the reader, this section provides a short summary of the framework; the next section describes each of the framework axes in detail, along with examples. Our framework of wearable input devices contains four axes: type of interactivity, associated output modalities, mobility, and body location (Fig. 3). The axes provide significant information about each wearable device, which helps designers build composite wearable computers.

4.1 Axis: Type of Interactivity

We define the interactivity axis as the input modality or modalities through which the user expresses intent. Type of interactivity considers how a user may invoke action (e.g., speaking, gesturing, clicking, touching). On this axis, a device may have multiple modalities (e.g., a device that enables both speaking and gesturing) and so may exist at multiple points. We expect that as developers consider composing devices together, they will identify which types of interactivity they need to support in their applications. They can then consult the framework to identify which device, or combination of devices, is necessary.

4.2 Axis: Associated Output Modalities

Some wearable input devices also incorporate output modalities. Associated output modalities considers how the device might provide output or feedback in a composite wearable system. On this axis, too, a device may have multiple modalities (e.g., a device that provides both visual and audio feedback). Designers should select wearable input devices with associated output modalities that meet their design requirements.

4.3 Axis: Mobility

Some wearable devices inhibit movement due to fragility, restriction of the user’s situation awareness, or requirements (e.g., external sensors) that limit where they work. Mobility considers the freedom of movement that each wearable device in the list provides: fully mobile, semi mobile, or not mobile.

4.4 Axis: Body Location

Wearable devices come in many forms and fit different parts of the body: for example, devices worn on the head (e.g., glasses), hand(s) (e.g., gloves, rings), and wrist(s) (e.g., watches). While we expect users to equip multiple wearables for a composite wearable computer, certain combinations are uncomfortable or impossible (e.g., wearing a ring and a glove on the same hand). The body location axis addresses the position on the body where the device may be worn, enabling the designer to identify which combinations of devices can be composed together (and which might be mutually exclusive). Again, devices may cover multiple positions on the axis.

5 The Framework

The wearable input device framework is intended to assist designers in making choices on how to build composite wearables. For each axis, we describe the potential characteristics of each input device. Along with each component of each axis, we report the percentage of devices in our data set that exhibit it. We make use of examples from the data to support each category; the supplemental materials provide a complete list of devices.

5.1 Type of Interactivity

The type of interactivity axis identifies available input modalities. It is possible for a wearable input device to use more than one of these input methods.

Clicking (21%). Button clicks are used for input in many wearable devices; such devices may have a range of buttons. For example, the Wii Remote [110], a controller for Nintendo’s Wii console and popular research device [38, 86], has nine buttons.

Touch (46%). Similar to clicking, touch is used by numerous wearable computing devices. We can divide touch input into two types: ‘touch on device’ and ‘touch on surface’. ‘Touch on device’ depends on different types of touch sensors (e.g., resistive, capacitive sensing) [61]. The user can perform a touch gesture on a touchscreen (e.g., smartphones, smartwatches) or on a trackpad (e.g., Genius Ring Mouse [40]).

Other wearable technologies make use of the ‘touch on surface’ technique, in which the user touches something that is not the device: skin, nearby surfaces, etc. For instance, Laput et al. [52] proposed Skin Buttons, a small touch-sensitive projector that uses IR sensors and can be integrated into another device such as a smartwatch; this device uses a laser projector to display icons on the user’s skin. Other wearable devices, such as Tap [98], use embedded sensors to monitor mechanical data of the hand and fingers, which can be used to detect the user’s hand moving or tapping on a surface.

Voice (24%). Using voice for input in wearable devices is an attractive option because it is a natural UI [94]. Voice input is employed in some wearable devices by using an embedded microphone. For instance, users of Oculus Rift [73] can use their voices to perform certain functions such as searching for or opening an application. Another example is Google Glass [36], where the user’s voice can be used to make a call or write and send a message. A downside of voice is that it may not work well in noisy environments [18].

Mid-Air Gesture Interaction (51%). Hand gestures are widely used in wearable technologies. Hand gestures can be classified into two categories: dynamic and static [30]. A dynamic gesture involves movement of the hand, arm, or fingers over time (e.g., waving or writing in the air), whereas a static gesture involves the hand remaining in a certain posture (e.g., a fist or open-hand gesture) [81]. To detect a mid-air gesture, several technologies can be used, including infrared sensors, inertial measurement units, and EMG. For example, the Myo armband [69] uses EMG and inertial measurement units to detect hand gestures. Many wearable devices in the list can be used to detect hand gestures, such as the Logbar ring [57], Nod ring [71], and Leap Motion [53].

Foot Movement (5%). Some devices employ foot movement as an input method for wearable computing. For instance, Matthies et al. [62] developed ShoeSoleSense, a wearable device that uses foot movement as an input technique for AR and VR applications. Benavides et al. [12] proposed KickSoul, a wearable device in an insole form that detects foot movement to enable the user to interact with digital devices. Fukahori et al. [29] proposed a plantar-flexion-based gesture technique that enables foot movement to control other computing technology.

Head Movement (11%). Head movement is used as an input modality for wearable technologies. Some wearable devices (HMDs and headsets) contain a head tracker, which usually consists of gyroscopes, accelerometers, magnetometers, and orientation sensors. When the wearer changes the orientation or position of their head, the tracker can update the display or respond in other ways. For example, the Oculus Rift DK2 uses this data to update visual information on the VR HMD [109]. VR glasses such as the HTC Vive [103] and AR glasses such as AiR Glasses [6] can be used to detect head movement.

Eye Tracking (5%). Eye trackers measure eye movement and the center of focus of a user’s eyes. Eye tracking devices use cameras, IR illuminators, and gaze mapping algorithms to track the gaze of the user. Mobile versions generally use a scene camera to record what the user is looking at. Examples include Tobii glasses [99], iPal glasses [45], and the Pupil Labs eye tracker [78].

Location Tracking (10%). Location tracking refers to technology that determines the position and movement of a person or object in space. Wearable devices enable location tracking by using GPS and other technologies [117], normally by synthesizing multiple signals. Location tracking is included in many wearable input devices, especially AR glasses and smartwatches (e.g., Apple Watch [4], Sony SmartWatches [92], Vuzix smartglasses [104], Sony SmartEyeGlass [93]).

Other Input Modalities (8%). Additional methods, based on research that attempts to implement novel modalities, are not widely used in wearable devices; therefore, we add them as a separate category. Xiao et al. [114] increased the input methods of smartwatches by adding additional movements as inputs (e.g., panning, twisting, binary tilt, clicking). Weigel and Steimle [107] proposed “a novel class of tiny and expressive wearable input devices that uses pressure, pinch, and shear deformations”.

5.2 Associated Output Modality

Some wearable input devices provide output and may even do so as their primary function (e.g., HMDs). The associated output modality axis identifies output modalities provided by some of the considered wearable input devices.

Visual (30%). Some wearable input devices provide visual output to the user: e.g., AR, VR, and smart glasses; smartwatches; and touchscreens. For many HMDs, the primary purpose is visual feedback, but they are included in the framework because they use head tracking and other inputs. Some wearable input devices use a projector (e.g., a pico projector) to provide visual output on the body or on a surface. For example, Harrison et al. [41] proposed OmniTouch, a wearable input device that enables graphical multi-touch interaction; the device provides visual output by using a pico projector to display the projected interface on any surface (e.g., the user’s skin). Other examples of wearable devices that provide visual output include Google Glass [36], Rufus Cuff [82], and Epson Moverio [25].

Audio (19%). Audio is also used as an output modality for wearable technologies (e.g., audio notifications). Several wearable devices provide audio output to the user (e.g., Meta 2 [63], Intimate Interfaces [21], Recon Jet [44]). Some wearable devices use audio output to provide better and more immersive experiences for the wearer. For example, Microsoft HoloLens [64], a pair of AR smartglasses, uses audio output to simulate a tactile feeling when the user presses virtual buttons [113].

Haptic (14%). Haptic feedback communicates with the user through touch. For example, the Apple Watch [4] uses haptic feedback [5] to grab the user’s attention and to indicate when actions have been performed successfully.

5.3 Mobility

The mobility axis considers how wearables impact users’ ability to move in the physical world. We classify wearable input devices as fully mobile, meaning they minimally restrict movement; semi mobile, meaning they meaningfully impair movement; or not mobile. We have not yet seen any devices that might augment mobility.

Fully Mobile (82%). Many wearable devices in the list were designed to support mobility. The user can move and perform multiple tasks while wearing these types of devices, maintaining a high level of situation awareness. For example, the user can wear a smartwatch (e.g., LG Smartwatch [55]) and perform everyday activities without any burden.

Semi Mobile (11%). Some wearable devices may partially inhibit movement. For instance, peripheral vision is an important part of human vision for everyday activities such as walking and driving; using an HMD (e.g., HoloLens [64], Google Glass [36]) will partially cover the wearer’s peripheral vision [17].

Not Mobile (7%). Some wearable devices prevent movement due to device requirements (e.g., requiring a sensor-enabled room). For instance, Sony PlayStation VR requires PlayStation Move and PlayStation Camera for interaction, which restricts the user to a specific space, limiting the mobility of the device.

5.4 Body Location

Wearable input devices can be worn on various locations on the body. We classified devices into two body regions: upper and lower; free form devices can be placed in various locations. Some devices span categories.

Upper Body (80%). Many wearable technologies are designed for the upper half of the body. Wearable technology positioned on the user’s hand might encompass all fingers (e.g., ART Fingertracking [2]), one finger (e.g., Bird [68], Fujitsu Ring [28]), the fingernails (e.g., NailO [47]), the palm (e.g., Virtual Keyboard [77]), or the entire hand (e.g., GoGlove [37]); it might also combine two parts of the hand, such as the palm and fingers (e.g., Gest [32]). In addition, wearable devices can be worn on the wrist (e.g., smartwatches), the arm (e.g., Myo armband), the shoulder (e.g., OmniTouch [41]), the waist (e.g., Belt [22]), and the head (e.g., AR and VR glasses).

Lower Body (6%). Very few devices are worn on the lower body. ShoeSense, a wearable device that contains a shoe-mounted sensor, provides gestural interaction for the wearer [9]. Other devices can be worn on the knee, such as iKneeBraces [102].

Free Form (14%). Certain wearable devices are not limited to a specific on-body location, but can be worn on various body locations. For example, the Cyclops, a wearable device used to detect whole body motion, can be placed on different parts of the body such as the shoulder, chest, and waist [15]. Free form devices also include those that are not intended for wearable use, but have been appropriated for such (e.g., Leap Motion for wearable hand tracking [54, 56]).

6 Using the Framework

In this section, we guide designers in how to design composite wearables using the framework. First, we propose composition guidelines that designers can follow before selecting wearable devices; then, we develop a scenario-based design process to show how the framework can be used to compose a wearable computer.

6.1 Composition Guidelines

Our composition guidelines are based on human-centered design [72]. The framework axes inform designers in thinking about the needs and requirements of people. Designers should consider the user’s needs and requirements in terms of interaction, mobility, and embodied constraints. As a designer reaches the prototyping or development phase, they should address the questions aligned with the framework axes, listed below (a code sketch of the narrowing process follows the list). As each question is answered, the range of potential devices is reduced until a workable subset remains.

  • The designer should identify the type of interactivity that is needed to accomplish the goals of the wearable system. Task demands and factors from the environment can determine what type of interactivity is most appropriate for the system. For example, while voice recognition is sometimes sufficient, it can be compromised by the environment or the wearer, such as with the presence of ambient noise or language barriers [18]. The choice of type of interactivity is heavily context- and purpose-dependent.

  • Similar to concerns about type of interactivity, the designer needs to know what feedback is necessary. While output devices are not directly addressed in this work, many input devices have multiple associated output modalities, so selecting input devices with the right feedback mechanisms can minimize the number of devices needed overall, reducing complexity. For example, if the task requires that a user identify their current location, a visual output is needed to display the user’s current location on a digital map.

  • The designer should determine how much mobility must be supported; is the experience stationary or mobile? Can the environment be instrumented? What kind of situation awareness does the user need?

  • As the range of devices narrows, the designer must consider what parts of the body can be occupied by equipment. What are the impacts of devices at particular body locations on the performance of necessary embodied tasks? How will proposed devices overlap in body location?
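
To make the narrowing concrete, the following is a minimal Python sketch of guideline-driven filtering. The axis vocabulary follows the framework, but the three example devices and their axis values are simplified readings of our data set, not authoritative entries.

    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        interactivity: set       # e.g., {"voice", "mid-air gesture"}
        outputs: set             # e.g., {"visual", "audio"}
        mobility: str            # "fully mobile" | "semi mobile" | "not mobile"
        body_locations: set      # e.g., {"head"}, {"arm"}

    CORPUS = [
        Device("Apple Watch", {"touch", "location tracking"},
               {"visual", "haptic"}, "fully mobile", {"wrist"}),
        Device("Myo armband", {"mid-air gesture"}, set(),
               "fully mobile", {"arm"}),
        Device("Sony PlayStation VR", {"head movement"},
               {"visual", "audio"}, "not mobile", {"head"}),
    ]

    def narrow(devices, interactivity=None, outputs=None,
               mobility=None, free_locations=None):
        """Apply the guideline questions in order, keeping devices that
        satisfy every constraint the designer has specified."""
        keep = []
        for d in devices:
            if interactivity and not (d.interactivity & interactivity):
                continue
            if outputs and not (d.outputs & outputs):
                continue
            if mobility and d.mobility not in mobility:
                continue
            if free_locations and not (d.body_locations <= free_locations):
                continue
            keep.append(d)
        return keep

    # Example: a fully mobile task needing touch input, haptic feedback,
    # and only the wrist or arm free to carry equipment.
    candidates = narrow(CORPUS, interactivity={"touch"}, outputs={"haptic"},
                        mobility={"fully mobile"},
                        free_locations={"wrist", "arm"})
    print([d.name for d in candidates])   # -> ['Apple Watch']

In practice, the designer would run such constraints against the full device list in the supplemental materials rather than this toy corpus.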

6.2 Design Scenario: Search and Rescue Map

In this section, we use a scenario-based design method [8] to demonstrate how the framework can be used by designers of composite wearable computers. This process is an activity-based design approach [7, 8], in which the proposed scenario accounts for the user’s requirements, abilities, and activities. We use the framework to guide the design of a composite wearable computer that satisfies a set of fictional requirements. This scenario considers complex information management activities set in a dangerous environment. Working from prior research on disaster response (e.g., [27, 100, 101]), we consider a search and rescue scenario involving map use in a disaster zone. As we consider this future scenario, we set aside, for the time being, concerns about the robustness of the equipment (e.g., waterproofing). Finally, we note that, were such a system built, this would likely be one of many applications it might support, but we keep our scope narrow for this scenario.

Application Type. Map and orientation support in a disaster zone.

Motivating Scenario. A team of search and rescue specialists are working to search buildings in the aftermath of a hurricane. The location is dangerous: parts are flooded, buildings are on the verge of collapse, and some areas are on fire. Wireless data access may be available, due to mesh networks brought in with search and rescue (and/or provided by wireless carriers). The specialists need to be able to identify their current location, even when nearby locations no longer resemble existing maps [27, 50] while keeping hands free.

Activity. The search and rescue map needs to function as a support tool, constantly able to supply information to a specialist about their current location. It is used to ensure that the specialist is in the region they have been designated to search.

Specific Requirements. The search and rescue map has the following requirements:

  1. the search and rescue specialist needs to maintain awareness of the surrounding environment;

  2. the specialist needs maximum freedom of movement, especially hands, arms, and legs; and

  3. the specialist needs to be able to identify their location.

Proposed Design. First, we need to select what type of interactivity the wearable should provide. The search and rescue specialist needs to interact with the wearable without encumbering hands during movement: we might use gestures, specifically ones that can be turned off while moving, and/or voice to achieve this. The system also needs location tracking to update map position and communicate it to other specialists.

The system will need output. A handheld display would be convenient, but we are not considering touch, and an HMD is more supportive of keeping the hands free. Considering associated output modality, some systems for voice work are built into HMDs, as is location tracking.

The system needs maximum mobility, so this rules out a number of HMDs that can only be used in stationary circumstances, while gesture support is available with many mobile devices.

Finally, we must consider body location. The HMD will go on the upper body, and there are a range of positions for gesture devices that are compatible with this.

From the wearable device list, the designer can select multiple devices that support these needs. For example, they can select the Myo armband [69], which can detect the specialist’s hand gestures. The designer can also use the Epson Moverio [25] to provide voice input, location tracking, and visual output. These devices support the mobility of the user and fulfill the task requirements (a code sketch of this selection follows).
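
As a worked instance of the narrowing process for this scenario, the sketch below encodes the requirements against three devices from the list. The axis values are again simplified, hedged readings of the device data rather than definitive classifications.

    # Worked instance of the scenario's narrowing steps. Axis values are
    # simplified readings of the device data, not authoritative entries.
    corpus = {
        "Myo armband":         {"input": {"mid-air gesture"},
                                "output": set(),
                                "mobility": "fully mobile",
                                "locations": {"arm"}},
        "Epson Moverio":       {"input": {"voice", "location tracking"},
                                "output": {"visual", "audio"},
                                "mobility": "semi mobile",
                                "locations": {"head"}},
        "Sony PlayStation VR": {"input": {"head movement"},
                                "output": {"visual", "audio"},
                                "mobility": "not mobile",
                                "locations": {"head"}},
    }

    needed_inputs = {"mid-air gesture", "voice", "location tracking"}
    needed_outputs = {"visual"}

    # Steps 1-3: drop stationary devices; keep any device contributing a
    # needed input or output modality.
    candidates = {name: d for name, d in corpus.items()
                  if d["mobility"] != "not mobile"
                  and (d["input"] & needed_inputs
                       or d["output"] & needed_outputs)}

    # Step 4: check that the chosen set occupies disjoint body locations.
    used = set()
    composite = []
    for name, d in candidates.items():
        if not (d["locations"] & used):
            composite.append(name)
            used |= d["locations"]

    print(composite)  # -> ['Myo armband', 'Epson Moverio']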

7 Discussion

Based on our study of 84 wearable input devices, we developed insights to help wearable computer designers select and build composite wearable UIs. We discuss design implications to support building systems and surface design challenges that developers will likely encounter.

7.1 A Composition-Based Approach to Design

The proposed framework supports a composition-based approach to the design of wearable computers. Designers need to consider the relationships between wearable devices: how they complement, conflict, and coalesce into a cohesive unit. To aid designers’ decision-making, our design process provides an overview of existing wearable devices, highlighting the components of their specifications that matter most to designers. Designers begin by defining user needs and activities, which drives the selection of devices that can work in tandem with each other. From this point of view, our framework informs wearable designers by framing design decisions around critical wearable input device axes. Additionally, the included supplementary documents serve as a quick design reference for many existing wearable input devices by providing framework axis information for each device.

Integration Techniques. Most wearable devices do not work in isolation; instead, they work with other devices to exchange data and/or to get power. For example, smartglasses and smartwatches almost universally need to be paired with a smartphone to function at all. Mobile phones, computers, and wearable devices together can support the integration of data from multiple modalities, which results in rich and complex interaction with the wearable computer. The designer’s challenge is to integrate the components of the composite computer in an effective way. Designers need to think of both the low- and high-level data that is available from the composed devices. Sensor or raw data needs to be handled in a way that allows it to be composed with other data from multiple sources to provide a high-level function [48].

Considerations in Composing Wearable Computers. It is critical that designers take into consideration not only the functions of the composite wearable computer, but also the relationships between the wearable, the user, and the environment. Based on our development of the framework, the following considerations should be taken into account during the composition of the wearable computer.

  • The body location axis in our framework addresses the position on the body where the device may be worn. Designers need to find the optimal body location and device type for the intended activity. Although prior frameworks provided guidelines for developers to understand how body locations can affect wearable technology [31], our framework assists designers not only in selecting an optimal body location, but also in assessing how complex the intended wearable computer would be and the range of interaction that is possible. Each body location affords different types of interaction while restricting movement. Designers need to be familiar with these affordances to help them select the wearable device or devices that work in tandem with each other and with the wearer’s body.

  • Along with the body part and its affordances, designers need to consider the comfort and usability of the designed wearable, especially when the body is in motion. Because the human body moves, using the wearable while the user is still does not reflect how it would be used while in motion [111]. Bringing human factors knowledge into the composition process and conducting both usability and comfort testing while the user is still and in motion can help ensure that the designed wearable is comfortable and usable [49, 67].

  • Designers need to ensure that wearable devices do not conflict with the wearer’s clothing: for example, creating a wearable computer that requires the user to wear a finger-worn device when the activity or context of use also requires gloves (e.g., firefighting, construction work). Designers also need to consider the wearer’s clothing and fashion choices (e.g., jewelry), and how they might interact with the wearable computer. Being aware of these conflicts during the composition process helps prevent issues when using the intended wearable computer.

7.2 Challenges and Opportunities

Wearable devices present a range of issues, challenges, and opportunities for human-computer interaction. Designers need to consider these challenges when composing wearable computers and think of them as opportunities for design innovation.

Multimodal Versus Unimodal. Many prior interfaces are unimodal, offering only a single type of interface (e.g., touchscreen) [75, 79]; composite wearable computers offer the opportunity to build rich, multimodal systems. While multimodal systems are potentially more flexible and expressive, they also bring challenges.

Multimodal wearables have the potential to support the development of sophisticated wearable computers. Using these devices, designers can integrate multiple input modalities that together provide more functions and increase usability [75]. For example, the AiR Glasses DK 2 [6] provide users with various input modalities, including hand gesture, touch, and voice. Composing a wearable computer that combines all of these modalities provides a robust design, such that a wearer would be able to perform multiple functions through a set of input modalities. These modalities can be designed to complement each other or to serve as fallbacks in case the main modality fails. Another advantage of composing a wearable computer with multiple input modalities is increased expressivity, though this could inhibit usability [72].

Designing for multimodal interfaces is challenging, as the developer needs to determine which modalities are best for which types of interaction. An alternative is to enable the wearer to decide which modality best fits the intended activity they want to perform. For example, voice input can be used by the wearer to interact with the wearable computer when the hands need to be free; when background noise is present, however, the wearer can switch to the touch interface provided by the same wearable computer to perform the same task, although the redundancy reduces expressivity.
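
A minimal sketch of such wearer-driven modality selection appears below; the threshold, readings, and decision rule are invented for illustration and are not drawn from any particular device.

    # Sketch of wearer-driven modality selection with a fallback: prefer
    # voice when the environment allows it, otherwise fall back to touch
    # on the same composite wearable. Values are illustrative only.
    NOISE_THRESHOLD_DB = 70.0

    def choose_input_modality(ambient_noise_db, hands_free_required):
        if ambient_noise_db < NOISE_THRESHOLD_DB:
            return "voice"
        if not hands_free_required:
            return "touch"
        # Both primary options unavailable: fall back to mid-air gesture.
        return "mid-air gesture"

    print(choose_input_modality(55.0, hands_free_required=True))   # voice
    print(choose_input_modality(85.0, hands_free_required=False))  # touch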

Unimodal wearables can limit the functions provided by the wearable. These types of wearables are best suited for simple tasks that need to be performed in a similar fashion by multiple users with high accuracy. For example, a wearable computer used by factory operators to input data about the state of the factory can be designed using one input modality (e.g., a designer can use EasySMX [23], a wearable touch mouse). This design ensures that the wearable computer is usable by multiple workers in the same way, which provides higher versatility.

When composing a wearable computer, designers need to take into account the trade-offs of using multimodal versus unimodal input, and the effects of composing multiple modalities on the usability, functionality, and accuracy of the wearable computer.

Technological Complexity. Wearable input devices may make use of a large number of sensors to provide users with a particular experience (e.g., inertial measurement units, infrared cameras, GPS, microphones). Some of these devices use a single sensor to provide functionality while others combine multiple sensors. Wearable devices that have more technological complexity may be able to provide multiple functions and more accurate data. On the other hand, technological complexity can be a challenge for development and mask problems (e.g., by automatically substituting sensor data in a way that is invisible to the developer). For example, sensor-fusion devices are a specialized form of multi-sensor devices that combine data from several sensors to produce more precise, complete, and/or reliable data than can be achieved with a single sensor [39, 105]. Such devices require algorithms to connect together multiple sensor feeds to provide higher-level data to the developer. In these devices, all of the fused sensors depend on each other to provide the intended data. Devices with sensor fusion enable the developer to work with sensor data at a higher level of abstraction (e.g., with distinct gesture callbacks instead of numerical accelerometer feeds) with the expectation that data is of higher quality. At the same time, the algorithms used to fuse multiple sensors could cause problems for developers, hiding data that could otherwise be used or that might indicate a problem with the sensor.

An example of sensor fusion is an inertial head-tracker, which is used in many HMDs. These consist of (at least) three sensors: a three-axis accelerometer, a gyroscope, and a magnetometer. Data from these sensors are assembled algorithmically to provide the developer with the user’s head orientation and movement; no single sensor would be able to provide such data. Another example of sensor fusion is the Myo armband, a wearable device worn on a user’s forearm and used to detect the hand gestures of the wearer. It employs both an inertial measurement unit and EMG sensors to provide accurate data about hand and arm gestures.
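
As a minimal illustration of the idea (not the algorithm used by any particular head-tracker), the sketch below blends simulated gyroscope and accelerometer readings with a complementary filter, one of the simplest fusion schemes: the gyroscope is accurate short-term but drifts, while the accelerometer's gravity estimate is noisy but drift-free.

    import math

    def complementary_filter(pitch_deg, gyro_rate_dps, accel_x, accel_z,
                             dt, alpha=0.98):
        # Integrate the gyro rate (degrees/second) over the timestep.
        gyro_pitch = pitch_deg + gyro_rate_dps * dt
        # Estimate pitch directly from the gravity vector.
        accel_pitch = math.degrees(math.atan2(accel_x, accel_z))
        # Blend: trust the gyro short-term, the accelerometer long-term.
        return alpha * gyro_pitch + (1 - alpha) * accel_pitch

    # Toy stream: head tilting forward at 10 degrees/second for one second.
    pitch = 0.0
    for step in range(100):
        t = step * 0.01
        pitch = complementary_filter(pitch,
                                     gyro_rate_dps=10.0,
                                     accel_x=math.sin(math.radians(10.0 * t)),
                                     accel_z=math.cos(math.radians(10.0 * t)),
                                     dt=0.01)
    print(f"estimated pitch after 1 s: {pitch:.1f} degrees")  # ~10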

Level of Accuracy. Consideration must be given to the level of accuracy of wearable devices. When composing a wearable computer, designers need to make sure that the level of accuracy of each wearable device is appropriate for the activity and context of use. While conducting this research, we noted a lack of information about the accuracy of the wearable devices in our data list. Without such data, designers might find it challenging to get a sense of how accurate the composed wearable computer would be.

One way to overcome the lack of accuracy data is to conduct user studies and usability testing to examine the accuracy of the wearable devices used and of the composite wearable computer. When the accuracy of a device is not sufficient, designers can consider integrating other wearables that can help increase the accuracy of the wearable computer. Designers can also use testing techniques that have been developed for specific sensors. For example, to test the positioning accuracy of a wearable GPS device, designers can use various tools to measure the performance of the GPS (e.g., Time to First Fix).
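
As one concrete illustration, a designer logging raw NMEA sentences from a GPS receiver could estimate Time to First Fix (TTFF) with a few lines of code. The sketch below uses a fabricated log and naive hhmmss timestamp arithmetic; a real tool would handle time rollover and checksum validation.

    # Estimate TTFF from a logged NMEA stream. A $GPGGA sentence's
    # fix-quality field (index 6) stays "0" until the receiver has a fix.
    # The log below is fabricated for illustration.
    nmea_log = [
        "$GPGGA,120001.00,,,,,0,00,,,M,,M,,*66",   # no fix yet
        "$GPGGA,120002.00,,,,,0,03,,,M,,M,,*65",   # still acquiring
        "$GPGGA,120034.00,4916.45,N,12311.12,W,1,08,0.9,545.4,M,46.9,M,,*47",
    ]

    def time_to_first_fix(sentences):
        start = None
        for line in sentences:
            fields = line.split(",")
            if not fields[0].endswith("GGA"):
                continue
            timestamp = float(fields[1])   # hhmmss.ss; naive within a minute
            if start is None:
                start = timestamp
            if fields[6] != "0":           # fix quality: 0 = invalid
                return timestamp - start
        return None

    print(f"TTFF: {time_to_first_fix(nmea_log):.0f} s")  # -> 33 s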

Testing the accuracy of gesture recognition devices, however, is more challenging. Humans are affected differently by the natural tremor of their hands, which can make it difficult for gesture devices to recognize inputs with high accuracy [106]. Designers can overcome this issue by integrating multiple gesture recognition devices into the wearable computer to increase accuracy and reliability. For example, two Leap Motion [53] devices can be combined to enhance the accuracy of the gesture data [66]. While this approach has the potential to enhance accuracy, it might increase the size and weight of the wearable computer. Designers need to consider the trade-offs of combining more than one device on the wearability and usability of the composite wearable computer.
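
One simple way to combine two trackers is an inverse-variance weighted average of their estimates; the sketch below illustrates the arithmetic only and is not the method used in [66].

    # Combine two gesture trackers' readings of the same quantity, weighting
    # each by the inverse of its noise variance. Values are illustrative.
    def fuse(estimate_a, var_a, estimate_b, var_b):
        w_a = 1.0 / var_a
        w_b = 1.0 / var_b
        fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
        fused_var = 1.0 / (w_a + w_b)      # lower than either input variance
        return fused, fused_var

    # Fingertip x-position (mm) reported by two trackers with different noise.
    x, var = fuse(estimate_a=102.0, var_a=4.0, estimate_b=98.0, var_b=1.0)
    print(f"fused x = {x:.1f} mm, variance = {var:.2f}")  # 98.8 mm, 0.80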

Clicking buttons, on the other hand, can guarantee a high level of accuracy. Compared with other input modalities (e.g., mouse, touchpad), gesture devices perform the poorest and have the lowest accuracy [87]. Another challenge of using gesture devices is the need to train users on how to perform the gestures correctly to interact with the wearable computer, which might be difficult. Designers can consider using buttons as the main input modality to ensure a high level of accuracy.

Social Acceptance and User Resistance. While wearable computers are gaining more attention from designers and developers, there remains resistance from society. Designers need to build composite wearable computers that are socially acceptable to enable unhindered use of these technologies. For example, people may be opposed to wearing certain devices in crowded areas (e.g., HMDs [88]). The type of interactivity provided by the wearable computer can cause the wearer to feel awkward or uncomfortable (e.g., hand gestures [89]). Designers can use the framework to select the type of interactivity that fits best with the context they are designing for and with the body location that can help enhance the acceptance of the wearable computer. For example, fingers, wrists, and forearms are considered to be socially accepted on-body locations for wearable computers [76, 89].

Identifying Limitations of Existing Devices. Our framework can help designers and researchers identify limitations in available wearable devices, which can drive future development and research in the domain of wearable technologies. Based on our data set of 84 wearable input devices, we found that 80% of these wearables are worn on the upper body. This is mainly due to the larger range of affordances provided by the upper part of the human body [115]. This could limit designers’ choices of wearable devices that can be worn on the lower part of the body. To overcome this limitation, designers can use devices that are free form and can be worn on multiple body parts (e.g., Cyclops [15]). However, only 14% of the wearables in the data set are free form, which might constrain design choices. Designers can also combine wearable devices designed for the upper body with different fabrics and textiles to enable wearing them on other parts of the body [26].

7.3 Limitations

This work adds to a growing body of research on wearable technology (e.g., [16, 31, 67, 89, 90, 91, 95, 96, 115]). We acknowledge that this work is limited and not intended to provide an exhaustive analysis of wearable input devices or wearable technology in general. We intend to continue improving, extending, and refining the framework, and ultimately to validate it by designing composed wearable computers and investigating different aspects of the framework through future user studies.

8 Conclusion

The present research developed a grounded theory analysis of 84 wearable input devices, using 197 data sources, to understand their capabilities. Through this analysis we developed a multi-axis framework for composing devices together into wearable computers. The framework addresses devices in terms of their type of interactivity, associated output modalities, mobility, and body location. We expect the resulting framework to be useful to designers of future wearable systems, enabling them to navigate this complex space and assemble the right devices for a particular set of tasks while keeping in mind a set of limitations in composition.