1 Introduction

Ubiquitous computing is a vision of how technological systems can integrate into our daily life in a socially acceptable way. Such systems need to be able to collect information about the identity, position, activity, etc. of the users that are present and about events that occur within the environment. To this end they use complex sensors such as cameras with accompanying image analysis software (e.g., RealSense from Intel [10]) or tracking sensors (e.g., Leap Motion hand tracking [7]). However, complex sensors tend to be hard for users to understand, as they demonstrate intelligent (or at least interactive) behavior in the aspects they are trained for, but show limited or no understanding in other aspects.

In this paper the Mirrored Perception-Cognition-Action (MPCA) model is proposed to assist designers in reasoning about complex sensing systems. MPCA emphasizes that all interactions between the user and the (digital) system need to pass through a shared physical environment. Rules of social behavior play a role in this interaction, while the designer also needs to take into account the asymmetry in abilities between the user and the system.

The proposed framework has been adopted in the design of a surgery assist system named TPSurgery (Fig. 1). This system allows a surgeon operating in a sterile environment to control both a patient information system and the surgery lighting in a touch-free manner. The MPCA framework has been used not only in the design of the system, but also as a reference framework when analyzing the feedback from a pilot user test.

Fig. 1. An overview of TPSurgery

2 Background

Perception-Cognition-Action (PCA) is a widespread model for describing the information processing within the human brain. It considers three main steps: a human perceives his environment through diverse senses (hearing, vision, touch, smell), interprets these perceptions by combining them with past experiences stored in memory, and plans possible actions, after which such actions are executed using his motor system [8]. This model is clear and simple and has, for instance, been adopted in robotics [3, 5], human-computer interaction [1] and neuroscience [4].

Several models of interaction have been developed based on this PCA model. For instance, ACT-R [1] is a theory for “simulating and understanding human cognition”. The EPIC [6] architecture models “human multimodal and multiple-task performance” and includes “peripheral sensory-motor processors surrounding a production-rule cognitive processor to construct precise computational models for a variety of human-computer interaction situations”. Ullmer and Ishii [11] in turn have presented MCRpd as a conceptual framework for tangible user interaction. They promote a system perspective that simultaneously considers the physical and the digital side of the interface.

These models provide insight into how a cognitive process is embedded in human-machine interaction. However, they do not show how information passes back and forth between a user and a system, nor how the human and the technical system need to be matched to each other at every stage of the interaction.

3 MPCA Framework in Design

Designers are expected to integrate complex sensors into the user context in such a way that these sensors can attain optimal performance. They also need to organize the feed-forward and feedback information such that the occurrence and effect of accidents in the interaction are minimized. Bellotti et al. [2] provide an interesting set of questions to consider when assessing the social behavior of such sensing systems:

  • Address: how to initiate interaction with a system

  • Attention: establishing that the system is attending

  • Action: expressing what the system needs to do

  • Alignment: monitoring system actions

  • Accident: recovering from interaction errors or misunderstandings

These steps already indicate that an interaction involves one or more loops in which the PCA processes of both the human and the system are involved. The MPCA model emphasizes the need to clarify the status of both sides at any time through a single diagram (Fig. 2). It adopts an identical structure for the cognitive (human) side and the digital (computer) side involved in the interaction, but also emphasizes the asymmetry in the interaction. The Perception (Senses) – Cognition (Reasoning) – Action (Motoric) chain points to an active stance of the human in the interaction, while the Perception (Control) – Cognition (Model) – Action (Display) chain points to a more subservient role for the system. Another aspect that the model emphasizes is that both partners in the interaction can only understand each other’s intentions when these are expressed as changes in the physical environment that both can perceive and interpret.

Fig. 2. Mirrored Perception Cognition Action (MPCA) model.

The MPCA loop starts with the perception of the user. The user senses the system and tries to understand the options it offers, which is the cognition stage. The user then formulates a goal and tries to express it by interacting with the system in the action stage. This leads to changes in the physical world that the system can sense and interpret in the control stage. The model maintained by the system is adapted accordingly, and these changes in the model are reflected through feedback from actuators in the display stage. The physical world thus undergoes changes that the user can observe in the next pass through the loop.

This MPCA model emphasizes that the system needs to provide not only feedback about the actions that it has performed itself, but also feed-forward that can help the user understand which actions he can perform in response [12].
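As a minimal illustration of this loop (not taken from any existing implementation), the following Python sketch treats the two sides as routines that exchange information only through a shared physical environment; all names and values are hypothetical placeholders.

    # Minimal sketch of the MPCA loop (all names are hypothetical placeholders,
    # not part of TPSurgery). Both sides communicate only through the shared
    # physical environment, modelled here as a plain dictionary.

    environment = {"display": "waiting", "user_action": None}
    system_model = {"mode": "waiting"}            # memory kept by the system

    def user_step(env):
        # Perception + cognition: the user reads the display and forms a goal.
        goal = "activate" if env["display"] == "waiting" else "browse"
        # Action: the goal is expressed as a change in the physical world.
        env["user_action"] = goal

    def system_step(env, model):
        observed = env["user_action"]             # control: sense the change
        if observed == "activate":                # model: interpret + remember
            model["mode"] = "active"
        env["display"] = model["mode"]            # display: feedback + feed-forward

    for _ in range(2):                            # two passes through the loop
        user_step(environment)
        system_step(environment, system_model)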

The MPCA model is intended to help the designer make explicit the issues that need to be considered in each stage of the interaction (a sketch of how the answers can be recorded for a concrete sensor follows the list):

  • What can the system do, and how does it communicate its abilities? (assisting the user's perspective)

  • Sensors: how the system is controlled, i.e., how does it sense what is expected of it? (input variables)

  • Actuators: what are adequate ways for the system to display its response? (output variables)

  • Transformations: how does the system use the incoming variables (+memory) to extract meaningful information? (mapping from variables to information)

  • Which information needs to be maintained in order to demonstrate intelligent behavior (i.e., behavior that does not only depend on the input variables, but also on past events)? (memory)
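One possible way to make these answers explicit is to record them in a small structured form, as in the following Python sketch; the record format is only illustrative, and the field values paraphrase the Leap Motion design choices described in Sect. 4 rather than any published tool.

    from dataclasses import dataclass, field

    @dataclass
    class SensorDesignRecord:
        """Answers to the per-stage MPCA questions for one complex sensor."""
        abilities: str           # what the system can do and how it communicates this
        input_variables: list    # Sensors: how the system is controlled
        output_variables: list   # Actuators: how the system displays its response
        transformations: str     # mapping from input variables to information
        memory: list = field(default_factory=list)  # state needed for intelligent behavior

    # Filled in for the Leap Motion as used in TPSurgery (paraphrasing Sect. 4).
    leap_motion = SensorDesignRecord(
        abilities="single-hand gesture control, announced via GUI text and blinking LEDs",
        input_variables=["hand position", "hand identity (left/right)", "pinch strength"],
        output_variables=["GUI colour changes", "TUI LEDs", "surgery light"],
        transformations="dwell-time and pinch-strength thresholds filter unintended input",
        memory=["default hand", "current mode (lighting / information browsing)"],
    )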

In the next section we illustrate how to apply the MPCA model in an interaction design with the Leap Motion sensor.

4 Designing with a Complex Sensor

An existing surgery assist system [9] was redesigned using the MPCA model. The redesigned system is called TPSurgery. It is an interactive surgery assist system that supports browsing patient information and adjusting the lighting within the surgery room. Because of the sterile environment, all interactions need to be accomplished without touching. The system is controlled by hand gestures that are sensed by the Leap Motion, while the feedback and feed-forward are accomplished through a combination of a tangible user interface (TUI) and a graphical user interface (GUI).

The usage context is as follows. Once the surgery begins, the system is powered on and waits for interaction. The surgeon holds his/her hand in a position that can be detected in order to activate the system. He/she uses the system to adjust the surgery light. He/she stops adjusting and starts performing surgery. When he/she needs patient information, he/she re-activates the system and switches it to information browsing mode. After browsing the patient information, he/she stops looking at the display and continues with the surgery. When the surgery ends, the system is switched off.
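This scenario can be summarized as a simple mode machine. The following sketch is only illustrative: the mode names follow the description above, while the event names and transitions are our assumptions about how such a controller could be organized, not the actual TPSurgery implementation.

    from enum import Enum, auto

    class Mode(Enum):
        OFF = auto()
        WAITING = auto()          # powered on, hand not yet detected
        LIGHT_CONTROL = auto()    # adjusting the surgery light
        INFO_BROWSING = auto()    # browsing patient information

    # Assumed event names; unknown events leave the mode unchanged,
    # mirroring the system's behavior of ignoring wrong gestures.
    TRANSITIONS = {
        (Mode.OFF, "power_on"): Mode.WAITING,
        (Mode.WAITING, "hand_held_still"): Mode.LIGHT_CONTROL,
        (Mode.LIGHT_CONTROL, "switch_button_dwell"): Mode.INFO_BROWSING,
        (Mode.INFO_BROWSING, "switch_button_dwell"): Mode.LIGHT_CONTROL,
        (Mode.LIGHT_CONTROL, "hand_withdrawn"): Mode.WAITING,
        (Mode.INFO_BROWSING, "hand_withdrawn"): Mode.WAITING,
        (Mode.WAITING, "power_off"): Mode.OFF,
    }

    def step(mode, event):
        return TRANSITIONS.get((mode, event), mode)

    mode = Mode.OFF
    for event in ["power_on", "hand_held_still", "switch_button_dwell",
                  "hand_withdrawn", "power_off"]:
        mode = step(mode, event)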

The different stages in the interaction for which an MPCA model needs to be considered are the following:

  • When the system is on, it provides feed-forward information both in the GUI and by means of blinking LEDs in the TUI, inviting the user to start the interaction through hand gestures (perception) (Fig. 3-a). To simplify the interaction, only single-hand gestures are allowed; a warning is provided in both the GUI and the TUI when two hands are detected.

    Fig. 3. Workflow of TPSurgery. From top left to bottom right: (a) TPSurgery starts working, (b) setting the default hand, (c) browsing patient information with a pinch gesture, (d) out-of-detection warning, (e) not reacting to wrong gestures, (f) rolling the hand to recover from an error, (g) switching between different modes, (h) controlling the surgery light with a pinch gesture.

  • If the user considers his/her left hand the ‘default hand’ (cognition), he/she puts the left hand above the Leap Motion and holds it still for a little while (action). The sensor detects the hand and identifies it as a left hand (control). The system stores that the left hand is the default hand and keeps track of the time that the hand has been held still (modeling). The GUI provides feedback on both parameters once the time that the hand is detected exceeds a set threshold (display). On the TUI, the LED with the left-hand shape is turned on, informing the user that the left hand has been detected as the preferred hand and that the system is ready to accept input from it. Note that forcing the user to hold his hand still above the Leap Motion for a little while before starting to interact has a very positive influence on the sensor’s ability to correctly interpret subsequent hand gestures (Fig. 3-b); a code sketch of this dwell-based activation is given after this list.

  • In the ‘infomapping’ stage, which displays patient information, a blue dot on the screen indicates that the system is attending to the hand and is tracking hand gestures. The LEDs on the TUI indicate the detection range of the Leap Motion. If the hand starts crossing one of the borders of the detection range, the LEDs on that border turn from blue to red, urging the user to adjust the position of his hand (Fig. 3-d). When the ‘infomapping’ stage is inactive, the human body displayed on the GUI is shown in white to inform the user that the system is not accepting hand gestures (Fig. 3-e).

  • The user can use a pinch gesture to interact with the GUI display (action). The system decides whether the pinch is intended by comparing the pinch strength, which is derived from how well the thumb and forefinger form a closed loop, to a preset threshold (control & modeling); this threshold test is also illustrated in the sketch after this list. If the user action is interpreted as intentional, the color of the displayed human body changes to blue (display), informing the user that the system is attending to him (perception & cognition) (Fig. 3-c).

  • The user can move his/her default hand to change the position and orientation of the displayed human body. To prevent the system from responding to unintended movements, the user has to keep pinching, so from the user's point of view the action to be performed is ‘pinch and move’. If the user unintentionally moves the display to an unwanted orientation, he can extend his hand and hold it for a little while, which returns the display to its original orientation (Fig. 3-f).

  • If the user wants to change the lighting, he/she needs to move his/her hand to the location of the ‘switch’ button (Fig. 3-g) and hold it on that button for a few seconds to switch modes, which avoids unintended mode changes. During this time the button turns blue to tell the user that the system is paying attention to the hand gesture (Fig. 3-h). The gestures used for positioning the surgery light are similar to the gestures used to control the displayed human body, so that the user does not have to learn two separate sets of gestures.

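Two of the filtering mechanisms described above, activating only after the hand has been held still for a while and accepting a pinch only when its strength exceeds a threshold, can be sketched as follows. The code is a minimal illustration under assumed names, thresholds and units; it is not the actual TPSurgery implementation and does not call the Leap Motion API directly.

    import time

    DWELL_SECONDS = 1.5        # assumed hold-still time before activation
    PINCH_THRESHOLD = 0.8      # assumed threshold on pinch strength (0.0 .. 1.0)
    MOVE_TOLERANCE_MM = 10.0   # assumed max movement still counted as "holding still"

    def _distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    class DwellDetector:
        """Reports activation only after the hand has been held nearly still long enough."""

        def __init__(self):
            self.anchor_pos = None
            self.anchor_time = None

        def update(self, hand_pos, now=None):
            now = time.monotonic() if now is None else now
            if self.anchor_pos is None or _distance(hand_pos, self.anchor_pos) > MOVE_TOLERANCE_MM:
                # The hand moved (or just appeared): restart the dwell timer.
                self.anchor_pos, self.anchor_time = hand_pos, now
                return False
            return now - self.anchor_time >= DWELL_SECONDS

    def is_intentional_pinch(pinch_strength):
        # A pinch counts only when thumb and forefinger (nearly) close the loop.
        return pinch_strength >= PINCH_THRESHOLD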
The above description clarifies how design decisions for the prototype have been influenced by the MPCA framework. A small user test was also performed with the prototype. Most users found that the system made sense in communicating its detection range while they tried to establish interaction by having their hand recognized. The feed-forward and feedback offered through both the graphical user interface and the tangible user interface were mostly deemed appropriate. In short, the users could quickly make sense of the sensing system. Some subjects were, however, confused by the context of the application, which was not the primary topic of the reported study.

5 Discussion and Future Work

This paper proposes the MPCA model as a useful framework when designing interaction systems with complex sensors. An example of how to design an interaction system with the help of this model was explained in some detail. User tests were performed in order to establish whether or not users were able to make sense of the sensing system. The results showed that users appreciated the feed-forward and feedback offered through the GUI and the TUI.

More design examples and user tests are obviously required to more firmly establish that the MPCA model is indeed useful for designers and leads to designs of complex sensor systems that users can understand and appreciate. The model is therefore actively promoted in a design course at our department. This paper describes one of the student project outcomes from this course.