Personal and Ubiquitous Computing, Volume 16, Issue 1, pp 27–37

Steerable projection: exploring alignment in interactive mobile displays

  • Jessica R. Cauchard
  • Mike Fraser
  • Teng Han
  • Sriram Subramanian
Original Article

Abstract

Emerging smartphones and other handheld devices are now being fitted with a set of new embedded technologies such as pico-projection. They are usually designed with the pico-projector embedded in the top of the device. Despite the potential of personal mobile projection to support new forms of interactivity such as augmented reality techniques, these devices have not yet made a significant impact on the ways in which mobile data is experienced. We suggest that this ‘traditional’ configuration of fixed pico-projectors within the device is unsuited to many projection tasks because it couples the orientation of the device to the management of the projection space, preventing users from easily and simultaneously using the mobile device and looking at the projection. We present a study which demonstrates this problem, establishes the requirement for steerable projection behaviour, and identifies initial user preferences for different projection coupling angles according to context. Our study highlights the importance of flexible interactive projections which can support interaction techniques on the device and on the projection space according to task. This inspires a number of interaction techniques that create different personal and shared interactive display alignments to suit a range of different mobile projection situations.

Keywords

Personal mobile projection · Handheld pico-projectors · Interactive steerable displays · Hand and feet interaction

1 Introduction

The new range of mobile pico-projector and projector phone platforms provides a potentially unique selling proposition for manufacturers. Mobile projected displays are being heralded as a new opportunity for co-located collaboration [1]. However, it remains unclear how these projected displays fit alongside the existing hardware ecology of the mobile handset. Pico-projectors are now being sold either embedded inside mobile phones (e.g. Samsung Beam i8520), cameras (e.g. Nikon S1000pj) or tablet PCs (e.g. DigiLife iOne [2]), or as accessory projectors that can be plugged into most personal devices (e.g. WowWee Cinemin Swivel [3]).

Personal projectors offer one of the first design challenges for a more general class of mobile devices that incorporate multiple displays. Indeed, the ways in which handheld projectors are positioned will affect the use of other mobile inputs (e.g. keyboards or cameras) and outputs (e.g. screens or vibro-tactile motors). This creates new challenges for the role of the traditional mobile handset screen particularly with regard to its placement relative to the projection.

A number of possible designs might be used to optimally control the relationship between two displays. Currently, in projector phones the projector is typically mounted above the screen with a horizontal projector throw. This generates a fixed orthogonal angle between the screen and the projection, making it difficult for the person holding the device to see the screen and projection simultaneously, or even to rapidly interleave between them. This might not be an issue if, for example, each display were used separately—the projector for public interactions and the phone’s screen for private ones. However, this would preclude new opportunities emerging that exploit both displays simultaneously; for example, Hang et al. [4] demonstrate the advantage of using both a projector and a phone screen for specific applications such as text input. With simultaneous dual displays, users can also decide which data they want to keep private and which data they want to share on another screen at the same time. One example of a dual-display mobile device is presented by Hinckley et al. [5]; they describe an alternative to separating, interleaving or switching off displays. The device possesses a range of ‘postures’ corresponding to different operational modes, identified through the hinge angle, that people can use on their own or collaboratively. An alternative dual-display configuration might be to physically separate the projector hardware from the handset to support the dynamic juxtaposition of the displays. However, physically separating devices increases demands on user control, cost and power, requires additional hardware to send video signals, and prevents the projector from easily benefiting from handset input capabilities such as accelerometers, cameras and touch screens.

Several projects have explored input techniques for mobile or wearable projection systems. However, at this time interactions with projections from these devices are limited in a couple of key ways. Firstly, the device’s hardware ecology itself may prevent the user from using the device where and how they want in the environment. For instance, the Light Touch™ pico-projector [6] needs to be put down on a flat surface within hand’s reach in order for the user to interact with it, limiting its mobile capability. Similarly, Harrison et al.’s [7] Skinput interface requires the device to be fixed on the user’s body and for the user not to move their body relative to the projection so that their skin can be used as an input surface. The fixed or worn projector in these cases makes it difficult to rapidly choose and use appropriate interactive surfaces as the user moves through the environment. Furthermore, the fixed relationship between the camera and the projector assumes that the interaction should in some way align with the projection. For example, Mistry and Maes [8] present a technique based on mid-air interaction where users can gesture to a camera worn around the neck. Cao et al. [1] use a combination of two buttons attached to the projector as input to the projection.

The relative placement of input and output capabilities on handheld mobile devices could create significant problems for interacting with projections. Indeed, if a touch screen and projector are not aligned on a device in such a way that both can be used simultaneously, then the use of the screen as input to the projection is virtually impossible. In response to this problem, steerable projectors such as the S-Vision prototype [9] have emerged; equipped with a swivel, the device can be put on any surface and project at different heights, on a wall for example, regardless of its own position.

Whereas applications such as presentation viewing might benefit from being projected on a wall, an Augmented Reality (AR) application guiding the user through streets, such as the Nearest Tube application [10], might benefit from being projected directly on the pavement, where users could interact with the application by stepping onto directional arrows. Similarly, it is interesting to note the 90° angle between the phone’s screen and the projection throw in the AR project Map Torchlight [11], which connects a mobile handset and a handheld projector in a fixed alternative configuration, although it does not address the problem of contextually adapting the projection angle.

Drawing on these innovative examples, we suggest a generalised approach to the configuration issues by using steerable projection, in which displays and their inputs can be reoriented with regard to one another on the same device in order to create different juxtapositions and arrangements that suit particular tasks. In the following sections, we describe our implementation of the dynamically steerable projection on a handheld device. A study demonstrates the requirement for such an approach and determines initial preferences for display offset depending on application context. We draw on this study to inspire novel interaction paradigms for combining such steerable projection systems with interactivity. Our techniques adapt to situations in which a variety of projection surfaces may be selected, and a variety of input techniques may be used depending on a user’s choice of alignment or misalignment with the projection beam. We explore projections that are touched with the hand or foot, as well as projections that detect user’s hand or foot movements in front of the handheld device’s camera to interact with the projection.

2 Steerable projectors

According to Ashdown and Sato [12] p. 1, “a steerable projector is a projector whose beam can be moved under computer control”. In the case of personal steerable projectors, users should be able to choose relative display angles that are best suited for them depending on the context. We expect that some offsets are better suited than others for particular tasks or applications; these correspond to the “difficulties in handling the context switch” described by Hang et al. [4] p. 214. One difficulty in switching contexts may be that mobile phones or Personal Digital Assistants (PDAs) contain private data such as contact details, personal information, text messages, emails or pictures. In the case of projector phones, the projection space can be used in conjunction with the mobile phone’s screen by either cloning the displays or by displaying complementary information across both screens. Cao et al. [1] address this privacy issue with a permission control system in which data is either public, semi-public or private and is displayed accordingly.

A steerable projector offers a solution in which displaying such categories of information determines, or is determined by, the spatial relationships between a private and a public display. Users can choose where to display information; they can decide, for example, to project on a large projection space (public) in front of them or on a smaller one on a desk for more controlled semi-private sharing. Also, a specific projection angle might be better adapted to the physical constraints of the projection surfaces available [13]. For example, on a train an appropriate projection space might be the folding tray attached to the rear of the seat in front. Thus, we expect the optimal projection angle to depend on criteria such as the nature of the application displayed on the screen, the privacy settings and the environment. The privacy settings are determined in terms of data privacy as well as in terms of private/public spaces. However, we currently have little principled information on mechanisms or preferences for projected display use. To explore this emerging design space, we began by looking at the angle between a screen and a projection coupled in the same handheld device.

Our prototype (Fig. 1) has been implemented using a Samsung Omnia HD i8910 (see footnote 1) running Symbian 5th Edition OS, and a handheld projector (Pico Pocket V3 Projector). The phone handset and the projector communicate through a bespoke TV-out/mono cable. This prototype is fully portable; the architecture (Fig. 2) can be used with any mobile phone equipped with TV-out capability. It can also be used with any handheld projector with Composite Video input. For this example we have incorporated a mirror at the top of the projector lens, based on Pinhanez’s [14] Everywhere Displays projector but adapting the design to handheld use. Although a final design would mount the lens on a pivoting head inside the device and/or use an array of steerable Micro-Electro-Mechanical Systems (MEMS) mirrors, for our proof-of-concept we have used a single larger mirror attached to a micro servo motor (Hitec HS-55) which is controlled by an Arduino Bluetooth microcontroller. The user selects the angle of the mirror through the phone’s touch screen; the application then wirelessly sends information to the electronic board, which adjusts the mirror’s position accordingly.
Fig. 1

Prototype disassembled to show components

Fig. 2

Prototype architecture
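To make the control path concrete, the sketch below shows handset-side code that sends a chosen mirror angle to the servo controller over a Bluetooth serial link. The port name, baud rate and one-byte framing are illustrative assumptions rather than the prototype's actual protocol; the preset angles correspond to the Wall, Desk and Floor settings used in the study described below.

```python
# Minimal sketch, assuming the Arduino Bluetooth board exposes a serial port and
# accepts a single byte giving the mirror angle in degrees (illustrative protocol).
import serial  # pyserial

# Named steering presets used in the study: Wall 0°, Desk 30°, Floor 50°.
PRESETS = {"wall": 0, "desk": 30, "floor": 50}

def set_mirror_angle(link: serial.Serial, degrees: int) -> None:
    """Send the requested mirror deflection to the servo controller."""
    degrees = max(0, min(90, degrees))   # clamp to an assumed safe servo range
    link.write(bytes([degrees]))         # one byte: angle in degrees

if __name__ == "__main__":
    # "/dev/rfcomm0" and 9600 baud are placeholders for the paired Bluetooth link.
    with serial.Serial("/dev/rfcomm0", 9600, timeout=1) as link:
        set_mirror_angle(link, PRESETS["desk"])   # steer to the Desk preset
```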

Our prototype automates the selection of different projected orientations. Manual steering systems such as WowWee’s Cinemin Swivel portable pico-projector [3] are more straightforward to implement. Nonetheless, automated steerable designs offer more possibilities, such as readjusting the projection’s position, keeping the position still even if the user’s hand is moving (e.g. using accelerometers to perform tilt compensation), automatically finding the optimal projection surface, and even moving the projection itself (for example to indicate directions or to adapt to coarse changes in the user’s position such as lying down). While we do not explore such functionality in this paper, we anticipate that automated steerable projection will support future techniques such as detecting optimal projection spaces depending on lighting conditions or combining multiple projections in real space to create larger projection spaces or provide improved resolution or brightness.
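As one illustration of the automated behaviours listed above, the sketch below shows how accelerometer-based tilt compensation could keep the projection still as the holding hand pitches. The pitch formula and the counter-steering rule are our own assumptions rather than functionality of the prototype.

```python
# A minimal sketch, assuming a 3-axis accelerometer read-out and a steerable mirror
# whose angle can be set in degrees (neither is exposed this way by the prototype).
import math

def device_pitch(ax: float, ay: float, az: float) -> float:
    """Pitch of the handset in degrees, estimated from gravity when roughly at rest."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def compensated_mirror_angle(target: float, pitch_now: float, pitch_ref: float) -> float:
    """Counter-steer the mirror by however far the handset has pitched away from the
    pose in which the target projection angle was originally chosen."""
    return target - (pitch_now - pitch_ref)

# Example: the Desk preset (30°) was chosen while the phone was level; the hand then
# tilts the handset up by 5°, so the mirror moves to 25° to hold the projection still.
print(compensated_mirror_angle(30.0, pitch_now=5.0, pitch_ref=0.0))  # -> 25.0
```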

Steerable projection is affected by standard projection issues such as keystoning, because the projection needs to be of good quality in more than one orientation. If the system makes use of a mirror, then the lens needs to be kept as close to the mirror as possible in order to minimise distortion. In our prototype, we counteract part of the keystoning effect by sending a signal to the projector at a lower resolution than the potential maximum. This partially reduces distortion at the extremities of the image but reduces the overall scale. Steerable projection systems may also present difficulties with focusing, although emerging laser pico-projector technologies such as those produced by Microvision and Light Blue Optics will mitigate this problem.
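A complementary software approach, not used in our prototype but common in projection systems, is to pre-warp each frame with a perspective transform so that the obliquely thrown image lands as a rectangle. The sketch below illustrates the idea with OpenCV; the corner coordinates are made up, since in practice they depend on the mirror angle and throw geometry.

```python
# Generic keystone pre-warp sketch (not the resolution-reduction approach described
# above): warp the frame onto an inverse trapezoid so the oblique throw undoes it.
import cv2
import numpy as np

def keystone_prewarp(frame: np.ndarray, dst_corners: np.ndarray) -> np.ndarray:
    """Map the full frame onto the quadrilateral that the oblique throw will
    stretch back into a rectangle on the projection surface."""
    h, w = frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src, dst_corners.astype(np.float32))
    return cv2.warpPerspective(frame, M, (w, h))

# Illustrative corners for a mild downward-tilt keystone (narrower at the top).
frame = np.full((480, 640, 3), 255, dtype=np.uint8)
corners = np.float32([[40, 0], [600, 0], [640, 480], [0, 480]])
corrected = keystone_prewarp(frame, corners)
```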

3 Study: projection-device orientation

We wanted to determine whether steerable projection was important for supporting different uses of handheld projectors. We conducted an experiment to determine whether different angles between the mobile device screen and the projection space are useful for different tasks. Three simple tasks were designed for participants, during which the selection of different projection angles was monitored. Our hypothesis was that participants would prefer different orientations between the screen and the projection for different tasks.

3.1 Participants and materials

We recruited twenty-one people between 22 and 40 years old to test the prototype, seven working as individuals and fourteen working in pairs (four pairs were of the same gender and three of mixed gender). Our participants were all regular mobile phone users, a minority were already smartphone owners, and none of them had ever seen or used a pico-projector or a device with an embedded projector. Each task was performed by both the individual participants and the pairs, using one device only in the pair case. Although our device and architecture support steering to any angle, for the experiment the software was implemented with a choice of three pre-determined projection angles, labelled Wall projection, Desk projection and Floor projection (Fig. 3).
Fig. 3

Steerable angles used for the study

Wall projection corresponds to a horizontal projection, identical to the one available on current mobile projector phones. Desk and Floor projections correspond to a 30° and a 50° downward inclination, respectively. The interface for switching angle is very simple, providing a single ‘angle’ button at the bottom left of the touch screen that opens a pop-up menu with three choices: Wall, Desk or Floor. We purposely gave names to the projection angles rather than using the angles themselves, as we believed the names would be more meaningful to users who were not necessarily technical people. Since this could have been confusing, we emphasised that users could project wherever they wanted, regardless of the name of the angle.

3.2 Procedure

Each session lasted between 30 and 45 min. During the first part of the session, we demonstrated the prototype to each participant (individuals or pairs). The participants used the prototype until they were familiar with the system. Since the interface was very straightforward, requiring only two touches to change angle, the participants were quickly confident using the system on their own. In the case of pairs only one user would hold the device, and in some cases the participants decided to switch who was holding the device during the experiment. They were given a set of tasks and asked, for each of them, to choose the projection angle they felt most comfortable with in order to complete that specific task. It was made clear that the participants should complete the task in their own time and that they could choose any surface to project onto. The tasks were chosen to include a spread of demands, from keeping the projection very still to observing as much detail as possible, through to using the projection on the move. Our overall hypothesis was that different tasks would suit different projection angles, and therefore motivate our design. To test this hypothesis, we designed three tasks which could be used for personal projection and recorded the extent to which preferences for particular angles were shown. The chosen tasks were:
  • Spot the difference. The participant(s) projected two images and had to spot at least five differences between them. We expected that this task would highlight different needs between individuals and pairs.

  • Reading. The participant(s) had to read aloud an email displayed on the projection. This task required the user to keep the prototype very steady while concentrating on the projected image. We expected that it would highlight considerations for privacy.

  • Navigation. The participant(s) had to follow projected arrows to help them navigate between two points across a maze. They were walking while holding the device and interacting with it using the touch screen. It was emphasised that they could use any surface available along the way for the projection. We expected that motion would challenge the participants to continuously find new projection spaces.

We monitored whether and how often participants changed angles during and between tasks and for what reason. We observed participants’ behaviours during the sessions using a think-aloud protocol (and listening to conversations between the pairs) and conducted a semi-structured interview after the completion of the tasks to gather qualitative results. The participants had to identify their preferred angle for each task and explain why. They were then given an opportunity to express their opinion on the device and asked whether they would use it if available and in which situations.

4 Results

The participants were very enthusiastic about the device in response to questions about all tasks. They grasped the concept of steerable projection spaces quickly and showed no difficulty using the phone’s display to switch angles. As expected, some participants found the low resolution, low contrast and keystone effects limiting. For each task, we conducted a Pearson’s chi-square test to identify whether preferences of viewing angle were significantly different from random choice for the task, and recorded observations on the detail of participants’ selected angles.

4.1 Task 1: spot the difference

A chi-square test indicated a significant difference between the observed and expected frequency of angle that participants felt comfortable with (χ2 = 7.43, df = 2, p < 0.05). The majority (57%) selected Wall projection (Fig. 4a), with the remainder (43%) selecting Desk projection (Fig. 4b) and none selecting Floor projection.
Fig. 4

Spot the difference—wall (a) and desk (b) projections

For pairs, one participant typically held the device while the other pointed at the differences (Fig. 4b); some participants used the shadow of their fingers on the beam to point to details of the image. Most participants tried completing the task with different angles and some changed angles during the task. Participants reported that the Desk projection was chosen because it was closer and easier to point at and touch the projected image. The Wall projection was chosen as a natural physical position, with more control over the projection size. 70% of the participants said they would be likely to change angle depending on various factors, including surfaces available to project on, number of people they would want to show an image to, and how simple it would be to change angle.

4.2 Task 2: reading

A chi-square test showed a significant difference between the observed and expected frequency of angle selected (χ2 = 7.43, df = 2, p < 0.05). The majority (57%) selected Desk projection, with the remainder (43%) selecting Wall projection and none selecting Floor projection.

As we expected, reading an email out loud raised some concerns over privacy issues. Our choice of email for the reading task may itself have been a distraction, as some participants said they would not use the projection at all for reading email, while others believed that they would only use it in private or semi-private places. The Wall projection was described as being ‘more comfortable’, with no need to bend one’s neck, just looking straight ahead. However, participants also indicated that they would switch to Desk projection if they were in a more public place. This confirmed that the projection space itself can be used as a way of managing shared privacy. Desk projection was also chosen because participants found the horizontal surface to be the most sensible place to read. A number of participants commented that they would like to pre-set a ‘reading angle’ on their phone.

4.3 Task 3: navigation

A chi-square test showed a significant difference between the observed and expected frequency of angle selected (χ2 = 10.86, df = 2, p < 0.01). The majority (71%) selected Desk projection, with the remainder (29%) selecting Floor projection and none selecting Wall projection.

All participants used the floor as their projection space for the directional arrows as there was no adequate continuous wall space while walking. The choice between the Floor and Desk projection angles seemed to depend on whether participants naturally held the device horizontally (Desk projection preferred) or tilted upwards (Floor projection preferred). When the device was held horizontally, the Floor projection was very close to the body and participants did not feel comfortable walking while looking at their feet. On the other hand, when the device was tilted, participants found the Floor angle approximately suitable, although they commented that it should be easily adjustable depending on: the number of people around (short angles suit crowded places); terrain (going up or downhill would affect the required projection orientation); and speed (faster movement would require projected information further away). The participants also commented that in general it was tricky to simultaneously walk along the path and control the projection through the touch screen. In most cases, they had to stop walking to change the projection angle and then resume the task with the new projection angle.

4.4 Additional results

We collapsed results across all tasks to test whether there was an overall preference for a particular projection angle for our design. A chi-square test showed a significant difference between the observed and expected frequency of angle selected (χ2 = 14.29, df = 2, p < 0.001). Across all tasks, the majority selected Desk projection (57%), followed by Wall projection (33%) and finally Floor projection (10%).
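For readers who want to check the statistics, the reported values are consistent with the following counts, assuming 14 selections per task (seven individuals plus seven pairs, one device per pair) and therefore 42 selections overall; the counts themselves are inferred from the reported percentages.

```python
# A quick verification sketch; observed counts are inferred from the percentages
# reported above, and the expected distribution is uniform over the three angles.
from scipy.stats import chisquare

observed = {
    "spot the difference": [8, 6, 0],    # Wall 57%, Desk 43%, Floor 0%  (n = 14)
    "reading":             [6, 8, 0],    # Wall 43%, Desk 57%, Floor 0%  (n = 14)
    "navigation":          [0, 10, 4],   # Wall 0%,  Desk 71%, Floor 29% (n = 14)
    "all tasks":           [14, 24, 4],  # Wall 33%, Desk 57%, Floor 10% (n = 42)
}

for task, counts in observed.items():
    stat, p = chisquare(counts)          # df = 2 for three categories
    print(f"{task}: chi2 = {stat:.2f}, p = {p:.4f}")
# Prints chi2 values of 7.43, 7.43, 10.86 and 14.29, matching those reported above.
```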

Our small-scale study did not have the statistical power to make definitive conclusions about significant differences between individual and pair behaviours. However, discussions during the interview indicated that pair interactions heightened a focus on privacy issues. Moreover, we observed that pairs appeared to change projection angle more frequently. This may be to achieve the more complex task of balancing a co-participant’s changing viewing requirements with the user’s own viewing requirements.

5 Discussion

We found evidence that the task being undertaken affects the projected orientation requirements and therefore that different tasks from our selected activities will produce different preference results. For our selected tasks, the overall preferred projection-screen coupling angle was Desk (30°), and not the Wall angle (0°) currently preferred by handheld projector manufacturers. In the task which included user mobility, the currently preferred Wall angle configuration was not used at all. We deliberately ran the experiment in a room where all types of projection spaces (such as whiteboards and desks) were available, so users could decide their ideal projection space for each task. Every single participant, at some point, changed angle to accomplish the tasks. They all agreed that they would use different angles depending on the context or the application that they were using. Thus we found strong observational and statistical evidence for the benefits of steerable projection. Our participants also provided some interesting suggestions for future designs, including automatic selection of the coupling angle by the device depending on the chosen application, sub-degree adjustment using analogue controls, and preset favourite angles for given tasks stored alongside other handset profile settings.

Finally, we also noted that the relationship between steerable projection and interactivity, with control of the projection restricted to the phone screen, meant that participants had to alternate between the screen and the projection for control. This issue was particularly prominent in the navigation case, where moving compounded the difficulties. It was also particularly observable in studies with pairs, where participants had to iterate between angles, often to balance the needs of the user and the observer. Projection surfaces that are increasingly distant from the hands, such as the floor, will also impose increased demands on the design of suitable interaction techniques for these settings. In the following section we explore how these implications have inspired our design of interaction techniques for steerable projections.

5.1 Interaction techniques for steerable projections

Interaction techniques for steerable projections have to be flexible enough to support the use of different surfaces such as a wall, desk, floor or even ceiling. While some features of the device require it to be held in a steady position, the projection’s interaction techniques need to remain viable while the device is being held. Indeed, the main advantage of steerable projectors is that they are fully mobile and therefore should not require the device to be placed on a stable surface to be used.

In the study described in the previous section, we used a touch screen, as these are available in many pico-projector devices such as commercialised projector phones and cameras. However, current touch screen technologies mostly give visual feedback and are not always practical for ‘on the move’ interaction, in which users might not want to look at the screen in order to interact with the application. Furthermore, touch screens normally preclude simultaneous shared interaction between pairs or groups with a projection. A good technique for ‘on the move’ interaction would also allow unconstrained movement with no additional sensors or physical tags.

Since our projection is steerable, we suggest the user could employ different body parts to produce interactive input. We have begun to explore direct interaction techniques using hands and feet depending on the angle of projection, as these provide the greatest reaching range to touch a projection originating from a handheld device.

As well as considering the input source on the body, we also need to configure the alignment between the projector and the camera that will be used to track the hands or feet. Since the projector’s throw angle can be changed, we might consider fixing the camera to the projector’s steering mechanism so that they are always aligned. However, as with the tasks described in the section Study: projection-device orientation, we may also wish to consider situations in which the control space and projection space are misaligned, and so make the camera independently steerable. This would allow users to control their projection without detracting from the projected content or without being noticed, as in Montero et al.’s secretive gestures [15]. Thus we may design a system to allow the interactive and projection spaces to be deliberately aligned or misaligned at particular points in a task.

If we are controlling a projection without a physical surface such as a touch screen, our system also needs to consider how to ‘click’ or select content as well as move a cursor or point of focus. Several selection techniques have been proposed in the literature for various gesture interaction systems. One possibility is the user’s hand dwelling over the item they want to select beyond some fixed length of time. Another possibility would be to put a reflective surface on the user’s body part that can easily be recognised by a camera, such as the “spotlight from tape on a […] boot” ([16] p. 212) in the Interactive Dirt system. Another example is the use of a vision-based technique to recognise hand shape, such as the pinch gestures proposed by Wilson [17]. These techniques require the user to learn a set of gestures to use the system. Finally, another technique would be to use camera-vision associated with the projection in order to recognise the distance between the user’s hand or foot and the projected image; a selection could be made when the user touches the projection space itself, a technique commonly used with fixed position depth cameras alongside interactive projection surfaces [18].

5.1.1 Implementation

In order to demonstrate the capabilities of interactive steerable mobile projection systems, we updated the steerable projection system used in the study described in the section Study: projection-device orientation to support hand and foot tracking through the mobile phone’s camera. The tracking is realised on our prototype through real-time vision-based algorithms that make use of the OpenCV library running directly on the Nokia N900 phone. Our system is therefore completely autonomous and does not require server-side processing. Although we have not built an independently steerable camera yet, we decided not to fix the camera relative to the projector, in order to explore different alignment settings for the interaction. The angle between the camera and the projector can currently be changed manually; since we are using the phone’s camera, the angle is between the phone’s body and the pico-projector.

In software, we designed two settings: the first corresponds to the camera and projector being aligned, while the second corresponds to the camera and projector being misaligned. In addition, we have implemented different interaction techniques to respond to the challenges of the different alignment settings. In both cases, a vision-based algorithm, explained below, is used to recognise the colour, shape and contour of the hand and foot. This ensures that other objects in the environment do not trigger interaction. Examples of these techniques in use can be found in the section Example applications.

5.1.1.1 Algorithm for aligned camera-projector

In this setting, the camera’s interaction space matches the projection space, which means that the camera ‘sees’ the projected image. The technique implemented for this setting corresponds to a dwell-threshold based selection technique using the hand or foot of the user. This is particularly suitable since the user looks at the same space as the camera; moreover, there is no need for a cursor since this is a direct manipulation within the projection.

In terms of the algorithm itself, the contour and position of the user’s hand or foot are detected using colour segmentation, frame by frame, with the OpenCV library. Colour segmentation is a commonly used method to separate human body parts from the background [19]. It also requires relatively little processing power and can be used in real time on a phone without limiting other processor demands. The HSV colour space [20] was used to set a range of colours corresponding to skin colour for hand recognition and tracking. In the case of foot tracking, for the purposes of this demonstration we used a dark range of colours, imposing the limitation that the user had to wear dark shoes (although clearly more sophisticated algorithms are possible). The HSV colour model is especially helpful for analysing image information under non-uniform lighting conditions. Once the hand or foot is recognised, the algorithm separates the contour from the rest of the background (segmentation stage), allowing the software to easily determine the position of the user’s hand or foot. When exploring our design, environmental factors such as indoor lighting and the projection background colour were kept under control in order to reduce image noise and increase the stability of our recognition algorithm; again, more stable algorithms are possible beyond our demonstration, although their real-time response will trade off against locally available processor capacity.
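The sketch below illustrates this colour-segmentation stage with the OpenCV Python bindings: threshold a frame in HSV space, keep the largest matching contour and return its outline and centroid. The skin-tone bounds are illustrative assumptions; the prototype tuned its own ranges (including a dark range for shoes) to its lighting conditions.

```python
# Minimal colour-segmentation sketch (OpenCV 4.x); the HSV bounds are assumptions.
import cv2
import numpy as np

SKIN_LOWER = np.array([0, 40, 60], dtype=np.uint8)     # assumed lower HSV bound for skin
SKIN_UPPER = np.array([25, 180, 255], dtype=np.uint8)  # assumed upper HSV bound for skin

def find_hand(frame_bgr: np.ndarray):
    """Return (contour, centroid) of the largest skin-coloured blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOWER, SKIN_UPPER)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)           # segmentation stage: largest blob
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
    return hand, centroid
```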

Once the hand (or foot) is detected, the algorithm needs to check what is being selected on the projection. The selection process starts as soon as the user’s hand (or foot) is detected inside the field of view of the camera. There are two steps to this selection process. First, the algorithm checks whether the detected contour is decreasing in size, implying that the hand (or foot) is moving towards the projection and further away from the camera. This strategy allows the system to differentiate the intention to touch from other movements. Then, after a few frames of the contour decreasing in size, the algorithm checks whether the hand (or foot) stays still for more than 10 frames. The phone’s camera runs at 20 fps to guarantee fluent gesture detection, so the corresponding dwell-threshold time is 0.5 s. In practice, the selection time is somewhat longer due to the processing required. The suitability of this dwell time was determined empirically through use of the system. Once the selection is confirmed, the position of the hand (or foot) is compared to the positions of items that can be selected on the projection space and the corresponding item is selected.
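The following sketch reconstructs this two-step selection logic as a small state machine; the 10-frame dwell and 20 fps rate come from the description above, while the approach length and stillness tolerance are assumed values.

```python
# Dwell-threshold selection sketch; feed it one contour area and centroid per frame.
APPROACH_FRAMES = 3   # shrinking frames required before dwell counting starts (assumed)
DWELL_FRAMES = 10     # 10 frames at 20 fps, i.e. the 0.5 s dwell threshold from the text
STILL_TOLERANCE = 8   # max centroid movement in pixels still counted as "still" (assumed)

class DwellSelector:
    def __init__(self):
        self.prev_area = None
        self.prev_pos = None
        self.shrink_frames = 0   # frames in which the contour has shrunk (approach phase)
        self.still_frames = 0    # frames in which the contour has stayed put (dwell phase)

    def update(self, area: float, pos: tuple):
        """Return the confirmed touch position when a selection fires, else None."""
        if self.prev_area is not None and area < self.prev_area:
            self.shrink_frames += 1                  # hand/foot moving towards the surface
        approaching = self.shrink_frames >= APPROACH_FRAMES
        if approaching and self.prev_pos is not None:
            dx, dy = pos[0] - self.prev_pos[0], pos[1] - self.prev_pos[1]
            within = dx * dx + dy * dy <= STILL_TOLERANCE ** 2
            self.still_frames = self.still_frames + 1 if within else 0
        self.prev_area, self.prev_pos = area, pos
        if self.still_frames >= DWELL_FRAMES:
            self.shrink_frames = self.still_frames = 0   # reset for the next selection
            return pos   # caller compares this against the positions of selectable items
        return None
```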

5.1.1.2 Algorithm for misaligned camera-projector

In this setting, the camera and the projector are misaligned, which means that the camera’s interaction space and the projection space are different. The technique used in this setting involves a gesture recognition algorithm. An indirect selection technique would require implementing a cursor to determine the position of the hand (or foot) relative to the projection. A set of many different gestures could be implemented in the same manner; however, we only implemented two gestures as a proof of concept that these can be used for misaligned camera-projector arrangements. The gestures implemented are a waving gesture (with hand or foot) from left to right and the same gesture from right to left.

The hand (or foot) is detected in the same way as for the algorithm presented in the section Algorithm for aligned camera-projector. In order to check the movement of the hand (or foot), the algorithm calculates the contour’s coordinate along the x-axis frame by frame. If the value keeps increasing for a few frames in a row (at a rate of 10 fps), then the application recognises that the user is waving from left to right (or from right to left if the value keeps decreasing). One limitation of this algorithm is that the user has to move their hand or foot out of the camera’s field of view between two actions.
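A compact sketch of this wave recognition is shown below; the number of consecutive frames required to trigger a gesture is an assumption standing in for the “few frames in a row” mentioned above.

```python
# Wave-recognition sketch for the misaligned setting; feed the contour's x-coordinate
# once per frame (about 10 fps in the prototype's description).
CONSECUTIVE_FRAMES = 4    # assumed threshold for "a few frames in a row"

class WaveRecogniser:
    def __init__(self):
        self.prev_x = None
        self.run = 0          # positive run = moving right, negative run = moving left

    def update(self, x: int):
        """Return 'left_to_right', 'right_to_left' or None for this frame."""
        if self.prev_x is not None:
            if x > self.prev_x:
                self.run = self.run + 1 if self.run > 0 else 1
            elif x < self.prev_x:
                self.run = self.run - 1 if self.run < 0 else -1
        self.prev_x = x
        if self.run >= CONSECUTIVE_FRAMES:
            self.run = 0
            return "left_to_right"    # e.g. next slide in the presentation tool
        if self.run <= -CONSECUTIVE_FRAMES:
            self.run = 0
            return "right_to_left"    # e.g. previous slide
        return None
```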

5.1.2 Example applications

We developed two applications to investigate the benefits and trade-offs of the various interaction techniques that we have implemented for steerable mobile projection systems. The first application (Fig. 5a, b) is an Easter Egg Hunt game which aligns projector and camera in order to enable touch with hand or foot on the projection. In the second application, a presentation support tool (Fig. 5c, d), the camera and the projection are intentionally misaligned, so that the camera can detect foot movement when projecting on the wall or hand movement when projecting at any height. In the next sections we describe these applications and then discuss our informal experiences using them to highlight the benefits and drawbacks of these interaction techniques in steerable projection settings.
Fig. 5

Interaction by: a touching the projection; b stepping on it; c waving; d kicking

5.1.2.1 Aligned: an easter egg hunt

We have implemented an augmented version of the traditional Easter Egg Hunt game using the mobile projection system, in which virtual clues lead to actual chocolate eggs. Each egg has a location clue which is given to the participant on the projected image; moreover, the projection beam itself gives another clue by displaying at the height at which the egg is hidden. On each egg there is a picture that the participant needs to select on the projection in order to access the next clue. When the game starts, the rules are explained and the user is informed that both hands and feet can be used to interact with the projected image. The idea is to use the hand when the projection falls on a nearby flat surface such as a table or wall, and the foot for a floor projection. When the user selects the correct image, the projector steers to the next clue and changes its content.

5.1.2.2 Misaligned: a presentation support tool

We have developed a second application in which the user navigates through a series of pictures or slides and looks at the next or previous item by moving their hand or foot in front of the camera. This interface can be used for changing slides during a presentation or for browsing photos together in a group. For this application, the camera is in a fixed position. The user’s hand or foot movement is used to provide input by waving from left to right (forward navigation) or from right to left (backward navigation) in front of the camera.

5.1.2.3 Findings

In the case where the camera and projector are aligned, touch interaction with the ‘spare’ hand not holding the device requires the user to get close enough to the projection to be able to touch it. This process was sometimes difficult since the projection reduces in size as the throw is reduced to arm’s length; in some cases, the user could not even get close enough to reach the projection, for example when objects were obstructing the way. Thus, in our Easter Egg Hunt application, with the camera and projector aligned, hand-based interaction was the least easy to use. Foot-based interaction on the other hand was very easy to use, because the throw distance is significantly greater and therefore makes it simple to adjust the projection and foot into a comfortable juxtaposition.

When the camera and projector are misaligned, as in the presentation application, the movements of the hand and foot allow discrete interaction, opening up many possibilities. The users could intuitively navigate back and forth through pictures by sliding their hands or feet from left to right or from right to left. Although we intended the design to support individual interaction, the device could be held in such a way as to project in one direction and provide a new interaction space that allows someone else to move the slides for the presentation, even if they were not holding the device.

Our technique that recognises the shape of the hand or foot worked well, and dwelling is an intuitive interaction technique to use. In order to provide predictable interactivity, the dwell time needs to be short enough that the user has the patience to hold their gesture in place. When the camera and projector are misaligned, it can be difficult to select particular objects without any feedback that conveys the camera-projector mapping, suggesting the need for a cursor or pointer on the projection that provides a reference point.

6 Conclusions and future work

We examined the idea of steerable projections as a way of overcoming the alignment problems between projected displays and interactions as well as the ‘traditional’ displays and interactions provided by a mobile handheld device. We demonstrated that participants preferred different projection angles for different tasks and described their initial preferences for orienting the screen and the projection at different angles with respect to one another. Out of three possible angles, the overall preferred projection-screen angle was 30°, and not the 0° currently preferred by handheld projector manufacturers. The 0° angle was completely unused in a mobile task where alternative steerable options were provided, which suggests that existing projector phone configurations are poorly suited to continuous mobility. All our participants chose to change angle to accomplish tasks and all agreed that they would use different angles depending on the context or the application that they were using. We also found evidence that screen-based interaction techniques were not optimal for handheld projections, and went on to implement and demonstrate a number of interaction techniques based on the alignment or misalignment between the projection and the phone. Our initial experiences suggest that these interaction techniques need to adapt to different situations and exploit opportunities such as whether the projector and the camera are aligned. Although hand-based touch interaction seems fairly easy and intuitive, it does not seem optimal for interactive surfaces created by wearable or handheld projectors. Foot tracking, however, seems to be a very promising interaction technique for steerable mobile projection. Both techniques can also be used as secretive gestures in the case where the projector and camera are misaligned.

This work opens up broad new avenues for research into both personal projection and mobile device functionality. Personal handsets incorporate an increasing array of input and output technologies. Each new capability introduces additional challenges to fit into the device ecology such that existing hardware and the corresponding interactive capabilities are not disrupted. The relative placement of displays, cameras, sensors and controls predetermines particular uses of the device by imposing how capabilities can be coupled and combined. We expect that additional new interaction techniques combining digital and physical steering will be required to suit these emerging capabilities. In future work, we expect to introduce increased automated functionality that steers handheld projectors and cameras according to additional inputs such as detecting and using optimal projection spaces, compensating for movement jitter or keystoning, and automatically aligning projections from multiple handheld devices.

Footnotes

  1. After the study described in the section Study: projection-device orientation, the Samsung Omnia HD was replaced by a Nokia N900, since the Maemo OS offers more flexibility to control the displays.


Acknowledgments

This research was supported by the EPSRC and Mobile VCE through the Core 5 User Interactions programme, grant number EP/G058334/1. The authors would like to thank their colleagues at the Bristol Interaction and Graphics lab for their help and support.

References

  1. Cao X, Forlines C, Balakrishnan R (2007) Multi-user interaction using handheld projectors. Paper presented at the 20th annual ACM symposium on user interface software and technology, Newport, Rhode Island, USA
  2. Murph D (2010) DigiLife I-One e-reader smuggles along integrated projector, gets white glove treatment at Computex. http://www.engadget.com/2010/06/04/digilife-i-one-e-reader-smuggles-along-integrated-projector-get/
  3. WowWee (2010) Cinemin Swivel pico projector. http://www.wowwee.com/en/cinemin
  4. Hang A, Rukzio E, Greaves A (2008) Projector phone: a study of using mobile phones with integrated projector for interaction with maps. Paper presented at the 10th international conference on human computer interaction with mobile devices and services, Amsterdam, The Netherlands
  5. Hinckley K, Dixon M, Sarin R, Guimbretiere F, Balakrishnan R (2009) Codex: a dual screen tablet computer. Paper presented at the 27th international conference on human factors in computing systems, Boston, MA, USA
  6. Light Blue Optics—Light Touch pico projector (2010) http://lightblueoptics.com/products/light-touch/. Accessed 27 Jan 2011
  7. Harrison C, Tan D, Morris D (2010) Skinput: appropriating the body as an input surface. In: Proceedings of CHI '10, the 28th international conference on human factors in computing systems, Atlanta, Georgia, USA, April 10–15, 2010. ACM, New York, NY, USA, pp 453–462. doi:10.1145/1753326.1753394
  8. Mistry P, Maes P (2009) SixthSense: a wearable gestural interface. Paper presented at ACM SIGGRAPH ASIA 2009 sketches, Yokohama, Japan
  9. S-Vision mobile projector phone prototype (2010) http://felixrunde.de/portfolio/mobiltelefon/. Accessed 20 Mar 2010
  10. Acrossair (2009) Nearest Tube iPhone application. http://www.acrossair.com/apps_nearesttube.htm
  11. Schöning J, Rohs M, Kratz S, Löchtefeld M, Krüger A (2009) Map torchlight: a mobile augmented reality camera projector unit. Paper presented at the 27th international conference extended abstracts on human factors in computing systems, Boston, MA, USA
  12. Ashdown M, Sato Y (2005) Steerable projector calibration. Paper presented at the IEEE computer society conference on computer vision and pattern recognition (CVPR'05) workshops
  13. Cauchard JR, Fraser M, Subramanian S (2010) Offsetting displays on mobile projector phones. Paper presented at Ubiprojection 2010, workshop on personal projection at Pervasive 2010, Helsinki, Finland, 17 May 2010
  14. Pinhanez CS (2001) The everywhere displays projector: a device to create ubiquitous graphical interfaces. Paper presented at the 3rd international conference on ubiquitous computing, Atlanta, Georgia, USA
  15. Montero CS, Alexander J, Marshall M, Subramanian S (2010) Would you do that?—understanding social acceptance of gestural interfaces. In: Proceedings of MobileHCI 2010, Lisboa, Portugal, 7–10 September 2010
  16. McFarlane DC, Wilder SM (2009) Interactive dirt: increasing mobile work performance with a wearable projector-camera system. Paper presented at the 11th international conference on ubiquitous computing, Orlando, Florida, USA
  17. Wilson AD (2006) Robust computer vision-based detection of pinching for one and two-handed gesture input. Paper presented at UIST '06, the 19th annual ACM symposium on user interface software and technology, Montreux, Switzerland
  18. Benko H, Wilson A (2008) DepthTouch: using depth-sensing camera to enable freehand interactions on and above the interactive surface. In: Proceedings of the IEEE workshop on tabletops and interactive surfaces '08, Amsterdam, The Netherlands, 1–3 October 2008
  19. Manresa C, Varona J, Mas R, Perales F (2005) Hand tracking and gesture recognition for human-computer interaction. Electron Lett Comput Vis Image Anal 5(3):96–104
  20. Tsang PWM, Tsang WH (1996) Edge detection on object color. In: Proceedings of the international conference on image processing 1996, 16–19 Sept 1996, pp 1049–1052

Copyright information

© Springer-Verlag London Limited 2011

Authors and Affiliations

Jessica R. Cauchard, Mike Fraser, Teng Han and Sriram Subramanian

Bristol Interaction and Graphics, University of Bristol, Bristol, UK
