UbiBeam: Exploring the Interaction Space for Home Deployed Projector-Camera Systems

  • Jan Gugenheimer
  • Pascal Knierim
  • Christian Winkler
  • Julian Seifert
  • Enrico Rukzio
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9298)


Until now, research on projector-camera systems has concentrated mainly on user interaction within lab environments. As a result, there are very limited insights into how such systems could be used in everyday life. Our aim was therefore to investigate requirements and use cases of home deployed projector-camera systems. To this end, we conducted an in-situ user study involving 22 diverse households. Using a grounded theory approach, we identified four categories: placement, projection surface, interaction modality and content/use cases. Based on the analysis of our results, we created UbiBeam, a projector-camera system designed for domestic use. The system offers several features, including automatic focus adjustment and depth sensing, which enable ordinary surfaces to be transformed into touch-sensitive information displays. We developed UbiBeam as an open-source platform and provide construction plans, 3D models and source code to the community. We encourage researchers to use it as a research platform and conduct more field studies on projector-camera systems.



1 Introduction

Mark Weiser’s vision [24] of ubiquitous information being constantly available to users is not fully achievable with current technology. Although smartphones, tablets and public displays are a step in this direction, they still lack the omnipresence and the ability to fully blend into the user’s environment.
Fig. 1.

The UbiBeam system in combination with the envisioned use cases for a home deployable projector-camera system

Current research achieves this ubiquity by simulating omnipresent screens using projector-camera systems (e.g. [11, 26, 28, 29, 30]). Most of these projects provide valuable insights on either the interaction with the projection or on technical implementations to create more sophisticated technology. These two aspects are widely researched in the field of projector-camera systems, whereas aspects such as real-life use-cases, domestic deployments and in-situ evaluations are rare. However, projects such as IllumiRoom [14] or Lumo [1] show that the insights of home deployment and home use can be of great value for designers of projector-camera systems.

In this paper, we explore the design space of home deployed projector-camera systems by conducting an in-situ study in the homes of 22 people. Based on our insights, we built UbiBeam (Fig. 1), a small and portable projector-camera system for home deployment. We envision a future where such devices will be sold in hardware stores. They could be available in different form factors, either as a replacement for light bulbs or as a simple small box which can be placed in several ways inside users’ environments so as to blend into the household (Fig. 1). The design of these devices will focus not only on interaction with the content but also on aspects such as deployment and portability. This work is a first step towards a greater understanding of developing home deployed projector-camera systems for end-users, as it provides system and design requirements derived from the results of a qualitative in-situ user study.

The main contribution of this paper is a qualitative in-situ study of domestically deployable projector-camera systems, conducted by visiting 22 households with the aim of identifying design opportunities and use cases. Four main categories (Projector-Camera Placement, Projection Surface, Interaction Modality, Projected Content/Use Case) were identified. Based on these insights, UbiBeam, a steerable and stand-alone projector-camera system for domestic use, was designed and implemented to enable further research through deployment and the collection of real usage data.

The rest of this paper is organized as follows. First, we present related work and prior research on projector-camera systems. We then report and discuss the findings of our exploratory field study. Subsequently, we illustrate how these findings can be implemented in a projector-camera system and present our prototype UbiBeam. Finally, we discuss our results and show future research directions.

2 Related Work

Previous related work can be categorized as follows: mobile projector-camera systems and stationary projector-camera systems. In addition, we discuss work on projector-camera systems that incorporates field research approaches. General trends in pico-projectors and mobile projectors have been discussed in several overview papers, e.g. [7, 12, 23], which showed a promising future and an increasing availability of different devices and form factors.

2.1 Stationary Projector-Camera Systems

In an earlier work, Pinhanez [19] presented the Everywhere Display Projector which can project onto several predefined surfaces. A rotating mirror placed over the projector enables a movable projection cone. In this work, challenges such as distortion, focus and obstruction were introduced and addressed.

Raskar et al. presented several concepts on geometry-aware projector-camera systems for non-planar surfaces [20, 22]. In a later work, Raskar et al. presented their vision of the Office of the Future, using projector-camera systems to create a spatially immersive display [21]. In more recent work, Wilson et al. focused on touch interaction with projected interfaces using a depth camera [27, 28]. Furthermore, Wilson et al. introduced Beamatron, a steerable projector-camera system which uses computational techniques to capture the whole geometry of the room and superimpose graphics all over the room in real time [26]. Such steerable projector-camera systems were also researched by Butz et al. [3] and Cauchard et al. [5], who showed different implementations for a mobile and a stationary steerable projector system, respectively.

Linder and Maes introduced LuminAR, a projected augmented reality interface which dynamically augments objects and surfaces with digital information [16]. LuminAR consists of a pico-projector, a camera and a wireless computer in a compact form factor, embedded in a design similar to a desk lamp. However, LuminAR is implemented as a stationary lamp and does not support portability or ad-hoc mounting. MirageTable by Benko et al. [2] is a setup that uses a projector-camera system above a curved display to merge real and virtual worlds. Its depth camera tracks the user’s eyes and enables perspective stereoscopic 3D visualization for a single user.

This representative selection of prior work illustrates the lab-driven research currently conducted with projector-camera systems. In this work, we look beyond the scope of instrumented lab environments and investigate possible applications of projector-camera systems in-situ, in potential users’ homes.

2.2 Mobile Projector-Camera Systems

In contrast to stationary setups, mobile projector-camera systems face a new variety of challenges due to their mobility. An early work by Karitsuka and Sato [15] presented the concept of a wearable projector which creates a permanent interactive display for the user on the go. Mistry and Maes advanced this concept with SixthSense, a portable system consisting of a projector, a camera and colored markers, which adds gestural interaction and context-awareness [17]. SixthSense enabled gestural interaction with the physical world by augmenting different objects. In OmniTouch [11], Harrison et al. mounted a projector-camera system onto the shoulder and thereby enabled multi-touch on planar surfaces. Molyneaux et al. [18] extended this by enabling touch on arbitrarily shaped surfaces and also offering geometrically correct projections. Winkler et al. [29] presented AMP-D, a setup similar to OmniTouch; however, AMP-D also includes the floor in front of the user as a projection display. Hence, users can transition between using a private display (hand) and a public display (floor). Some research was conducted on mobile projectors which are not mounted but held by the user. Willis et al. [25] and Cao et al. [4] presented solutions for what interaction between several mobile devices could look like.

Our work focuses on form factors which are in-between stationary and mobile projector-camera systems. We have envisioned a device which is portable but which can also be easily mounted inside a room to create a stationary interaction space.

2.3 Field Research for Projector-Camera Systems

The previously discussed work focused on either the implementation of projector-camera systems or the evaluation of individual interaction techniques. To our knowledge, no exploratory study on the deployment and use of projector-camera systems in domestic environments has been conducted. With LightBeam [13], Huber et al. conducted a qualitative study on interaction techniques using a pico-projector; however, the participants were all HCI researchers recruited to help explore the interaction techniques. Hardy et al. [9] conducted a self-experiment by using an interactive desk for one year and reported their experience. This provides valuable insight into the long-term use of a projector-camera system in the workspace, but a larger sample of data is still missing. A first step towards the deployment of projector-camera systems was made by Xiao et al. [30] with WorldKit, which, similar to UbiDisplays [10], offers a simple framework to quickly create interactive surfaces with a projector-camera system. However, the presented setup is still big and bulky and would be hard to deploy in several participants’ homes. The main difference of our work is the approach of visiting users in their home environment and exploring home deployment and home usage of projector-camera systems. We base our work on the implementation and insights of Gugenheimer et al. [8]. This work extends the previous one by giving a deeper understanding of the design space for home deployed projector-camera systems. The study procedure and methodology are explained in full detail, and the results are set in contrast to requirements for home deployed projector-camera systems. These requirements were used to build and implement UbiBeam, which shows one example of how to realize them in future projector-camera systems.

3 Design Process

In order to gain a comprehensive understanding of the requirements of home deployed projector-camera systems, we designed and conducted an exploratory field study.

3.1 Study Design

To collect data, we visited 22 different households and conducted semi-structured interviews. We decided to interview participants in their homes for several reasons. First, it created a familiar environment for the participants, which led to a pleasant atmosphere. Furthermore, the participants were aware of the whole arrangement of the rooms and could therefore provide detailed insight into categories such as the placement of the projector-camera system and display spaces. This also allowed us to cover a variety of rooms in which the participants could set up different scenarios. The rooms covered in our study are: the living room, bedroom, bathroom, working room, kitchen and corridor.

We recruited 22 participants (10 female, 12 male) between 22 and 58 years of age (M = 29). The apartments measured between 27 and 104 square meters (M = 71.68) and consisted of 1 to 4 rooms (M = 2.86). By conducting the semi-structured interview for every room, we collected 92 samples. Most participants did not live alone and shared a flat with 1 to 4 other people (M = 2.14). The majority of the participants were students with a technical background. The participants received 8 Currency as a reward.

Before the interview started, the participants were introduced to the study and received a brief introduction to ubiquitous computing and everywhere displays. The participants were all equipped with a mock-up (Fig. 2) consisting of an APITEK Pocket Cinema V60 projector placed inside a cardboard box mounted onto a flexible camera stand. This allowed participants to attach the mock-up to almost any surface. The cardboard box provided illustrations of non-functional input and output possibilities such as a touchpad, several buttons, a display and a depth camera. The purpose of this low-fidelity mock-up was to inspire creativity in the participants’ handling of the device.
Fig. 2.

Mock-up consisting of a projector inside a cardboard box mounted on a flexible camera stand

The interviews were structured into three parts. First, each room of the participant’s home was inspected. Each participant was asked how they would use the projector-camera system there and was asked to build a potential set-up using the mock-up. Several pre-designed non-interactive widget examples were stored on the projector (watching a movie, social media, weather, cooking etc.). To avoid biasing the participants, a widget was only revealed when a participant mentioned it in their individual set-up. The second part consisted of a questionnaire about the general requirements for a projector-camera system. The last part was a demographic questionnaire including data on the apartment. The whole process took around one hour. Even though this study was not conducted with a real projector-camera system, we argue that this resulted in more creative solutions and responses, since an actual system would have limited the participants due to its technical implementation. For this reason we intentionally used a low-fidelity mock-up.

A grounded theory approach was chosen for the analysis of the data [6]. To gather the aforementioned data, we conducted semi-structured interviews with the participants and took notes, photos and video recordings of many sessions. Two of the authors coded the data using an open, axial and selective coding approach. The initial research question was: “How would people use a small and easily deployable projector-camera system in their daily lives? When and how would they interact with such a device, and how would they integrate it into their home?”.

During the process we discovered that the four main categories the participants were focusing on when they handled the projector-camera system were:
  • Projector-Camera System placement. Where was the projector-camera system mounted inside the room? Was it placed so the participant could reach the projection or the projector-camera system? How was the projector-camera system placed: standing on a flat surface or mounted around an object?

  • Projection surface. What projection surfaces did the participant choose? Was the projection horizontal (table) or vertical (wall), and what was the orientation of the projector-camera system relative to the projection surface?

  • Interaction modalities. What modalities were used for the input and why?

  • Projected Content/Use Cases. What content did the participant want to project for each specific room and which use cases were important to them?

The following subsections present the findings of our study. The numbers in brackets indicate the number of participants who performed a specific action in a particular room. We start with the use cases, since the other categories were mostly influenced by the content created. The placement and the projection surface in turn both highly influenced the interaction modalities.

3.2 Content and Use Cases

The exact use cases varied since the participants each referred to different rooms. In spite of this variation, two larger concepts could be found in the set-ups created by the participants: information widgets and entertainment widgets.

Information Widgets. We consider information widgets to be use cases in which the participant almost exclusively wants to aggregate data. This use case was mostly mentioned in the kitchen and the corridor. These places can mostly be seen as utilitarian rooms where someone would not spend a significant amount of their free time, and they usually do not offer seating for longer periods. The most common use case in the kitchen was to project recipe information while cooking (16). Further suggestions were shopping lists and notes. In the corridor the most frequent use cases were a bus schedule (5), a calendar and a weather forecast. The main suggestion for the working room was an extension of the monitor (3). Overall, most of these use cases served as an aid in finishing a specific task characteristic of the room.

Entertainment Widgets. Entertainment use cases were mostly created in the living room, bedroom and bathroom. In the living room, participants mostly created set-ups for games (6) and watching television (6). These use cases mostly took advantage of the fact that big spaces such as walls or tables were available. Having such a device, P20 also re-thought the arrangement of his living room since “the focus does not have to be on the television anymore”. In the bathroom, several participants mentioned the boredom they experience during tasks such as taking a shower (P7, P19), sitting on the toilet (P20) or cleaning (P19). They therefore suggested an accompanying activity such as television (6) or reading the news (5). Several participants (6) had privacy concerns about using a device with a potential camera inside their bathroom and bedroom: “I don’t want a camera where I undress myself” (P10). For the bedroom, participants created use cases similar to those in the living room; the most cited was television (7). Some participants (P22, P6, P5) suggested creating an ambient device which would project specific colors and “moods” suited to music. The use of the projector-camera system in these rooms differed greatly from that in the kitchen, corridor and working room. Here the focus was on enhancing the free time spent in these rooms and making the stay more enjoyable.

3.3 Placement of the Projector-Camera System

To avoid the placement of the projector-camera system being biased by the limitation of the projection size (as we expect projectors to provide much larger projections in the near future), we asked the participants to place the system anywhere, irrespective of the picture size it created. Next, each participant showed the size of the projection surface they expected from the device.

Similar to the use cases, the placement can be divided into two higher concepts: placing the device in reach and out of reach.

In reach. When placing the device in the bedroom, bathroom or kitchen, the participants always placed it within reaching distance. In each case the device was mounted either at waist or shoulder height. In the bedroom, participants mostly attached the device to the bed (5). P20 took the device in their hand when lying in bed to have “full control” over it. One reason participants picked the bed as a location could be that resting and sleeping are the primary purpose of this place; furthermore, the whole room is in sight from the bed. In the kitchen the device was mostly attached to the wall (7) or placed on a cupboard (5). The kitchen and bathroom were not considered locations where a device would be permanently mounted. Therefore, participants attached it to locations from which they could effortlessly remove it and carry it to a different room. In the bathroom the most preferred location was again the wall (7).

Out of reach. In the living room, working room and corridor, participants preferred mounting above body height. These were also rooms where participants could imagine a permanent mounting. For this reason the device was placed so that it could project onto most of the surfaces and was “not in the way” (P19). In the living room the preferred locations were the ceiling (8) and the wall (4). The same applied to the corridor, where 8 participants tried to mount it to the ceiling and 4 to the wall. The most used location in the working room was the ceiling (3). In these rooms the mounting of the device was not considered an important aspect; it was almost always assumed that “the device is somewhere where it can reach every surface” (P22). The focus was instead on the projection surfaces.
Fig. 3.

Users building and explaining their setups (Mock-Up highlighted for better viewability).

3.4 Orientation and Type of Surface

To gain an impression of where the surfaces should be and how they should look, participants removed the projector-camera system from its mounting position and placed it somewhere close by.

The participants almost exclusively chose flat and planar surfaces for each interface. At the beginning of the study, each participant was informed that it is technically possible to project onto non-planar surfaces without distortion. In spite of this, only one participant expressed a desire to project onto a couch. Although the remaining participants were aware that they could project onto non-planar surfaces without distortion, they explained that they would still prefer flat and planar surfaces: “I prefer flat surfaces even if the projection can be undistorted otherwise” (P1). Therefore, the only meaningful classification of projection surfaces is a distinction between horizontal (e.g. tables) and vertical (e.g. walls) display orientation.

In the kitchen, bedroom, working room and living room, horizontal and vertical surfaces were used almost equally. In the corridor and bathroom, however, vertical surfaces were used more often as a result of the lack of sufficient horizontal surfaces. The most used surface in the bathroom was the mirror (4); however, since projection did not work on the mirror, participants projected right next to it. In the corridor, almost only vertical surfaces were used, either the door (5) or the wall (2). As already mentioned in the section on use cases, the corridor was used to aid specific tasks that participants commonly undertake there. In this case the situation would be leaving the apartment, and when leaving the apartment, seeing the bus schedule and/or weather forecast on the door seemed natural to participants. P10 was the only participant who used the floor as a projection surface, to create an interactive rug.

In the living room a popular surface was the table (7) in the center of the room. Participants created an interactive tabletop (Fig. 3) and wanted to “play different kinds of board games” (P20). Furthermore, participants used the wall to create a large television. In the kitchen, the cabinet door (5), the table (5) and the space between stove and kitchen hood (5) were used the most. Participants also wanted the interface either to “follow” them (P21) or to appear as several interfaces on different surfaces. In the bedroom the most used surface was the wall; only two participants used the ceiling (P5, P20). In the working room, participants mostly used the desk (4) or the wall (3).

From these results we conclude that the projection surface was mostly chosen to support the use case and was influenced by the room.

3.5 Interaction Modalities

The main interaction modalities participants requested were speech recognition, touch and a remote control. Other techniques such as gesture recognition, shadow interaction or a laser pointer were mentioned rarely. The interaction modality was highly influenced by the room and its primary task.

Given that cooking is regarded as one of the primary tasks in the kitchen, displaying a recipe would support that primary task. Many participants considered using touch as an input method, since the projection surface they had chosen was always within their reach. However, most participants opted for speech input (11), since cooking occupies both hands or leaves them dirty. Nevertheless, some participants did choose touch (6) or a remote control (5) as an input modality. The same reasoning was given in the bathroom: “I don’t want to touch having wet hands” (P19). Therefore, speech (3) was again the preferred input modality. One participant (P21) even mentioned that they “do not want any input” and only wanted to consume content.

In the living room, where the placement was out of reach and the focus was on entertainment use cases, participants preferred a remote control (9) or touch (5) to interact with the surface: “I already use a remote control here so I want to control it with the same one” (P19). Often, however, a mobile phone or a tablet was named as the remote control. The location of the surface strongly influenced the interaction: if the surface was a table, touch was preferred; if it was a wall, a remote control was used.

Inside the bedroom, working room and corridor, remote control and touch were named almost equally often. As in the living room, the choice depended on the location of the surface: when the surface was horizontal and close by, participants preferred touch, but when a vertical surface was out of reach, they preferred a remote control. One participant explained that his choices were mostly driven by convenience: “You see, I am lazy and I don’t want to leave my bed to interact with something” (P22).

3.6 Derived Requirements for Prototype

After analyzing the data from the semi-structured interviews, we combined the results with those of the questionnaires and derived several requirements for our prototype of a domestically deployed projector-camera system. Participants always wanted more than one fixed surface in every room. Considering the out-of-reach placement, we concluded that the projector-camera system must be steerable so it can autonomously create different interfaces. The form factor was mostly dictated by the projector used. We analyzed the participants’ set-ups and found that the distance between device and surface was between 40 cm and 350 cm (Mdn = 200 cm). The projected surface sizes varied from the size of a cupboard door to a whole wall. Therefore, the projector must be an ultra-compact DLP to provide high brightness at the required distance while retaining a small form factor. Since participants wanted to carry the device into several rooms for different use cases, the mount must offer quick and easy deployment. A last issue which came up several times was the focus of the projector: participants did not want to adjust the focus every time they deploy the device in a new location. Therefore, an autofocus must be realized.

After deriving the key requirements, we experimented with several form factors (Fig. 4) which could fulfill the needs of the participants. First, we envisioned a small cube (a) which could be placed on flat surfaces. The two other versions were mounted on a flexible arm (b) or, similar to [16], inside a lamp (c). We finally decided on the rectangular design (b), hanging from an arm with a clamp which can be mounted on several surfaces. This design provides more freedom of movement than the lamp approach while still allowing a motorized rotation, unlike the cube.
Fig. 4.

Different form factors which were considered for the projector-camera system

4 Implementation

4.1 Hardware Architecture

UbiBeam uses the ODROID-XU, a powerful eight-core single-board computer (SBC), as its processing unit. A WiFi dongle and a wireless keyboard are also connected to the SBC. The Carmine 1.08 from PrimeSense is used as the depth camera. It offers advantages for our particular use cases, such as a higher resolution in comparison to smaller Time-of-Flight cameras, and it is well supported by the OpenNI framework. As for the projector, we opted for the ultra-compact LED projector ML550 by OPTOMA (a 550-lumen DLP projector combined with an LED light source). It measures only 105 mm x 106 mm x 39 mm and weighs 380 g. The projection distance is between 0.55 m and 3.23 m. For the pan and tilt of the system, two HS-785HB servo motors by HiTEC are used. These quarter-scale servos offer a torque of 132 Ncm. To provide an autofocus, we attached a SPMSH2040L linear servo to the focusing unit of the projector, similar to [29]. An Arduino Pro Mini is used to control the actuators.

Focus Adjustment. As UbiBeam is required to support quick switching between display positions at various distances, automatic focus adjustment is essential. Available handheld projectors only provide manual focus adjustment, as this is sufficient in typical usage scenarios. The only exception, laser beam steering projectors, cannot provide the required luminosity. The focus of the ML550 is manually adjusted via a small lever. To realize automatic adjustment, the movement of this lever is controlled with the SPMSH2040L servo, which is glued to a custom servo mount. To determine the required servo position for a given distance, we conducted a calibration task which yielded a formula mapping a particular distance to a PWM signal with a maximum error of less than 40 \(\mu s\).
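The paper does not give the calibration formula itself, but the procedure can be sketched as a simple curve fit over measured (distance, pulse width) pairs. The sample values and the quadratic model below are illustrative assumptions, not the authors' actual calibration data, and the sketch is in Python for brevity although the system itself is implemented in C++:

```python
import numpy as np

# Hypothetical calibration samples (projection distance in m -> focus-servo
# pulse width in µs); the paper's real measurements are not published.
SAMPLES_M = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
SAMPLES_PWM = np.array([1850, 1720, 1610, 1520, 1450, 1400])

# Fit a low-order polynomial mapping projection distance to pulse width.
coeffs = np.polyfit(SAMPLES_M, SAMPLES_PWM, deg=2)

def focus_pwm(distance_m):
    """Pulse width (µs) to send to the focus servo for a given distance."""
    return int(round(np.polyval(coeffs, distance_m)))
```

At runtime the distance to the projection surface comes from the depth camera, so `focus_pwm` can be re-evaluated whenever the device is moved, matching the "< 40 µs maximum error" target reported for the calibrated formula.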

Pan-Tilt Unit. The pan-tilt unit moves the depth camera, the projector and the SBC, allowing UbiBeam to rotate along two axes and place the interactive projection anywhere inside this space. The tilt and pan servos are each mounted in a ServoBlock, which isolates the lateral load from the servo and increases the load-bearing capacity. The pan servo is mounted overhead; the tilt servo is rotated by 90 degrees in a vertical plane.

The final hardware construction measures 10.5 cm x 12.2 cm x 22.5 cm including the pan-tilt unit and weighs 996 g (Fig. 1). To easily mount the device on a variety of surfaces, we attached it to a Manfrotto Magic Arm. The hardware components can be bought and assembled for less than 1000 USD.
Fig. 5.

Touch interaction using the UbiBeam system (UbiBeam is highlighted for better viewability).

4.2 Software Implementation

Building a stand-alone projector-camera system requires lightweight and resource-saving software. We decided to use Ubuntu 12.04 on the ODROID since it offered the best compatibility with our hardware and software. For reading RGB and depth images, OpenNI version 2.2 for ARM is used. Image processing is done with OpenCV version 2.4.6. Visualisation of widgets is accomplished with Qt (version 4.8.2), a library for UI development using C++ and QML.

Based on the interaction results of our qualitative study (controlling the device with a remote control or touch), we established interaction with UbiBeam following a simple concept: after starting our software, the projection turns into a touch-sensitive interaction space. The user then creates widgets in this space (e.g. digital image frame, calendar etc.) and interacts with them via touch (Fig. 5). The orientation of the device itself is controlled with an Android application sending pan and tilt commands. After moving the device to a new space, the autofocus and touch detection recalibrate automatically and create a new interaction space.

Touch Algorithm. Touch detection was implemented based on an algorithm demonstrated in [27]. One of its key characteristics is that touch can be detected on any physical object without user-driven calibration tasks. The touch detection can be divided into four parts. First, the scenery is analyzed and a spatial image, the ground truth, is generated by temporal filtering of 30 single depth images. While touch detection is running, each retrieved depth image is filtered for noise and used to calculate a binary contact image. The contact image is filtered, and blob detection identifies contact points. Finally, contact points are tracked over time and classified into different touch events (touch down, long touch, move, touch release), which trigger the interactions intended by the user. To enable interaction, the detected touch events must be mapped from the camera coordinate system to the widget: a touch point \(P_{(x,y)}\) detected in the camera coordinate system is transformed several times until it is located in the same coordinate system as the widget. This pipeline allows touch events to be detected on any kind of non-planar surface.
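The stages above can be illustrated with a compact sketch. This is not the authors' C++/OpenCV implementation; it is a hypothetical Python/NumPy rendition in which the contact band and minimum blob size are assumed values, and a simple flood fill stands in for OpenCV's blob detection:

```python
import numpy as np
from collections import deque

# Assumed parameters: depth band (mm) counted as "contact" above the
# surface, and the minimum blob size accepted as a finger.
LOW_MM, HIGH_MM = 3, 15
MIN_BLOB_PX = 4

def ground_truth(frames):
    """Spatial ground truth: per-pixel temporal median over ~30 depth frames."""
    return np.median(np.stack(frames), axis=0)

def contact_image(depth, truth):
    """Binary contact image: pixels hovering just above the known surface."""
    diff = truth - depth  # positive values are closer to the camera
    return (diff >= LOW_MM) & (diff <= HIGH_MM)

def touch_points(mask):
    """4-connected blob detection; returns centroids of large-enough blobs."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    points = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                blob, queue = [], deque([(y, x)])
                seen[y, x] = True
                while queue:  # flood fill one connected component
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(blob) >= MIN_BLOB_PX:
                    ys, xs = zip(*blob)
                    points.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return points
```

Tracking the returned centroids over successive frames then yields the touch down/move/release events described above.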

Picture Distortion. To project distortion-free content onto surfaces that are not perpendicular to the device, the projected content has to be pre-warped. Following the concepts of Yoo et al. [31], a plane detection is carried out on the depth map to find potential projection surfaces. Four points, situated on one of the detected planes and spanning a rectangle of the desired size, are then specified. Finally, the perspective projection that transforms the widget to these points is calculated and applied to render a corrected representation of the widget.
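The perspective transform from the widget's corners to the four target points is a planar homography, which can be solved from the four point correspondences. A minimal sketch of that computation (the direct linear transform, here with NumPy rather than the authors' C++ code):

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 H with H @ [x, y, 1]^T ~ [u, v, 1]^T
    from four point pairs, fixing H[2,2] = 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), similarly for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    """Apply the homography to a 2D point (homogeneous divide)."""
    v = H @ np.array([p[0], p[1], 1.0])
    return (v[0] / v[2], v[1] / v[2])
```

Rendering the widget through this transform makes it appear as an undistorted rectangle on the detected plane; in practice one would use an equivalent library routine such as OpenCV's `getPerspectiveTransform`.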

Developing Widgets. The developed framework enables dynamic loading of widgets. All the complexity of the spatially aware projection, dynamic touch detection and movement of the projector-camera system is encapsulated and hidden from the widget. This allows for straightforward widget development. Two options are supported for creating a new widget: developers can either implement a provided C++ interface, or implement widgets using the Qt User Interface Creation Kit (Qt Quick), which uses QML to describe modern-looking, fluid UIs in a declarative manner.
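The shape of such a plugin-style widget interface can be sketched as follows. This is a hypothetical Python analogue of the framework's C++ interface, written only to illustrate the idea of dynamically loadable widgets; the names (`WidgetBase`, `register_widget`, `ImageFrame`) do not come from the UbiBeam source.

```python
class WidgetBase:
    """Base interface every widget implements; the framework calls render()
    each frame and on_touch() for the touch events it detects."""
    def render(self):
        raise NotImplementedError
    def on_touch(self, event):
        raise NotImplementedError

_REGISTRY = {}

def register_widget(name):
    """Decorator so widget classes can be looked up and loaded by name."""
    def deco(cls):
        _REGISTRY[name] = cls
        return cls
    return deco

def create_widget(name):
    return _REGISTRY[name]()

@register_widget("image_frame")
class ImageFrame(WidgetBase):
    """Toy digital image frame: a touch advances to the next photo."""
    def __init__(self):
        self.index = 0
    def render(self):
        return f"image #{self.index}"
    def on_touch(self, event):
        if event == "touch_down":
            self.index += 1
```

Because projection warping and touch mapping happen outside this interface, a widget author only deals with rendering and high-level touch events, which mirrors the encapsulation described above.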

5 Discussion

The in-situ user study revealed that participants preferred entertainment widgets for home deployed projector-camera systems. Even though the most frequently mentioned use case was a purely utilitarian one, the qualitative feedback showed that participants valued the entertainment benefit more than the benefit a projector-camera system would bring for a certain task. The placement category indicated that each room has slightly different requirements and that a final system should support mobility as well as spontaneous mounting to a free space. The interaction concept should therefore support in-reach (touch) and out-of-reach (remote control) functionality. Even though voice commands were often mentioned as an interaction modality, they were always combined with very specific use cases, complementing rather than replacing touch interaction. The preference for flat surfaces and for creating tabletop-like spaces supported the choice of touch as an interaction modality. The implementation of UbiBeam shows one feasible example solution for a home deployed projector-camera system. We hope that researchers will use our description to re-implement a similar device and use it as a research platform to gain further insights into home deployed projector-camera systems.

6 Conclusion

With UbiBeam we focused on small, deployable projector-camera systems designed for domestic use. Despite a large body of previous work on mobile projected interfaces, the requirements of domestic use have so far been largely neglected. These include portability, ease of deployment, and the selection of projection surfaces, both from an interaction and from an implementation perspective.

In this work we assessed users' requirements for projected interfaces in their own homes and contexts. The presented qualitative study identified the important categories, and their interrelations, that a domestically deployed projector-camera system must address. For example, users differentiated between basic information aggregation supporting a specific task in a room and entertainment enhancing their free time; both aspects impose different requirements on deployment and interaction. We discussed the dimensions of each category and showed how they influence each other. Based on these results, we derived requirements (Steerable, Remote Control Interaction, Touch Input Interaction, Fast Deployment, Auto Focus) for a first prototype and explored different form factors.

The design of the UbiBeam prototype, based on the requirements derived from the study, led to further insights into technical considerations and constraints. We demonstrated a possible implementation that already delivers sufficient performance and accuracy. Still, there is room for improvement; most notably, the size of the system should be reduced further to simplify handling and portability of the device. With projectors constantly gaining brightness and the first miniature depth cameras about to be introduced to the market (e.g. Google's Project Tango), this size reduction seems to come naturally. By releasing the UbiBeam framework as open source1, including more detailed building instructions, a hardware list, source code and 3D-print models for all parts, we allow the community to easily rebuild and advance the presented prototype and its applications. We believe this will help the community to further investigate the domestic deployment of projector-camera systems.

In the future we would like to build several systems and deploy them in households to collect quantitative data and qualitative feedback over a longer period of time. The design of the system already facilitates running such a long-term study, which will provide insights into when, why, and how the system is used.




Acknowledgments. The authors would like to thank all study participants. This work was conducted within the Transregional Collaborative Research Centre SFB/TRR 62 Companion-Technology for Cognitive Technical Systems funded by the German Research Foundation (DFG).


References

  1. Lumo interactive projector. Accessed 18 April 2015
  2. Benko, H., Jota, R., Wilson, A.: MirageTable: freehand interaction on a projected augmented reality tabletop. In: Proceedings of CHI 2012, pp. 199–208. ACM, New York (2012)
  3. Butz, A., Krüger, A.: A generalized peephole metaphor for augmented reality and instrumented environments. In: Workshop on Software Technology for Augmented Reality Systems (2003)
  4. Cao, X., Forlines, C., Balakrishnan, R.: Multi-user interaction using handheld projectors. In: Proceedings of UIST 2007, pp. 43–52. ACM, New York (2007)
  5. Cauchard, J., Fraser, M., Han, T., Subramanian, S.: Steerable projection: exploring alignment in interactive mobile displays. Pers. Ubiquit. Comput. 16(1), 27–37 (2012)
  6. Corbin, J., Strauss, A.: Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Sage, New York (2008)
  7. Dachselt, R., Häkkilä, J., Jones, M., Löchtefeld, M., Rohs, M., Rukzio, E.: Pico projectors: firefly or bright future? Interactions 19(2), 24–29 (2012)
  8. Gugenheimer, J., Knierim, P., Seifert, J., Rukzio, E.: UbiBeam: an interactive projector-camera system for domestic deployment. In: Proceedings of ITS 2014, pp. 305–310. ACM, New York (2014)
  9. Hardy, J.: Reflections: a year spent with an interactive desk. Interactions 19(6), 56–61 (2012)
  10. Hardy, J., Ellis, C., Alexander, J., Davies, N.: Ubi displays: a toolkit for the rapid creation of interactive projected displays. In: International Symposium on Pervasive Displays (2013)
  11. Harrison, C., Benko, H., Wilson, A.D.: OmniTouch: wearable multitouch interaction everywhere. In: Proceedings of UIST 2011, pp. 441–450. ACM, New York (2011)
  12. Huber, J.: A research overview of mobile projected user interfaces. Informatik-Spektrum 37(5), 464–473 (2014)
  13. Huber, J., Steimle, J., Liao, C., Liu, Q., Mühlhäuser, M.: LightBeam: interacting with augmented real-world objects in pico projections. In: Proceedings of MUM 2012, pp. 16:1–16:10. ACM, New York (2012)
  14. Jones, B.R., Benko, H., Ofek, E., Wilson, A.D.: IllumiRoom: peripheral projected illusions for interactive experiences. In: Proceedings of CHI 2013, pp. 869–878. ACM, New York (2013)
  15. Karitsuka, T., Sato, K.: A wearable mixed reality with an on-board projector. In: Proceedings of ISMAR 2003, pp. 321–322. IEEE Computer Society, Washington, DC (2003)
  16. Linder, N., Maes, P.: LuminAR: portable robotic augmented reality interface design and prototype. In: Adjunct Proceedings of UIST 2010, pp. 395–396. ACM, New York (2010)
  17. Mistry, P., Maes, P.: SixthSense: a wearable gestural interface. In: ACM SIGGRAPH ASIA 2009 Sketches, pp. 11:1–11:1. ACM, New York (2009)
  18. Molyneaux, D., Izadi, S., Kim, D., Hilliges, O., Hodges, S., Cao, X., Butler, A., Gellersen, H.: Interactive environment-aware handheld projectors for pervasive computing spaces. In: Kay, J., Lukowicz, P., Tokuda, H., Olivier, P., Krüger, A. (eds.) Pervasive 2012. LNCS, vol. 7319, pp. 197–215. Springer, Heidelberg (2012)
  19. Pinhanez, C.: The everywhere displays projector: a device to create ubiquitous graphical interfaces. In: Abowd, G.D., Brumitt, B., Shafer, S. (eds.) UbiComp 2001. LNCS, vol. 2201, pp. 315–331. Springer, Heidelberg (2001)
  20. Raskar, R., Brown, M.S., Yang, R., Chen, W.-C., Welch, G., Towles, H., Seales, B., Fuchs, H.: Multi-projector displays using camera-based registration. In: Proceedings of VIS 1999, pp. 161–168. IEEE Computer Society Press, Los Alamitos (1999)
  21. Raskar, R., van Baar, J., Beardsley, P., Willwacher, T., Rao, S., Forlines, C.: iLamps: geometrically aware and self-configuring projectors. In: ACM SIGGRAPH 2003 Papers, pp. 809–818. ACM, New York (2003)
  22. Raskar, R., Welch, G., Cutts, M., Lake, A., Stesin, L., Fuchs, H.: The office of the future: a unified approach to image-based modeling and spatially immersive displays. In: Proceedings of SIGGRAPH 1998, pp. 179–188. ACM, New York (1998)
  23. Rukzio, E., Holleis, P., Gellersen, H.: Personal projectors for pervasive computing. IEEE Pervasive Comput. 11(2), 30–37 (2012)
  24. Weiser, M.: The computer for the 21st century. In: Baecker, R.M., Grudin, J., Buxton, W.A.S., Greenberg, S. (eds.) Human-Computer Interaction, pp. 933–940. Morgan Kaufmann, San Francisco (1995)
  25. Willis, K.D., Poupyrev, I., Hudson, S.E., Mahler, M.: SideBySide: ad-hoc multi-user interaction with handheld projectors. In: Proceedings of UIST 2011, pp. 431–440. ACM, New York (2011)
  26. Wilson, A., Benko, H., Izadi, S., Hilliges, O.: Steerable augmented reality with the Beamatron. In: Proceedings of UIST 2012, pp. 413–422. ACM, New York (2012)
  27. Wilson, A.D.: Using a depth camera as a touch sensor. In: Proceedings of ITS 2010, pp. 69–72. ACM, New York (2010)
  28. Wilson, A.D., Benko, H.: Combining multiple depth cameras and projectors for interactions on, above and between surfaces. In: Proceedings of UIST 2010, pp. 273–282. ACM, New York (2010)
  29. Winkler, C., Seifert, J., Dobbelstein, D., Rukzio, E.: Pervasive information through constant personal projection: the ambient mobile pervasive display (AMP-D). In: Proceedings of CHI 2014, pp. 4117–4126. ACM, New York (2014)
  30. Xiao, R., Harrison, C., Hudson, S.E.: WorldKit: rapid and easy creation of ad-hoc interactive applications on everyday surfaces. In: Proceedings of CHI 2013, pp. 879–888. ACM, New York (2013)
  31. Yoo, H.W., Kim, W.H., Park, J.W., Lee, W.H., Chung, M.J.: Real-time plane detection based on depth map from Kinect. In: Proceedings of ISR 2013, pp. 1–4, October 2013

Copyright information

© IFIP International Federation for Information Processing 2015

Authors and Affiliations

  • Jan Gugenheimer (Ulm University, Ulm, Germany)
  • Pascal Knierim (University of Stuttgart, Stuttgart, Germany)
  • Christian Winkler (Ulm University, Ulm, Germany)
  • Julian Seifert (European Patent Office, Munich, Germany)
  • Enrico Rukzio (Ulm University, Ulm, Germany)
