
1 Introduction

Environmental controls in office environments are becoming increasingly automated. Building management systems (BMS) are being used to control lights, blinds, humidity and temperature. While some of these parameters are set dynamically based on localised sensor input, the resulting adjustments can affect large numbers of inhabitants, particularly in open office environments. Generally, centralised systems like BMSs do not account for individual inhabitants’ preferences regarding indoor climate and environmental office conditions.

The long-term aim of our research is to contribute to the design of systems that aid office inhabitants in controlling their localised office environments, as well as support the process of negotiating shared preferences amongst co-located inhabitants. However, effecting change to environmental conditions in shared offices, based on inhabitant preferences, poses a number of significant technical and social integration challenges. It is a ‘wicked problem’, which raises numerous interrelated research questions and will require a sustained programme of research well beyond the scope of the study reported here.

As a first step towards addressing these challenges, this paper focuses on exploring some of the possible personal input, feedback and interaction mechanisms such systems might use. We present the results of the design, implementation and initial evaluation of a system that uses a range of different ambient and tangible input and output modalities to record subjective office comfort feedback and to display sensed data and aggregated group preferences for a range of environmental factors.

Our system consists of three components: (a) a local Sensor Platform situated on the users’ desks that locally measures temperature, humidity, light levels and noise levels; (b) an ambient and tangible interaction device, called MiniOrb, that displays the locally sensed environmental conditions and allows users to select and submit their preference ratings; and (c) a mobile application, MobiOrb, that provides an alternative display of the sensed information and input of user-preferences as precise measurements.

Our research aims to address two pertinent questions. First, how can ambient and tangible interaction mechanisms support office inhabitants in recording and reflecting on their subjective office comfort levels, and second, how do these mechanisms compare to screen-based interactions that enable users to see and set the same information with numerical accuracy?

Additionally, our in situ, group-based and real-time preference gathering approach contributes to the range of methods currently available for the study of indoor climate, which is a field still largely reliant on individual and intermittent questionnaire survey methods for gathering personal indoor climate preference data.

2 Background

2.1 Ubiquitous Computing and Indoor Climate

The field of indoor climate studies emerged in the 1920s to study people’s physiological response to and perception of indoor climate conditions to identify an ideal ‘comfort zone’ for building climate regulation. This research has focussed on measurable physical parameters such as temperature, lighting levels, sound levels and humidity and has been enormously influential in the development of building standards and legislation for mandated comfort levels [1]. Recently, this idealised and de-contextualized view of what constitutes comfort has begun to be challenged by researchers who emphasize that measurable parameters alone are not enough to give a full picture of the reasons that people perceive the indoor climate in the ways they do, or to account for the actual energy use of buildings [2]. As the introduction to a recent special issue puts it, there is a movement “…away from a passive and toward an active model of the person; away from purely physical or physiological paradigms toward those which emphasize meanings and social settings, and away from universalizing codes and standards … toward more flexible and more explicitly ‘adaptive’ strategies in engineering and design” [2, p. 307].

This ‘user-centred’ turn in building studies [3] highlights questions around how social relations, lived experience, and people’s actual use of buildings play into the experience of indoor climate. There is now recognition that achieving energy efficiency in a building is not an engineering problem alone, but a complex and ‘wicked’ problem dependent on the social relations and patterns of use between inhabitants in a building [1]. There is also recognition of a need to move away from static, pre-defined and steady-state models of comfort in order to achieve more sustainable levels of energy use in buildings [2].

Buildings are also increasingly utilising ubiquitous sensing technologies to control the functioning of indoor climate systems in ‘smart’ ways [4]. This often translates into increased automation of indoor climate systems; however, it has also been shown that building occupants’ satisfaction levels are strongly negatively affected by a lack of control over their environment [5]. Allowing people to control the indoor environment not only improves their overall satisfaction [5] but can also be an effective way to reduce energy consumption [6].

While user engagement in this context can be achieved through a range of interaction techniques, we specifically consider ambient and tangible interaction mechanisms in the context of this paper.

2.2 Ambient Interaction

Ambient devices are a class of interaction mechanism, commonly used to unobtrusively relay information to users. For instance, Ishii [7] explored how to instrument office environments with an array of ambient feedback mechanisms, including lights, sounds, air flow and projected information as part of the ambientROOM environment. Ambient feedback devices have been applied and studied in a wide range of settings [e.g. 8, 9].

Ambient devices commonly rely on relatively simple output mechanisms like LED-based glowing orbs. However, despite their apparent simplicity, designers have to carefully consider what information the device should display, how to implement appropriate notification intensity levels and how the device should transition between different states [10]. In addition to the role of ambient devices as pure output mechanisms, there is an increasing trend to combine these devices with tangible and other interaction mechanisms in order to enhance people’s physical work environment and provide both input and output capabilities [e.g. 11, 12]. For instance, AuraOrb [13] enhanced a “glowing orb” display with an eye contact sensor and touch input, allowing users to trigger interactions by shifting their focus to the device. Further examples of this approach can be found in the context of instant messaging and presence awareness [e.g. 12, 14, 15].

2.3 Informal Awareness

Informal awareness addresses tools and mechanisms that facilitate background awareness between work colleagues, incorporating knowledge of presence, activity and availability [16]. Initial research focused on facilitating casual interaction with the aim of supporting ongoing collaboration. However, more recent research has explored the notion of informal awareness in the context of domestic and other non-work environments [e.g. 17]. For instance, Elliot, Neustaedter and Greenberg [18] investigated the contextual properties of location for awareness in the home, showing that where and when devices are deployed is a vital factor for their usefulness and uptake.

In the context of our study, suitable modes of interaction through which individuals can engage with indoor climate data need to be accompanied by consideration of the social quality of these interactions. While the perception of indoor climate is based on individual preferences, the management of shared office environments is an inherently social problem which requires mutual awareness and consensus building across individuals and their specific preferences. As a result, we consider aspects of informal awareness as part of our design process.

3 Design Process

The starting point for our design was a pre-existing prototype for an embedded wireless sensing platform, which one of the authors had developed in a separate ongoing research project [19]. This platform had been developed to monitor and log indoor climate parameters in an office environment and although it had been programmed to run autonomously the platform did include the possibility for simple user input in the form of a small ‘joystick’ button. Triggered by this, we began to discuss whether it would be possible to expand the possible interactions and feedback available from the platform with a view to collecting user-preference information alongside raw sensor data and as a way of enquiring into future possibilities for collective user control of indoor climate systems.

Given the embedded, wireless and self-contained form factor of this pre-existing prototype, we decided to explore what ambient and tangible interaction techniques could be built around the sensor platform. We decided to work within the constraints of the existing platform and to run the interaction device off the sensor platform’s microprocessor, so that it could be small and unobtrusive enough to be easily positioned on people’s desks. At the same time, we aimed to make the feedback and interaction of the device rich enough to be engaging and usable, so that users would actually want to contribute their preference data. We had several overall goals for the design:

  • The interactions should be quick and unobtrusive

  • The device should provide an ambient awareness of a range of sensor readings

  • The device should allow setting of individual preferences in relation to each sensor reading

  • The device should allow comparison between individual and group (average) preferences, allowing users to maintain informal awareness of others’ preferences

  • The device should allow for user feedback on their level of social connectedness.

In addition to the directly sensed values provided by the sensor platform, we introduced a soft measure of “social connectedness”. We deliberately left the meaning of this measure open to interpretation by participants rather than specifying it as, for example, the number of other people currently in the office. Our aim was to allow people to indicate their feeling of the general social atmosphere of the office as they interpreted it, and it was these interpretations that we were particularly interested in. Rather than actually quantifying social connectedness, our aim with this measure was to open the topic up for discussion in our subsequent interviews with participants, alongside the other environmental factors (this is discussed further in the “study design” section below).

It is important to note that the design of the ambient interaction device was subject to limitations imposed by the existing sensor platform. The platform had a limited number of input/output ports available that could be used to communicate with the interaction device. As a result, the focus of the device design was not to build an interaction device with a large number of possible interaction capabilities. Instead, we focussed on how the device could provide a small but sufficient set of interaction mechanisms that would meet our design goals yet allow us to use the existing sensor-platform infrastructure.

To develop the design we undertook an iterative development process, in which we built working prototypes and then ‘lived with them’ ourselves in order to refine the usability, functionality and physical form. Importantly, the programmed ‘behaviour’ of the devices could only be understood by experiencing interaction with them over a period of time.

Through this process, several key improvements were made. The first was the addition of audio output to provide feedback when setting preferences and to remind users when the device had not been interacted with on a given day. We also discovered a need to support users in comparing the current sensor reading with their own setting, and in being able to ‘scroll’ through the different sensor readings.

We also realised during this process that there was an important question around whether people would want a precise reading of the sensor data in comparison to the more ambient display provided by the device. This prompted us to design and develop a second prototype based on a mobile-optimised web page, which reproduced the basic functions of the device with the ability to see and set specific sensor values. This second interface represents an alternative approach to building an interface onto the sensor platform, in which the functioning of the platform is exposed through a web interface. It was therefore useful as a point of comparison for the tangible and ambient design.

3.1 MiniOrb System

The MiniOrb system consists of three components: a sensor platform, an ambient and tangible interaction device, and a mobile application, each of which fulfils a different role. We introduce each component in turn.

Sensor Platform. The MiniOrb sensor platform is an Arduino-based sensing device that measures temperature, humidity, light and sound levels via an array of digital and analogue sensors (see Fig. 1, background). Each platform communicates wirelessly across a ZigBee mesh network to a dedicated server. The sensor platforms were placed in a relatively fixed position above users’ desks in order to achieve comparability of sensor readings. The platforms run autonomously and users do not interact with them directly.
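
To make this architecture concrete, the following Arduino-style sketch illustrates one way such a platform could periodically sample its sensors and forward readings to a server over a serially attached ZigBee radio. It is a minimal sketch for illustration only: the pin assignments, the choice of a DHT22 temperature/humidity sensor with analogue light and sound sensors, and the message format are assumptions, not the actual MiniOrb firmware.

```cpp
// Illustrative sketch only: pin choices, sensor models (DHT22 for temperature/
// humidity, analogue light and microphone sensors) and the message format are
// assumptions, not the actual MiniOrb implementation.
#include <DHT.h>

const uint8_t DHT_PIN   = 2;   // hypothetical digital pin for the DHT22 sensor
const uint8_t LIGHT_PIN = A0;  // hypothetical analogue pin for the light sensor
const uint8_t SOUND_PIN = A1;  // hypothetical analogue pin for the microphone
const unsigned long SAMPLE_INTERVAL_MS = 60000UL;  // sample once per minute

DHT dht(DHT_PIN, DHT22);
unsigned long lastSample = 0;

void setup() {
  Serial.begin(9600);   // XBee radio assumed to be attached to the serial port
  dht.begin();
}

void loop() {
  unsigned long now = millis();
  if (now - lastSample >= SAMPLE_INTERVAL_MS) {
    lastSample = now;
    float temperature = dht.readTemperature();   // degrees Celsius
    float humidity    = dht.readHumidity();      // percent relative humidity
    int light = analogRead(LIGHT_PIN);           // raw 0-1023 reading
    int sound = analogRead(SOUND_PIN);           // raw 0-1023 reading
    // Forward a simple comma-separated record to the server via the radio.
    Serial.print("T="); Serial.print(temperature);
    Serial.print(",H="); Serial.print(humidity);
    Serial.print(",L="); Serial.print(light);
    Serial.print(",S="); Serial.println(sound);
  }
}
```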

Fig. 1. MiniOrb sensor platform (background) and ambient interaction device (foreground)

3.2 MiniOrb Interaction Device

The MiniOrb device is an ambient and tangible interaction device that records users’ office comfort preference values and displays both sensor readings from the user’s local sensor platform and average comfort preferences across all users (see Fig. 1, foreground). The device consists of three small LEDs that indicate different states, a piezo speaker, a button and a scroll-wheel potentiometer for user input, as well as a dome-shaped “orb”: a 3D-printed plastic light diffuser which contains a bright RGB LED and a laser-etched/cut cover.

The dome-shaped “orb” RGB LED is the main output mechanism of the device. To output information the device cycles through a series of colours, which represent the different sensor categories of “temperature”, “light”, “noise” and “social”. The available colours were constrained by the capabilities of the RGB LED. The directly sensed parameters of temperature, light and noise were mapped to the primary colours of red, green and blue respectively, because these give the clearest colour from the LED, while the ‘soft’ measure of social connectedness was mapped to yellow, which relies on mixing of the red and green channels (see Fig. 2, left, for the mapping between colours and sensor categories). In addition, three small LEDs linked to the icons for sensor, user and group respectively indicate whether the reading shown is a sensor value, a personal preference or a group average. Values are mapped to the colour intensity of the orb, i.e. the higher the value the more intense the colour.
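
The colour mapping described above can be illustrated with the following sketch, which translates a sensor category and a normalised value into PWM duty cycles for the orb’s RGB LED. The PWM pin numbers and the 0.0–1.0 value range are assumptions made for illustration.

```cpp
// Illustrative mapping of a sensor category and a normalised value (0.0-1.0)
// to the colour intensity of the orb LED. PWM pin numbers are assumptions.
const uint8_t RED_PIN = 9, GREEN_PIN = 10, BLUE_PIN = 11;

enum Category { TEMPERATURE, LIGHT, NOISE, SOCIAL };

void showOnOrb(Category category, float value) {
  uint8_t intensity = (uint8_t)(constrain(value, 0.0, 1.0) * 255);
  uint8_t r = 0, g = 0, b = 0;
  switch (category) {
    case TEMPERATURE: r = intensity; break;                // red
    case LIGHT:       g = intensity; break;                // green
    case NOISE:       b = intensity; break;                // blue
    case SOCIAL:      r = intensity; g = intensity; break; // yellow (red + green)
  }
  analogWrite(RED_PIN, r);
  analogWrite(GREEN_PIN, g);
  analogWrite(BLUE_PIN, b);
}
```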

Fig. 2. MiniOrb cheat sheet (left) and conceptual device design (right) (Color figure online)

For instance, to display temperature-related information the device cycles through three settings. First it displays the value read by the sensor platform as a matching relative intensity of the colour red. The LED under the “sensor” icon lights up to indicate the state. The device then displays the last known user preference, again indicated by the corresponding status LED “user”. The temperature cycle is then completed by displaying the value for the “group” preference in a similar fashion. Each state is displayed for approximately 5 s. Once the temperature cycle completes, the device moves on to the “light” category using green as the output colour, and so forth.
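
A display cycle of this kind could be implemented in a non-blocking fashion along the lines of the sketch below, which steps through the sensor, user and group states of each category roughly every five seconds using millis() rather than blocking delays, so the device stays responsive to the button and scroll wheel. It builds on the hypothetical showOnOrb() helper sketched above; the status-LED pins and the way values are stored are likewise assumptions.

```cpp
// Illustrative display cycle: each category is shown as three states
// (sensor, user, group), each for roughly five seconds, before moving on to
// the next category. Builds on Category and showOnOrb() from the previous
// sketch; pin choices are again assumptions.
enum State { SHOW_SENSOR, SHOW_USER, SHOW_GROUP };

const unsigned long STATE_DURATION_MS = 5000UL;
const uint8_t STATUS_LED_PINS[3] = {5, 6, 7};  // hypothetical sensor/user/group icon LEDs

Category currentCategory = TEMPERATURE;
State currentState = SHOW_SENSOR;
unsigned long stateStart = 0;

// Latest known values, normalised to 0.0-1.0 (updated elsewhere from the
// sensor platform, local user input and the server respectively).
float sensorValue[4], userPref[4], groupPref[4];

void setStatusLed(State s) {
  for (uint8_t i = 0; i < 3; i++) {
    digitalWrite(STATUS_LED_PINS[i], (i == (uint8_t)s) ? HIGH : LOW);
  }
}

void updateDisplayCycle() {
  if (millis() - stateStart < STATE_DURATION_MS) return;
  stateStart = millis();
  if (currentState == SHOW_SENSOR) {
    currentState = SHOW_USER;
  } else if (currentState == SHOW_USER) {
    currentState = SHOW_GROUP;
  } else {
    currentState = SHOW_SENSOR;
    currentCategory = (Category)((currentCategory + 1) % 4);  // next colour cycle
  }
  float value = (currentState == SHOW_SENSOR) ? sensorValue[currentCategory]
              : (currentState == SHOW_USER)   ? userPref[currentCategory]
              :                                 groupPref[currentCategory];
  setStatusLed(currentState);   // light the matching sensor/user/group icon LED
  showOnOrb(currentCategory, value);
}
```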

The “social” category differs from the other categories, in that it is not based on input from the sensor platform, but purely determined by user feedback and intended as a trigger for subsequent discussion of participants’ interpretations of this (we discussed the notion of “social connectedness” in the design section). Thus, for this sensor category the “sensor” value is identical to the “group” value. Each user was given a “cheat sheet” that outlined the colour-codes, states and interactions.

The device offers users three interaction mechanisms by combining the push button and scroll wheel: (1) scroll wheel: when users scroll the wheel they can manually choose one of the four sensor categories, e.g. if users are interested in the sound reading they can scroll the wheel to get the device to display the corresponding cycle immediately, without having to wait for the device to complete the other cycles. (2) push button: pressing the button displays the user preference for the current sensor category, and releasing the button displays the corresponding sensed value. This allows users to efficiently compare their own preference against the sensed value. (3) scroll wheel and push button: this combination allows users to enter a preference value for the selected sensor category. To do so they keep the button depressed and set the required orb intensity via the scroll wheel. The preference is recorded as soon as the button is released. The device was designed so that this interaction could be easily achieved with a single hand, e.g. by pressing the button with a finger and scrolling the wheel with the thumb.
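
In firmware, these three mechanisms could be combined roughly as sketched below, building on the variables and helpers from the previous sketches. The pin numbers, the threshold used to detect scroll-wheel movement, the preference message format and the confirmation chirp are illustrative assumptions rather than the actual MiniOrb implementation.

```cpp
// Illustrative input handling for the button/scroll-wheel combinations,
// reusing the variables and helpers from the previous sketches. Pin numbers,
// thresholds and the submit message format are assumptions.
const uint8_t BUTTON_PIN = 4;   // hypothetical push button pin (active low)
const uint8_t WHEEL_PIN  = A2;  // hypothetical scroll-wheel potentiometer pin
const uint8_t PIEZO_PIN  = 8;   // hypothetical piezo speaker pin

bool buttonWasDown = false;
float pendingPref = 0.0;

void handleInput() {
  bool buttonDown = (digitalRead(BUTTON_PIN) == LOW);
  float wheel = analogRead(WHEEL_PIN) / 1023.0;    // normalise to 0.0-1.0
  static float wheelAtPress = 0.0;

  if (buttonDown && !buttonWasDown) {
    // (2) On press: show the stored preference for the current category.
    wheelAtPress = wheel;
    pendingPref = userPref[currentCategory];
    setStatusLed(SHOW_USER);
    showOnOrb(currentCategory, pendingPref);
  } else if (buttonDown) {
    // (3) If the wheel moves while the button is held, adjust the preference.
    if (fabs(wheel - wheelAtPress) > 0.02) {
      pendingPref = wheel;
      showOnOrb(currentCategory, pendingPref);
    }
  } else if (buttonWasDown) {
    // On release: record the (possibly updated) preference, confirm with a
    // chirp, then show the sensed value so the two can be compared directly.
    userPref[currentCategory] = pendingPref;
    Serial.print("PREF,"); Serial.print((int)currentCategory);
    Serial.print(","); Serial.println(pendingPref);
    tone(PIEZO_PIN, 1760, 80);                     // confirmation "chirp"
    setStatusLed(SHOW_SENSOR);
    showOnOrb(currentCategory, sensorValue[currentCategory]);
  } else {
    // (1) Scrolling alone selects which sensor category cycle is displayed.
    Category selected = (Category)(uint8_t)(wheel * 3.999f);
    if (selected != currentCategory) {
      currentCategory = selected;
      currentState = SHOW_SENSOR;
      stateStart = millis();
    }
  }
  buttonWasDown = buttonDown;
}
```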

In addition to the visual output mechanisms the device employs a small number of audio cues to enhance the interaction. The interaction with the scroll wheel is enhanced with subtle “click” sounds that give users a sense of selecting discrete units. A slightly more pronounced sound is used when the wheel moves into the “middle” position. A separate “chirp” sound is used to notify the user that their preference has been recorded and sent off to the server. Lastly, once per day the device issues a short “remember me” buzz sound to encourage users to record their preferences. This sound was specifically designed to be noticeable without annoying users.
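
The once-per-day reminder could be realised with the same non-blocking timing approach, as in the brief sketch below; the 24-hour interval tracking, pitch and duration are assumptions.

```cpp
// Illustrative once-per-day "remember me" buzz, tracked with millis() so it
// does not block the rest of the firmware. Pitch, duration and the 24-hour
// interval handling are assumptions.
const unsigned long DAY_MS = 24UL * 60UL * 60UL * 1000UL;
unsigned long lastReminderCheck = 0;
bool preferenceSetToday = false;  // assumed to be set true when a preference is submitted

void maybeIssueReminder() {
  if (millis() - lastReminderCheck >= DAY_MS) {
    lastReminderCheck = millis();
    if (!preferenceSetToday) {
      tone(PIEZO_PIN, 880, 400);  // short, noticeable buzz
    }
    preferenceSetToday = false;   // start counting the new day
  }
}
```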

3.3 MobiOrb Mobile Application

The MobiOrb mobile application is an alternative interface that provides the same basic functionality as the MiniOrb device, but employs different interaction mechanisms (see Fig. 3). Apart from the interaction approaches, the main difference between the two interfaces is that the mobile interface allows users to interact with specific sensor values (e.g. a temperature of 26.1 °C).

Fig. 3. MobiOrb mobile interface

The main screen consists of four sections, one for each sensor type. Each section contains a colour-coded slider that matches the MiniOrb sensor colour scheme. Users can move these sliders to record their preferences, which are also displayed in plain text in a grey bar in the top part of the slider. The readings in bold at the bottom of each section show the actual sensor value using a sensor-specific unit. The detached grey bar in the middle of each section depicts the group average value. The sensor and preference values exactly match the ones displayed on the MiniOrb device interface. The mobile interface allows users to more accurately assess and set sensor values, but at the same time does not provide the same ambient accessibility as the MiniOrb devices that are situated on users’ desks.

4 MiniOrb Evaluation

The evaluation of the MiniOrb system was conducted through a series of user studies. In the context of this paper we report on the outcomes of a two-week in situ trial of the MiniOrb system as well as the outcomes of a series of semi-structured interviews conducted following the trial.

4.1 Study Design and Setup

Study participants were recruited amongst the inhabitants of the Queensland University of Technology’s Science and Engineering Centre (SEC), Australia, a multidisciplinary research facility spread across two newly constructed buildings that houses academics, general staff and postgraduate students. An invitation email was sent out to all SEC inhabitants to participate in the MiniOrb study. The study was structured in three parts: an initial questionnaire exploring existing attitudes towards indoor climate preferences; a two-week trial of the MiniOrb system; and a follow-up interview investigating participants’ experience of use and their interpretation of the sensor categories. Participants could choose how many of the stages to complete based on their own availability. In total, 29 people participated in at least one of the stages of the study. 14 of these 29 were available to participate in the initial questionnaire only. 15 participants were available to participate in both the questionnaire and the trials, but, again due to availability, only 11 of these were also able to participate in the follow-up interviews.

15 participants took part in one of two consecutive trials, which were each conducted over a period of two weeks. At the start of each trial a sensor platform and MiniOrb interaction device were installed on each participant’s desk. Each participant received a short introduction on how to use the device and was given a “cheat sheet” – a very short manual outlining the sensor colour code, symbols and basic functions (see Fig. 2, left). Participants were not instructed to use the device at particular times, but rather encouraged to record preference settings when they felt it was appropriate to do so. The intention of giving participants the MiniOrb device only in the first week was to allow them to familiarise themselves with the likely less familiar interactions and functioning of the ambient and tangible interface before introducing the relatively more familiar mobile-phone-based MobiOrb interface.

During the second week of the trial participants were given the option of using the MobiOrb mobile application in addition to the MiniOrb device on their desk. Our aim was not to compare the two interfaces in an A/B test, but to add an interface that offered accurate numerical readings in order to gain a better understanding of how well the ambient interface performed in relaying office comfort information. Out of fifteen total participants, seven used the mobile interface.

After each trial was completed we conducted a series of semi-structured interviews with the participants, which lasted between 20 and 30 minutes. A total of 11 participants across the two trials took part in the interviews. We used a grounded theory approach and conducted open coding to categorise the interview results.

4.2 Study Results

In this section we briefly discuss the questionnaire results, but predominantly focus on the results of the trial and follow-up interviews.

Questionnaire. The results from the questionnaire provided a baseline for the more detailed qualitative results of the post-trial interviews. The questionnaire results confirmed our assumption that the suggested environmental factors are important to the participants’ perception of office comfort. They further confirmed that, with the exception of “social atmosphere”, participants felt that they had very limited control over their office environment. Most participants were neither happy nor unhappy with their overall indoor climate, but were reasonably happy with the location of their desks. Generally, our participants felt that being able to change the environmental factors considered would have a high impact on their office comfort levels.

Follow-up Interviews. The interviews were structured into three overarching sections: (1) attitudes towards office comfort, (2) experience of using the MiniOrb ambient device and (3) experience of using the MobiOrb mobile application. The first section was intended to enhance the data on office comfort levels, collected in the questionnaires, and provided more detail on participants’ working context and differences in attitudes between individuals. The other two sections explored both when and how people used the devices on their desk, as well as how they perceived the usability and user experience of the respective device and application. We discuss the results for each section in turn.

Attitudes Towards Office Comfort. While many participants appreciated their office environment overall, we identified a number of diverse concerns regarding office comfort. The most commonly mentioned issue was temperature. Many of the participants felt that the overall temperature in the building was set a “little bit” too low. Some participants reported feeling cold at certain parts of the day (e.g. the afternoon). Since the study was conducted in a sub-tropical environment, this generally did not mean that too little energy was used to warm the building, but rather that too much was used to cool it.

The second most commonly mentioned issue related to noise, which was nearly exclusively interpreted as noise caused by conversations. A number of participants felt disturbed when other people nearby chatted or conducted phone conversations. About half of the participants mentioned that they coped with this interruption by using headphones. Other participants’ strategies involved moving to a different (quieter) desk, a meeting room, or working in the library. Participants who reported noise issues were exclusively situated in the open office environment. Other, non-conversation-related, background noise was not perceived to be an issue.

Lighting, and in particular the setting of the window blinds, was reported as an issue by some participants. Depending on where their desk was located in relation to the windows, they either perceived that they received too much light, which caused issues with glare and reflection on monitors, or the opposite, that the office was too dark and they were not able to see the outside environment. However, complaints regarding lighting were overall less prevalent and intense compared to those regarding noise and temperature.

Another issue that was mentioned a number of times was the notion of privacy in the open office setting. Some participants reported that they would like to have higher, more secluded, cubicles or offices to be able to work in a more private setting. When asked how they perceived their current level of control over their environment, the majority of participants felt that their level of control was very low or even non-existent. The most requested control factor was being able to change the temperature, followed by control over the window blinds. Some participants mentioned that they would like control over aspects like privacy and noise, but also reflected that this would likely require changes in the physical setup of the office.

MiniOrb Device Experience. All interviewees reported having used the device. We identified a number of usage patterns with regards to when participants recorded comfort preferences. First, many study participants used the device in the morning when they first arrived at their desk, and again when they returned to the desk from a break. The reported reason for this was that the device was perceived as very inconspicuous (ambient) and participants generally “forgot” that it was there after a while. However, when they returned to their desk they commonly noticed the glowing orb and “remembered” that the device was there. Second, participants would specifically use the device when they became aware of being uncomfortable or when the local environment changed (e.g. the window blinds going up and down). Third, participants commonly entered data when the device issued a “remember me” buzz sound. Nearly all participants perceived this mechanism very positively. They felt that it helped them to remember to provide input and did not feel that the interaction was intrusive or distracting. One participant also reported that they were encouraged by hearing other people send feedback from their own devices (by hearing the “feedback submitted” sound) and subsequently remembered to use the device themselves.

Overwhelmingly, participants enjoyed having the device situated on their desk and perceived the device as very unobtrusive as well as easy to use. However, specific functionalities were used at different rates as well as interpreted and applied differently. A significant difference emerged in the way people recorded preferences. Some participants used the push-button feature, which allowed them to directly compare the current sensor reading for a specific category with their user preference. These participants would then set the value a “little bit” higher or lower than the current status to indicate a gradual preference change. By contrast, other participants would turn their preference value to the maximum or minimum setting to indicate a strong desire for this value to change. These participants did not perceive that they were setting a specific value, but rather interpreted the interaction as “casting a vote”. Participants reported that they particularly engaged in this type of voting when they felt strongly about their choice or wanted to communicate their displeasure (e.g. they felt annoyed because the environment was too noisy to concentrate).

There were a limited number of reported uses of the “social” category. As described earlier, our intention with adding this category to the interface was in large part to trigger discussion in the interviews of how people interpreted it. While some participants reported that they were unsure how to interpret this category, others gave surprising examples of uses of the feature that resulted in social interaction. For instance, some participants belonging to a working group would “turn up” their social preference value at the end of a working day to mutually indicate to each other that they were ready to engage in social activities. In these cases it seems that the social category functioned not only as a measure of social atmosphere, but just as much as a means for people to signal calls for social action to one another.

The functionality that was reportedly used least was the “group average” feature. Only some participants reported that they actively observed the group setting after they submitted their preference in order to understand how other users felt. Many other participants however stated that they did not pay attention to the group setting, or in some cases were not sure what it meant.

A number of users pointed out that setting feedback levels made them feel like “somebody cared”. While these participants were aware that the system only recorded feedback values and did not effect change, they nevertheless valued the fact that their opinion did in some way count. One user opined: “(…) it just gave me the feeling that somebody maybe cares somewhere”.

The interviews revealed a number of other, smaller, issues regarding the system’s functionality. One participant thought that the “press button” function would allow them to compare their personal preference with group average values, rather than sensor values. A single participant felt that the light from the orb was somewhat distracting and subsequently positioned it out of sight. However, this attitude was not shared by the large majority of participants.

MobiOrb Application Experience. Out of the eleven participants we interviewed, seven had used the mobile application. The most common observation was that the mobile application was noticed and thought of less often. Most participants felt that the ambient device reminded and encouraged them to use it because it was situated on their desks. The mobile application, by contrast, had to be remembered and used on purpose.

However, when people actually used the application they appreciated the ease with which feedback values could be set and found it generally easy to use. One participant commented that setting multiple values was quicker and easier on the mobile device. The fact that the mobile application displayed concrete values rather than relative colour hues was an obvious difference between the two interfaces. Our participants on average did not seem to prefer either way of presenting values over the other. Some participants expressed that seeing the concrete values, and in particular the range within which the value could be changed, enhanced their experience: “It just felt like I knew more what I was saying with the range”. However, another participant mentioned that they liked being able to focus on setting their perceived comfort level relative to the current sensed value, without having to think about absolute numbers.

5 Discussion

5.1 Discussion of Interview Results

The interviews provided a nuanced picture of participants’ attitudes towards office comfort and their use of the different elements of the system. In the following we highlight five pertinent issues that warrant further discussion:

“Protest” vs. Gradual Vote. Because the feedback mechanism of the ambient device was based on colour intensity, the meaning of feedback values was open to interpretation. Our participants used the feedback mechanisms in two significantly different ways: (a) to submit gradual changes based on the sensor value, indicating relative shifts in required comfort levels, or (b) to submit a radical change by setting the value to the minimum or maximum setting.

The latter approach, here also referred to as a “protest vote”, was used to express a strong feeling of discomfort and was similar to a yes/no voting approach, while the former approach aimed to provide an accurate reading of the desired value. Both approaches are valid; however, the protest vote was less applicable on the mobile application, since users were able to see the specific value of their preference setting. Our participants reported that once they saw the results of their “protest vote” on the mobile interface they realised that they had set the preference value either very low or very high and that this setting did not reflect their actual preference. We believe that both approaches are valid in the context of providing feedback on comfort levels and should be supported. This issue requires further reflection in the design of future iterations of our system and other similar systems.

Minimal Design Trade-off. The minimal design of the interaction device was an important design consideration. This was a design constraint that we consciously chose by deciding to work within the existing constraints of the sensor platform. The challenge was to build a small device that combined suitable ambient output mechanisms with a small number of tangible interaction mechanisms. The device had to support a suitable range of functionality without burdening the user with too much complexity. Based on the results of the interviews we believe that we overall succeeded in achieving this goal, but for future work we would strongly consider redesigning some of the underlying platform while maintaining the philosophy of keeping the platform as minimal as possible.

With regards to its “ambient quality”, the device was perceived as fading into the background and being available when people wanted to interact with it. However, there were signs that not all of the intended functionality was used to the same extent. In particular, the group average reading was only used by a limited number of participants. This is possibly related to our choice of functionality that allowed users to compare their feedback value against the sensor value, but not against the group average value. This is a potentially significant design decision because, as became clear from the interviews, an important factor for people is feeling that their preferences are reflected or supported by the group. This highlights that indoor comfort is as much a social phenomenon as a measurable physical phenomenon, and by choosing to compare with the sensor values rather than the group preference, our interface emphasised the “physical” view.

This presents a design trade-off when dealing with a device with limited interaction mechanisms. We suspect that rather than trying to integrate both the comparison of sensor values and group averages into a single device, an alternative and potentially better design would be to remove rarely used functionality (e.g. group average) and represent this functionality on a separate device or interface (e.g. a “MaxiOrb” with the sole purpose of publicly displaying group averages to a group of users in a section of an office).

Somebody Cares Somewhere. The notion that some participants felt positively about the fact that their feedback was recorded highlights the importance of aspects of office comfort that go beyond measurable factors, such as “being appreciated”. With regards to the design of similar systems, this raises the question of how systems can be designed to more actively give users the feeling of being listened to, as well as to provide mechanisms to effect change or reflect office comfort attitudes to other inhabitants (e.g. the “MaxiOrb” public display mentioned above could indicate that several users felt that the workspace was getting too noisy, and thus raise the level of awareness regarding shared attitudes in office environments).

Prompting Interaction. The small “remember me” buzz sound prompt, issued to encourage users to submit a preference value, had a significant impact on the usage pattern of the device. Interestingly, our participants did not find this interaction to be distracting, but perceived it as a welcome reminder to interact with the system. Conceptually, this interaction can be interpreted as briefly moving the device from its ambient state into the user’s focus, acting as a reverse notification that requests user interaction rather than indicating a change in the system’s state.

Ambient vs. Mobile Interaction. It is too early, and beyond the scope of this paper, to conduct a conclusive comparison between the use of the ambient interaction device and the mobile application in the context of our study. However, the results of our interviews indicate that the two interfaces fulfilled different and important roles. One of the most important aspects of the interaction device was its ambient nature. The fact that the device was located on people’s desks meant that it acted as a constant reminder, a central quality when seeking to continuously solicit user input. The mobile application, by comparison, was appreciated for its straightforward and precise interface, which allowed users to provide specific feedback and understand the range of the different sensor categories. Interestingly, a number of users remarked that they would have preferred this interface to be located on their computer desktop rather than their mobile phone, to provide better integration with the working environment on their desk. Generally, the mobile interface was perceived as an extension that provided additional functionality to the ambient interaction device, rather than a replacement of it.

6 Conclusions

In this paper we described the design, use and evaluation of MiniOrb, a system that employs ambient and tangible interaction mechanisms to allow inhabitants of office environments to report on subjectively perceived office comfort levels. One attraction of a tangible interaction approach in this context is that it gives physical presence to a phenomenon that is normally in the background of people’s experience. Our research addresses two pertinent questions. First, how can ambient and tangible interaction mechanisms support office inhabitants in recording and reflecting on their subjective office comfort levels, and second, how do these mechanisms compare to more traditional approaches that enable users to see and set specific sensor values?

The results of our study show that minimal interaction devices, combining ambient and tangible interaction approaches, are well suited to engaging users in the process of providing preferences. This process can be aided by the provision of alternative interface mechanisms that provide accurate sensor and reference values when required. The results of our study are particularly relevant in light of the fact that our system did not effect change in the users’ comfort levels, but merely recorded their preferences, thus providing less of an incentive to engage with the system. The fact that our system was used and users felt that they were “listened to” highlights the importance of exploring mechanisms to provide individualised control over office comfort levels. While the introduction of our system was successful, the results of our study revealed many nuances with regards to how people provided feedback, which functionality to integrate in a minimal interaction device, how to prompt interactions and the different ways people interpret vague and specific sensor readings.

An important contribution of our approach for ongoing research into understanding people’s responses to indoor climate conditions is that it provides a method of recording preferences in situ and through time and for encouraging people to reflect on their experience of indoor climate. This supports the need for moving away from static steady-state approaches to indoor climate control to ones that take account of individual variability and changes over time.