
Embedding intelligent eco-aware systems within everyday things to increase people’s energy awareness

Abstract

There is a lack of energy-consumption awareness in working spaces: people in their workplaces neither receive feedback on their energy consumption nor pay a monthly invoice to an electricity provider. To enhance workers’ energy awareness, we have transformed everyday shared electrical appliances placed in common spaces (e.g. projectors, coffee-makers, printers, screens, portable fans and kettles) into persuasive eco-aware everyday things. The proposed approach lets these appliances report their usage patterns to a Cloud server, where the data are transformed into time series and processed to obtain each appliance’s next-week usage forecast. The autoregressive integrated moving average (ARIMA) model was selected as the potentially most accurate method for producing such usage predictions when compared with the performance of three different configurations of artificial neural networks. Our major contribution is the application of soft computing techniques to the field of sustainable persuasive technologies: the consumption predictions are used to trigger timely persuasive interactions that help users operate the appliances as energy-efficiently as possible. Qualitative and quantitative results were gathered in a between-groups study with three groups concerning the use of shared electrical coffee-makers in the workplace. The goal of this study was to assess the effectiveness of the proposed eco-aware design in a workplace environment in terms of energy saving and of the degree of affiliation between people and the smart appliances needed to create a green-team relationship.


Notes

  1. http://www.alertme.co.uk.

  2. http://www.currentcost.com.

  3. http://www.theowl.com.

  4. https://nest.com/blog/2012/08/23/nest-thermostats-for-business/.

  5. The whole questionnaire, in its Spanish version, is available at http://tiny.cc/xrasbx.

  6. https://dl.dropboxusercontent.com/u/3163534/dataset%20soft-computing.zip.

  7. A complete model description and the selection criteria were thoroughly described in López-de-Armentia et al. (2014).

References

  1. Adebiyi AA, Adewumi AO, Ayo CK (2014) Comparison of ARIMA and artificial neural networks models for stock price prediction. J Appl Math 2014. doi:10.1155/2014/614342

  2. Arroyo E, Bonanni L, Selker T (2005) Waterbot: exploring feedback and persuasive techniques at the sink. In: Proc. of CHI’05, pp 631–639

  3. Box GEP, Jenkins G (1990) Time series analysis, forecasting and control. Holden-Day, San Francisco

  4. Broms L, Katzeff C et al. (2010) Coffee maker patterns and the design of energy feedback artefacts. In: Proc. of DIS’10, ACM, pp 93–102

  5. Brynjarsdóttir H, Håkansson M, Pierce J, Baumer E, DiSalvo C, Sengers P (2012) Sustainably unpersuaded: how persuasion narrows our vision of sustainability. In: Proc. of CHI’12. ACM, pp 947–956

  6. Bush E, Nipkow J, Josephy B, Heutling S, Griesshammer R (2009) Strategies to enhance energy efficiency of coffee machines. In: Proc. of EEDAL’09

  7. CarbonTrust (2013) CarbonTrust: employee awareness and office energy efficiency. http://tiny.cc/o5p5tw. Accessed 30 Sep 2014

  8. Carrico AR, Riemer M (2011) Motivating energy conservation in the workplace: An evaluation of the use of group-level feedback and peer education. J Environ Psychol 31(1):1–13

  9. Casado-Mansilla D, López-De-Armentia J, Garaizar P, López-De-Ipiña D (2014) To switch the coffee maker or not: that is the question to be energy efficient at work. In: Proc. of E.A. of CHI’14 (publication pending). ACM, New York

  10. Chai T, Draxler RR (2014) Root mean square error (RMSE) or mean absolute error (MAE)? Arguments against avoiding RMSE in the literature. Geosci Model Dev 7(3):1247–1250

  11. Chapman J (2005) Emotionally durable design: objects, experiences and empathy. Earthscan LLC, London

  12. Chetty M, Tran D, Grinter RE (2008) Getting to green: understanding resource consumption in the home. In: Proc. of UbiComp’08. ACM, pp 242–251

  13. Costanza E, Ramchurn SD, Jennings NR (2012) Understanding domestic energy consumption through interactive visualisation. In: Proc. of UbiComp’12. ACM, pp 216–225

  14. Crosbie T, Houghton M (2012) Sustainability in the workplace. In: Technical report, sustainability at work

  15. Daamen DDL, Staats H, Wilke HAM, Engelen M (2001) Improving environmental behavior in companies: the effectiveness of tailored versus nontailored interventions. J Environ Behav 33(2):229–248

  16. Darby S (2006) The effectiveness of feedback on energy consumption: a review for DEFRA of the literature on metering, billing and direct displays. In: Technical report. University of Oxford, Oxford

  17. Darley JM, Latane B (1968) Bystander intervention in emergencies: diffusion of responsibility. J Personal Soc Psychol 8:277–281

  18. DiSalvo C, Sengers P, Brynjarsdóttir H (2010) Mapping the landscape of sustainable HCI. In: Proc. of CHI’10. ACM, pp 1975–1984

  19. Energy Efficiency and Renewable Energy (2008) Energy efficiency trends in residential and commercial buildings. Technical report, US DoE

  20. EU-Commission (2008) Directive 2008/98/EC of the European Parliament and of the Council of 19 November 2008 on waste and repealing certain Directives. Technical report, European Commission. http://bit.ly/RdWxOj

  21. Farber D (2012) CNET news: twitter hits 400 million tweets per day, mostly mobile. http://cnet.co/KHlg8q. Accessed 30 Sep 2014

  22. Fischer C (2008) Feedback on household electricity consumption: a tool for saving energy? J Energy Effic 1(1):79–104

  23. Fogg BJ (2003) Persuasive technology: using computers to change what we think and do. Ubiquity. doi:10.1145/764008.763957

  24. Fogg BJ (2009) A behavior model for persuasive design. In: Proc. of PERSUASIVE’09. Springer, Berlin, pp 40–1407

  25. Foster D, Lawson S, Blythe M, Cairns P (2010) Wattsup?: motivating reductions in domestic energy consumption using social networks. In: Proc. NordiCHI’10. ACM, pp 178–187

  26. Foster D, Lawson S, Wardman J, Blythe M, Linehan C (2012) “Watts in it for me?”: design implications for implementing effective energy interventions in organisations. In Proc. of CHI’12. ACM, pp 2357–2366. 978-1-4503-1015-4

  27. Froehlich J, Findlater L, Landay J (2010) The design of eco-feedback technology. In: Proc. of CHI’10. ACM, pp 1999–2008

  28. Gustafsson A, Gyllenswrd M (2005) The power-aware cord: energy awareness through ambient information display. In: EA of CHI’05. ACM, pp 1423–1426

  29. Hornik K, Stinchcombe M, White H (1989) Multilayer feedforward networks are universal approximators. J Neural Netw 2:359–366

  30. Jahn M, Schwartz T, Simon J, Jentsch M (2011) EnergyPULSE: tracking sustainable behavior in office environments. In: Proc. of e-Energy’11. ACM, pp 87–96

  31. Jönsson L, Broms L, Katzeff C (2010) Watt-Lite: energy statistics made tangible. In: Proc. of DIS’10. ACM, pp 240–243

  32. Khashei M, Bijari M (2010) An artificial neural network (p, d, q) model for timeseries forecasting. Expert Syst Appl 37:479–489

  33. López-de-Armentia J, Casado-Mansilla D, López-de-Ipiña D (2014) Reducing energy waste through eco-aware every-day things. J Mob Inf Syst 10:79–103

  34. Nass C, Fogg BJ, Moon Y (1996) Can computers be teammates? Int J Hum-Comput Stud 45(6):669–678

  35. Pierce J, Paulos E (2012) Beyond energy monitors: interaction, energy, and emerging energy systems. In: Proc. of CHI’12. ACM, pp 665–674

  36. R-Core Team (2013) R: a language and environment for statistical computing. R foundation for statistical computing, Vienna, Austria. ISBN 3-900051-07-0. http://www.R-project.org/. Accessed 19 June

  37. Schmidt A, Gellersen H-W, Beigl M (1999) Matching information and ambient media. In: Cooperative buildings. Integrating information, organizations, and architecture. Springer, Berlin, pp 140–149

  38. Schwartz T, Betz M, Ramirez L, Stevens G (2010) Sustainable energy practices at work: understanding the role of workers in energy conservation. In: Proc. of NordiCHI’10. ACM, pp 452–462

  39. Siero FW, Bakker AB, Dekker GB, Van-den-Burg MTC (1996) Changing organizational energy consumption behaviour through comparative feedback. J Environ Psychol 16(3):235–246

  40. Starik M, Marcus AA (2000) Introduction to the special research forum on the management of organizations in the natural environment. J Acad Manag 43(4):539–547

  41. Thieme A, Comber R, Miebach J, Weeden J, Kraemer N, Lawson S, Olivier P (2012) We’ve bin watching you: designing for reflection and social persuasion to promote sustainable lifestyles. In: Proc. of CHI’12. ACM, pp 2337–2346

  42. Ventura D, Casado-Mansilla D, López-de-Armentia J, Garaizar P, Catania V, López-de-Ipiña D (2014) ARIIMA: a real IoT implementation of a machine-learning architecture for reducing energy consumption. In: Proc. of UCAmI’14, LNCS

  43. Winett RA, Neale MS, Grier HC (1979) Effects of self-monitoring and Feedback on Residential Electricity Consumption. J Appl Behav Anal 12(2):173–184

  44. Yang R, Newman MW (2013) Learning from a learning thermostat: lessons for intelligent systems for the home. In: Proc. of UbiComp’13. ACM, pp 93–102

  45. Yun R, Lasternas B, Aziz A, Loftness V, Scupelli P, Rowe A, Kothari R, Marion F, Zhao J (2013b) Toward the design of a dashboard to promote environmentally sustainable behavior among office workers. In: Proc. of PERSUASIVE’13. Springer, Berlin, pp 246–252

  46. Yun R, Scupelli P, Aziz A, Loftness V (2013a) Sustainability in the workplace: nine intervention techniques for behavior change. In: Proc. of PERSUASIVE’13. Springer, Berlin, pp 253–265


Acknowledgments

The authors are very grateful to the University of Deusto for the financial support of their Ph.D. studies, and to the project Future Internet II (IE11-316), supported by the Basque Government.

Author information

Correspondence to Diego Casado-Mansilla.

Additional information

Communicated by A. Jara, M.R. Ogiela, I. You and F.-Y. Leu.

Appendix

ANNs, as a soft computing technique, are widely used as forecasting models in many areas. They are data-driven, self-adaptive methods that require few prior assumptions. They are also good predictors, able to generalise from the patterns learnt from the original data and thereby to infer the unseen part of the population correctly. Their wide use is due to their efficient performance in solving nonlinear problems, including real-world ones. This contrasts with ARIMA, which assumes that the series is generated by a linear process and may consequently be inappropriate for most real-world problems, which are nonlinear (Adebiyi et al. 2014). Although ANNs have provided competitive results when compared with ARIMA models (Khashei and Bijari 2010), we show that in our specific case the ARIMA-based model performs better according to the Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Mean Absolute Scaled Error (MASE) and Mean Absolute Percentage Error (MAPE) comparative metrics.

ANNs in a nutshell

An ANN is an interconnected group of nodes able to approximate a functional relationship between input and output variables in a certain domain of interest. By analogy with biological neural networks, the nodes that compose the ANN are called neurons and the directed edges that connect them are called synapses. The neurons are organised in layers, which are usually fully connected by synapses. Each synapse carries a weight indicating the influence of the corresponding neuron on the whole model. Data pass through the network as signals: each neuron first combines all its incoming signals with a so-called integration function (usually a summation) and then applies a so-called activation function to obtain its output. A neural network with zero hidden layers amounts to a linear expansion of its inputs. The general model usually has three layers (one of them hidden) and a single output, and is represented by the following equation:

$$\begin{aligned} o(x) = f\left( w_0 + \sum _{j=1}^q w_j \cdot f\left( w_{0j} + \sum _{i=1}^n w_{ij} x_i\right) \right) \end{aligned}$$
(1)

where \(w_{ij}\) \((i=1,2,\ldots ,n;\ j=1,2,\ldots ,q)\) and \(w_j\) are the synapse weights, n is the number of input nodes, q is the number of hidden nodes and f is the activation function. Defining an ANN predictive model consists of determining, by means of learning algorithms, the weights that best fit the real data (Khashei and Bijari 2010).
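
To make the notation concrete, the following short R sketch implements the feedforward pass of Eq. (1) for a 5:q:1 topology. The logistic activation and the random toy weights are our own assumptions for illustration, not values taken from the paper.

# Minimal sketch of the feedforward pass in Eq. (1) for a 5:q:1 network.
# The logistic activation is an assumption; the paper does not fix f here.
logistic <- function(z) 1 / (1 + exp(-z))

ann_output <- function(x, w0, w_out, W_hidden, w0_hidden, f = logistic) {
  # x         : numeric vector of n inputs
  # w0        : output-layer bias
  # w_out     : weights w_j from the q hidden neurons to the output (length q)
  # W_hidden  : q x n matrix of weights w_ij from inputs to hidden neurons
  # w0_hidden : biases w_0j of the q hidden neurons (length q)
  hidden <- f(w0_hidden + W_hidden %*% x)   # integration + activation, hidden layer
  f(w0 + sum(w_out * hidden))               # integration + activation, output neuron
}

# Toy usage with random weights for a 5:2:1 topology
set.seed(1)
ann_output(x = c(3, 2, 4, 1, 2),
           w0 = 0.1, w_out = runif(2),
           W_hidden = matrix(runif(10), nrow = 2), w0_hidden = runif(2))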

ANN’s topology selection

This process encompasses selecting the number of hidden layers, as well as the number of input, hidden and output neurons in each layer. Since model creation is driven by the data, we first outline the shape of our time series. Our dataset, which is openly available (footnote 6), consisted of (1) a training set with the number of coffees prepared in each one-hour slot (starting at 7.00 a.m. and ending at 7.00 p.m.) over 18 working days (weekends excluded); (2) a test set containing the same type of data for the 5 subsequent consecutive working days; and (3) 5 further working days of real data used to assess the models’ performance.
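
As an illustration of how such a dataset could be arranged for the experiments that follow, the R sketch below builds a 12-slot-by-day matrix and splits it into the three parts just described. The long-format column names and the synthetic Poisson counts are assumptions; the public dataset’s exact format is not reproduced here.

# Sketch of arranging hourly coffee counts into a 12 x (number of days) matrix
# and splitting it as described above; synthetic data for illustration only.
set.seed(42)
raw <- data.frame(day     = rep(1:28, each = 12),      # 18 + 5 + 5 working days
                  slot    = rep(1:12, times = 28),     # 7.00-19.00 in one-hour slots
                  coffees = rpois(28 * 12, lambda = 2))

counts   <- xtabs(coffees ~ slot + day, data = raw)    # rows: hour slots, columns: days
train    <- counts[, 1:18]                             # (1) training set: 18 working days
test     <- counts[, 19:23]                            # (2) test set: 5 consecutive days
held_out <- counts[, 24:28]                            # (3) 5 further days for evaluation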

Taking into account that one hidden layer is sufficient to model any piece-wise continuous function, as stated by Hornik et al. (1989), we chose to use only one hidden layer in our model. Regarding the input layer, we modelled the neural network with five input variables (i.e. five input neurons) representing the number of coffees counted in the same hour slot over five consecutive working days (Monday to Friday). In the output layer we used a single neuron, which represents the predicted number of coffees likely to be prepared in the same hour slot as that of the input nodes.

The idea of the proposed model is to feed it with a window of 5 working days of data (i.e. 12 vectors of 5 values each) to obtain the next-day forecast (i.e. a vector of 12 values), slide the window forward one day, including the previously predicted day, and perform the prediction again (see Fig. 6). To clarify, if we feed the model with the 5 values corresponding to the coffees prepared in a given hour slot from Monday to Friday, we expect to obtain the forecasted number of coffees for the following Monday in the corresponding slot. If we do the same from Tuesday of the previous week to Monday of the current week, we expect to obtain Tuesday’s forecasted number of coffees. Sliding the window again yields the Wednesday forecast, and the process is repeated until the whole week has been predicted.

Fig. 6 The sliding window of 5 working days of data used to predict the next-day coffee-intake forecast. The process is repeated until the whole week is completed
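
A compact R sketch of this sliding-window procedure is shown below. It is our own illustration of Fig. 6, assuming a 12-slot-by-day counts matrix and a generic one-step predictor; it is not the authors’ actual code.

# Sketch of the sliding-window procedure of Fig. 6. `counts` is assumed to be a
# 12 x D matrix of coffees per hour slot (rows) and working day (columns);
# `predict_slot` is any one-step model mapping 5 past values of a slot to the next one.
forecast_next_week <- function(counts, predict_slot, horizon = 5) {
  window <- counts[, (ncol(counts) - 4):ncol(counts)]    # last 5 observed days
  forecasts <- matrix(NA, nrow = nrow(counts), ncol = horizon)
  for (day in seq_len(horizon)) {
    forecasts[, day] <- apply(window, 1, predict_slot)   # one value per hour slot
    window <- cbind(window[, -1], forecasts[, day])      # slide: drop oldest, append prediction
  }
  forecasts
}

# Toy usage with a naive "mean of the window" predictor
counts <- matrix(rpois(12 * 18, lambda = 2), nrow = 12)  # stand-in for 18 days of data
forecast_next_week(counts, predict_slot = mean)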

ANN’s training phase

To select the most accurate model to compare with ARIMA, we tested three training configurations differing in the number of hidden neurons: 5:2:1 (two hidden neurons), 5:5:1 (five hidden neurons) and 5:10:1 (ten hidden neurons). The network is trained using resilient backpropagation as the learning algorithm. The training set is composed of the coffees counted in each hour slot during 18 working days; we therefore train the ANN with 36 input vectors of five values each (15 days) and 36 output vectors of one value each (3 days). In this phase the ANN iteratively computes the output for each given input, measures the difference between the predicted and the given output (i.e. the error) and uses this error to adjust the weights. The three model configurations were each trained for 10 epochs using the same training set and learning algorithm.
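
As a hedged illustration of this training setup, the R sketch below uses the neuralnet package, whose default learning algorithm is resilient backpropagation (“rprop+”). The data-frame layout, the synthetic data and the use of rep = 10 as a stand-in for the paper’s 10 epochs are our assumptions, not the authors’ code.

# Sketch of the training phase with the 'neuralnet' R package (rprop+ by default).
library(neuralnet)
set.seed(7)

# train_df is assumed to hold the 36 training examples: five lagged hourly
# counts (d1..d5) and the observed count for the following day (target).
train_df <- as.data.frame(matrix(rpois(36 * 5, lambda = 2), ncol = 5))
names(train_df) <- c("d1", "d2", "d3", "d4", "d5")
train_df$target <- round(rowMeans(train_df))   # synthetic stand-in for next-day counts

fit_ann <- function(hidden_neurons) {
  neuralnet(target ~ d1 + d2 + d3 + d4 + d5,
            data = train_df,
            hidden = hidden_neurons,     # 2, 5 or 10 hidden neurons
            algorithm = "rprop+",        # resilient backpropagation
            rep = 10,                    # repeated trainings (assumed analogue of 10 epochs)
            linear.output = TRUE)
}

models <- lapply(c(2, 5, 10), fit_ann)   # the 5:2:1, 5:5:1 and 5:10:1 configurations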

Testing phase and model selection

The last phase is the testing session. The test set is composed of the coffees counted over one week. As the neural network model provides only one output, each test session is run repeatedly by applying the sliding-window approach of Fig. 6.

To determine the best performing structure, we calculated the different prediction errors for each of the models: RMSE, MAPE, MAE and MASE (Table 4). For each neural network configuration these metrics were computed from the predicted values and the remaining five days of real data in our dataset.
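
For completeness, the four metrics can be computed as in the short R sketch below. This is a standard formulation (with the usual in-sample naive scaling for MASE) rather than the exact code used in the paper, and hour slots with zero coffees would need special handling in the MAPE term.

# Standard definitions of the four comparison metrics; not the authors' exact code.
# 'actual' and 'predicted' are vectors for the evaluation days; 'train' is the
# training series used to scale the MASE by the one-step naive forecast MAE.
error_metrics <- function(actual, predicted, train) {
  e <- actual - predicted
  naive_mae <- mean(abs(diff(train)))      # in-sample naive forecast MAE (MASE scale)
  c(RMSE = sqrt(mean(e^2)),
    MAE  = mean(abs(e)),
    MAPE = 100 * mean(abs(e / actual)),    # undefined for slots with zero coffees
    MASE = mean(abs(e)) / naive_mae)
}

# Toy usage
error_metrics(actual    = c(3, 2, 5, 4, 1),
              predicted = c(2.5, 2.2, 4.1, 3.6, 1.4),
              train     = rpois(60, lambda = 2))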

For all evaluated indexes, ANN_2 (two hidden nodes) yields smaller values than the other ANNs. Moreover, assuming a Gaussian error distribution for our predictions, we used the root mean squared error (RMSE) to determine the best performing structure, as suggested by Chai and Draxler (2014). Table 4 shows that the RMSE of the testing error is 1.6278 for the 5:2:1 configuration, whereas it is 2.3614 and 2.0953 for the structures with five and ten hidden nodes, respectively. The smallest RMSE indicates the best neural network configuration. We can therefore conclude that, for our time series data, the most accurate predictive model for forecasting the weekly usage of a coffee machine was a network with two nodes in the hidden layer (Table 4).

Table 4 Comparison of forecasting models

ARIMA outperforms ANN 5:2:1

In a previous publication by the authors we performed a forecasting-model selection to find the model that best fitted our specific time series data (footnote 7). The selected model was an ARIMA(3,1,1)(2,0,2), i.e. an ARIMA model with a seasonal part (also called SARIMA). This model was chosen because it presented the smallest Akaike’s Information Criterion (AIC) and because its residuals showed no autocorrelation. These two criteria indicate that the model provides an adequate fit to the data.
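
The R sketch below shows how such a seasonal ARIMA(3,1,1)(2,0,2) could be fitted and the two criteria checked with the forecast package. The seasonal period of 12 (twelve one-hour slots per working day) and the synthetic series are our assumptions, not the authors’ original code.

# Sketch of fitting the selected seasonal ARIMA(3,1,1)(2,0,2) and checking the
# two selection criteria (AIC and residual autocorrelation); illustrative only.
library(forecast)
set.seed(3)

y <- ts(rpois(18 * 12, lambda = 2), frequency = 12)     # stand-in for the training series
fit <- Arima(y, order = c(3, 1, 1),
             seasonal = list(order = c(2, 0, 2), period = 12))

AIC(fit)                                                 # Akaike's Information Criterion
Box.test(residuals(fit), lag = 24, type = "Ljung-Box")   # no residual autocorrelation expected

fc <- forecast(fit, h = 5 * 12)                          # next-week (5 working days) forecast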

To compare how the models handle the series’ autocorrelated structure, we confronted the ARIMA error measurements with those of the three ANN configurations previously discussed. According to Table 4, ARIMA has the lowest error for each of the metrics calculated. In view of these results, we consider ARIMA the most suitable forecasting technique for the coffee-maker appliance’s usage.


About this article


Cite this article

Casado-Mansilla, D., López-de-Armentia, J., Ventura, D. et al. Embedding intelligent eco-aware systems within everyday things to increase people’s energy awareness. Soft Comput 20, 1695–1711 (2016). https://doi.org/10.1007/s00500-015-1751-0


Keywords

  • Eco-aware everyday things
  • Persuasive eco-feedback
  • Energy awareness
  • Machine learning
  • Time series
  • ARIMA models