1 Introduction

Human–computer interaction has evolved massively during the last decade, leading to the emergence of more collaborative interaction systems. In this regard, exploiting eXtended Reality (XR) can make human–computer interaction more collaborative, so that the interaction between the human and the computer takes place in the same way human–human interaction does (de Belen et al. 2019a, b, c). Virtual Reality (VR), which lies at one end of the virtuality continuum of Milgram's taxonomy (Milgram and Kishino 1994), immerses the user in a digital world, simulates the virtual environment, and allows users to interact with their digital surroundings through a VR head-mounted display, controllers, and their input system. Augmented Reality (AR), by contrast, enhances the real-world environment by overlaying digitally rendered perceptual information without completely blocking the user off from the real world (Azuma 1997). Although the terms Augmented Reality and Mixed Reality (MR) are often used interchangeably, according to the Reality–Virtuality Continuum of Milgram (Milgram et al. 1995), MR is a broader term that covers anything between the real and virtual environments on the virtuality continuum, including AR. MR technology lets developers create a world where the virtual and physical environments meet, interact, co-exist, and blend seamlessly. It is believed to be one of the most effective and commonly used mediums for human–human-like interfaces, in which computers can provide the same kind of collaborative information (Ens et al. 2019) that people have in face-to-face interactions, such as communication by object manipulation, voice, and gesture (Wexelblat 1993).

Mixed reality applications have demonstrated considerable potential in recent years and have encountered increasing acceptance in multiple domains due to their proven benefits (Evans et al. 2017). The possibility of superimposing virtual objects on the physical world and of interaction between different realities, in addition to the perceived environmental input, has proved MR to be beneficial and supportive in activities whose tasks require continuous and interactive guidance and assistance. Attaching artificial computer-generated objects makes it possible to enrich the physical environment with informative, comprehensive, and context-aware virtual instructions, with visual and voice feedback, on how to complete a task. This can be game-changing in the fields of smart homes, Ambient Assisted Living (AAL), and the Internet of Things (IoT) (Skarbez et al. 2021), where continuous and context-aware help and support have to be provided by a system that is aware of its real-world surroundings.

The smart home (Harper 2006) is a combined technological paradigm incorporating different methods and techniques to represent all the house devices in a network of smart, ubiquitous, and pervasive objects. This distributed and interconnected network of smart objects is capable of monitoring, communicating, exchanging, and transmitting data through IoT protocols (Dohr et al. 2010). The domain of IoT has seen immense research and technological advancement, including advances in embedded systems, wireless sensor networks, and wearable technologies; yet the current challenge for IoT platforms in the post-PC era is to provide more engaging and immersive interfaces for more intuitive interactions, since the landscape of computer interfaces has also improved alongside other technological innovations to become more engaging and ubiquitous (Gubbi et al. 2013).

In this context, smart and assistive services can be deployed to support the inhabitants, including older adults and people with disabilities, fostering their safety and making independent living at home longer and easier. The smart home has gained increasing attention as an excellent solution to improve comfort, well-being, and quality of life among the aging population by leveraging the IoT, Ambient Intelligence, and Context-Awareness (CA). Taking advantage of the pervasiveness and ubiquity of smart objects, such a domestic environment is capable of sensing and measuring the physical environment and of actuating it based on the information received, exploiting IoT sensors and actuators. Moreover, with recent advances in artificial intelligence techniques in the context of IoT, AAL for the aging population has become smarter than before, since the system can rely on data coming from environmental sensors and wearables to detect critical conditions in real time and trigger the environment accordingly.

According to a report published by the United Nations in 2020 (United Nations 2020), most countries in the world are experiencing growth in the proportion of their population over 65, and the number is expected to double in the next 30 years. As the aging population continues to grow, developing a solid health care infrastructure that can deliver continuous care and support for older adults is a substantial necessity in terms of social and economic welfare. In this context, the primary phenomenon to focus on is caring for older adults safely at home for a longer period of time instead of in nursing homes, which has proven to be more socially valuable and cost-effective (de Belen and Bednarz 2019). In this regard, many experts believe that Assistive Technologies that are adaptive and rehabilitative can be a reliable solution suitable for everyday use (Leo et al. 2017). Assistive technology equipment is becoming more prevalent in our environment to increase, maintain, or improve the functional capabilities of older adults and people with disabilities (Gamberini et al. 2006). Ambient Intelligence is an emerging discipline that brings intelligence to the environment and makes it sensitive, adaptive, and responsive to people's needs and preferences. During the last decade, ambient intelligence has been strengthened vastly as a result of tremendous technological advances in pervasive computing, the IoT (Giusto et al. 2010), and artificial intelligence (Cook et al. 2009). Considering that older adults are more prone to developing chronic conditions that may affect their cognitive and motor functions, smart homes incorporating IoT and ambient intelligence are believed to be a promising way to build an AAL environment that maintains older adults' health, safety, independence, and well-being.

This work describes an enhanced release of “HoloHome,” an MR application designed and implemented on the first generation of Microsoft HoloLens (Microsoft 2023a) to present a new means of interaction for the inhabitants of the smart home, allowing them to manage and regulate the domestic environment and smart home components effectively and intuitively. It also supports the inhabitants in performing their Activities of Daily Living (ADL) (Katz 1983) more conveniently and independently, thus fulfilling the AAL requirements for older adults and people with disabilities. HoloHome was born within the Italian project “Future Home for Future Communities (FHfFC)” (Future Home for Future Communities Project 2019), in which the “house of the future” was developed by integrating multiple technological paradigms to promote inhabitants' comfort, safety, and independence while providing them continuous help and support in performing their daily activities. The “house of the future” has been implemented inside STIIMA's Living Lab in Lecco (Lombardy, Italy), where HoloHome has been exploited as an interface to control it.

The previous release of this application has already been discussed (Mahroo et al. 2019); there, 3D objects were augmented and placed on top of the real environment exploiting Vuforia (Vuforia 2023), a library that uses computer vision to recognize and track image markers and overlay virtual graphics on top of the real world. The brand-new release of HoloHome discussed in this paper, however, deploys an alternative method of positioning and relocating virtual objects in the physical environment, exploring the spatial mapping techniques provided by the Microsoft Mixed Reality ToolKit (MRTK) (Microsoft 2023b) to reap the benefits of Microsoft's full MR functionality, thus enabling interaction between virtual and real objects. This new environment unlocks the links between human, computer, and environment in a hybrid of realities. It can capture environmental data such as a person's position in the real world (head tracking), surfaces and boundaries (spatial mapping), ambient lighting, and object recognition and location to enhance the perceived reality and narrow the gap where the artificial world ends and the physical world starts. Moreover, this new version of HoloHome is capable of adapting to the different health conditions and disabilities of the inhabitants. It also introduces smart features and functionalities that can be used as assistive technologies, particularly designed for the aging population, who tend to suffer from age-associated memory deficits (reduced cognitive ability) or limited mobility, to help them foster their independent living at home for a longer period. Finally, this paper assesses the usability of HoloHome to evaluate the effectiveness of the proposed system in promoting healthy and independent living at home among the inhabitants.

The remainder of this paper is organized as follows: Sect. 2 highlights some of the remarkable research in the fields of smart home, IoT, XR, and AAL while setting the related context in the combination of these technologies in the proposed system; Sect. 3 presents the architecture of the HoloHome with its features and functionalities; Sect. 4 investigates the usability of HoloHome application with a preliminary validation test on healthy users; and finally, Sect. 5 summarizes the main outcomes of the HoloHome and the envisioned future works.

2 Related work

Over the last few decades, there has been a vast amount of research and technological advancement in the field of smart homes (De Silva et al. 2012) and IoT (Li et al. 2015). Many technology historians believe the concept of the smart home and home automation originated with Nikola Tesla's invention of remote controls more than a century ago (Eseosa and Promise 2014); however, it has seen massive advances and pervasiveness over the last few decades. Smart homes provide inhabitants with home automation, comfort, convenience, security, and energy efficiency. Many researchers have already discussed smart home technologies, architectures (Li et al. 2018), benefits, and challenges (Wilson et al. 2017). In particular, Malche and Maheshwary (2017) have provided a typical smart home architecture and functions based on IoT, while another study (Mahroo et al. 2018) discusses a smart home based on the IoT and Semantic Web Technologies (Baader et al. 2005).

The concepts of IoT and the smart home are also among the main topics discussed in AAL, where continuous care and support must be enabled for older adults and people with disabilities to maintain their independent living in a safe and comfortable domestic environment. Dohr et al. (2010) have applied the IoT to the AAL environment to help seniors in their daily routines and enable a new form of communication between older adults and their environment. A study by Rashidi and Mihailidis (2012) suggests that the emergence of AAL technologies is inevitable due to the rapidly aging society. As a result, assistive technologies (Pramod 2022) are increasingly gaining attention to provide special services for older adults (Cook and Polgar 2014), whose perception, cognition, and motor skills change in parallel with the aging process (Czaja et al. 2019).

Since 1999, when Ashton coined the term IoT (Ashton 2009), the paradigm has seen rapid technological advances, especially in wearable technologies, embedded and ubiquitous devices, sensors, and actuators (Shao et al. 2019), as well as in the relevant privacy and security factors (Sicari et al. 2015; Weber 2010). IoT has already been applied in various fields such as smart homes, smart buildings, health care, transportation, industry, agriculture, manufacturing, and automation, to the point where some researchers believe it is approaching viable mainstream usage (Gubbi et al. 2013).

However, over the last decade, with advances in XR technologies, research in IoT has shifted toward a stronger focus on user experience, design, and human interaction, within a landscape of interfaces that have become more engaging, immersive, and ubiquitous (Shao et al. 2019). Unlike traditional IoT platforms, which are based on dashboard systems accessible from computers or mobile devices (Lee et al. 2019), the current challenge for IoT platforms in the post-PC era is to provide more engaging and immersive interfaces for more intuitive interaction methods (Alce et al. 2019). Alce et al. have conducted several studies to understand different interaction models and how they affect IoT interfaces in AR (Alce et al. 2017) and VR (Alce et al. 2014) environments. Taking advantage of AR/MR technologies and immersing the user in superimposed virtual objects instead of traditional dashboards can elevate the IoT to a whole new era of human interaction and context-based information (Blanco-Novoa et al. 2020). However, compatibility and interoperability between heterogeneous devices remain a challenge to be addressed (Croatti and Ricci 2017; Jo and Kim 2019).

Mixed reality devices and technologies have improved significantly during the last few years and gained more attention than ever before. These improvements have reduced commercial prices and expanded availability and interest across consumer domains. Research on MR and AR has already been applied in various fields such as education (Pan et al. 2006), entertainment (Stapleton et al. 2002), gaming (Rashid et al. 2006), healthcare (Sahija 2022; Viglialoro et al. 2021), industry (Moser et al. 2019), and engineering (Rodriguez et al. 2015). Quint et al. (2015) proposed an MR learning environment for workers within the factory following the Industry 4.0 vision (Kagermann et al. 2013). Another work, by Chen et al. (2017), provided a general view of the recent developments and challenges in medical MR. Regarding industry and engineering, there have been several studies, especially on employee training and assembly, such as those by Wang et al. (2016) and Sand et al. (2016).

In addition, different researchers have tackled the combined paradigm of MR with IoT and smart homes (Park et al. 2018). Lee et al. (2019) introduced a framework to integrate an MR device, namely the Microsoft HoloLens glasses, into the OneM2M-based IoT platform through a RESTful API. Another work, by Jo and Kim (2016), investigated a scalable IoT platform where AR devices can be tracked and detected automatically. Recently, Blanco-Novoa et al. (2020) studied the interoperability of AR devices with IoT platforms and proposed a framework that allows them to communicate with each other dynamically in a hybrid-reality user experience, making the smart home environment more natural, realistic, and context-aware. An MR application to support domestic environment reconfiguration is also discussed by Spoladore et al. (2017) to facilitate performing the Activities of Daily Living through a smart home simulator.

Moreover, the usability needs and design strategies for MR and IoT interfaces for the aging population are addressed by de Belen and Bednarz (2019). Rashid et al. (2017) investigated smart cities and their inclusion of all citizens, including older adults and people with disabilities; the authors discuss how IoT and AR can improve accessibility for such citizens. In another study, de Belen et al. (2019a, b, c) discussed how wearable assistive technologies enable older people to improve their interactions with MR and IoT in order to assist them in daily activities such as analyzing the environment, searching for objects, and navigating within the house. With the release of the Microsoft HoloLens smartglasses in 2016, researchers started investigating the use of the HoloLens in different contexts. An experimental study was conducted by Liu et al. (2018) to evaluate the HoloLens performance quantitatively. Evans et al. (2017) assessed the HoloLens functionalities on a guided assembly instruction. Further research by Jang and Bednarz explored the possibility of connecting microcontrollers to HoloLens interfaces for exploiting the IoT and smart homes: HoloSensor (Jang and Bednarz 2018) proposed an IoT framework with a HoloLens interface and sensors connected to Arduino microcontrollers.

Acknowledging the complexity of these multiple research paradigms, this work provides a holistic approach combining an MR interaction system with an IoT-based smart home environment. HoloHome is a straightforward and easy-to-use MR application implemented on the Microsoft HoloLens to regulate and control smart home features and components, serving as a home controller interface that embraces different users, including older adults and people with disabilities. The proposed system can also be utilized as an assistive technology for the aging population within the domestic environment, helping them manage their smart homes. It can support them in their daily activities to foster their independence, safety, comfort, and energy saving, while it is also poised to support people with limited mobility and reduced cognitive functions. On the one hand, HoloHome is capable of providing home automation and domestic comfort through the HoloLens interface, thus helping people with limited mobility to perform their daily activities without moving near the actual device. On the other hand, it provides seniors and people with reduced short-term memory specific functionalities to support them in performing their Activities of Daily Living, thus maintaining their health, comfort, and independence.

3 The smart home architecture

In the context of smart homes, the possibility of manipulating smart devices through a natural and realistic human–computer interaction plays a pivotal role. Moreover, the feasibility of inserting visual hints, in the form of virtual objects, into the user's Field Of View (FOV)—i.e. the observable angle of the virtual world the user can see through the HoloLens—brings a significant advantage in the AAL environment. In this regard, this work proposes a home controller MR interface that lets the inhabitants interact with the smart home's features and components in the same natural way people interact in face-to-face communication—such as hand manipulation, voice, gesture, and eye contact (head–gaze).

HoloHome provides a customizable and adaptable system to maximize comfort and usability within the smart home environment, considering diverse groups of people including healthy users, older adults, and people with mild disabilities, thus fulfilling AAL and context-awareness requirements. It can be adapted to the specific needs of different users in terms of interaction methods and domestic comfort preferences. This MR application is designed to distinguish between two inhabitants who share the smart home, understand the different inhabitants' needs, and adapt and customize the system to improve their indoor comfort.

Additionally, HoloHome exploits the IoT system, allowing the inhabitants to manage and control the smart home. This is accomplished by installing a network of microcontroller-enabled sensors and actuators within the domestic environment and utilizing IoT gateways for data exchange and transmission.

The architecture of HoloHome is composed of three core elements: the hardware and physical devices, the software (HoloHome MR interface), and the communication protocols that allow data transmission between the hardware and software components.

3.1 Hardware

HoloHome's physical environment is established in STIIMA's Living Lab, which encompasses the main home equipment and appliances (a refrigerator, a washing machine, a dishwasher, a pantry, a table, and a few cabinets), with their functionalities simulated via MR. The HoloHome physical environment is equipped not only with proper home appliances but also with the necessary smart and ubiquitous devices (nodes), i.e. sensors and actuators, to enable the exchange of data between the physical environment and the MR application. These smart nodes are microcontroller-enabled sensors and actuators forming a network of interconnected and interacting devices, generating data streams transmitted through active transponders to the proper receivers exploiting IoT. HoloHome deploys three connected sensors covering four indoor comfort metrics: the AM2320 digital temperature and humidity sensor, the TSL2561 digital luminosity/lux/light sensor, and the Adafruit SGP30 air quality sensor breakout (product 3709) for VOC and eCO2.

In addition, a pair of MR smartglasses is required to allow the inhabitants of the “house of the future” to interact with their smart home's features and components through the MR application. For this project, the Microsoft HoloLens (first generation) was chosen for its benefits and versatility. The HoloLens is a fully untethered, see-through holographic computer with a pair of translucent screens as its eyepieces, which allows the injection of holograms—virtual computer-generated objects—into the user's line of sight and blends them into the real environment. The HoloLens features an Inertial Measurement Unit (IMU) (which includes an accelerometer, gyroscope, and magnetometer), four “environment understanding” sensors (two on each side), an energy-efficient depth camera with a 120° × 120° angle of view, a 2.4-megapixel photographic video camera, a four-microphone array, and an ambient light sensor. The HoloLens is able to pin 3D animated holograms to the real world and blend the virtual and real objects to provide a new reality in which the user can create, communicate, and interact with virtual objects as if they were part of the physical world. Moreover, it can recognize the surrounding environment with the help of its built-in sensors and keep track of the spatial mapping of the physical environment.

In this way, HoloHome provides a new means of interaction with the smart home, allowing the inhabitants to manage the smart devices within the domestic environment through the associated holograms in the MR application. It works as a home controller MR interface that understands the physical environment and aligns the virtual objects to the real environment in order to create the ultimate sense of presence for the users.

3.2 Software

The HoloHome MR application is designed and developed with the Unity 3D engine (Unity 2023), exploiting the Universal Windows Platform (UWP) (Microsoft 2023c) and MRTK (Microsoft 2023b). Unity 3D is one of the leading cross-platform development engines, with highly effective 2D/3D rendering of rich visual graphics and C# as its scripting language. It also natively supports the HoloLens as a UWP platform, which makes the development process faster and easier. MRTK, in turn, operates as an extensible framework that provides the basic building blocks of HoloLens development, featuring primary components such as the input system, UI controls, spatial mapping, and speech recognition.
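
For illustration, MRTK's speech input builds on Unity's KeywordRecognizer (UnityEngine.Windows.Speech). The following minimal sketch shows how recognized phrases can be mapped to callbacks; the command strings and handler bodies are illustrative assumptions, not HoloHome's actual vocabulary:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech; // UWP/HoloLens speech recognition APIs

public class VoiceCommandManager : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private readonly Dictionary<string, System.Action> commands =
        new Dictionary<string, System.Action>();

    void Start()
    {
        // Hypothetical commands; the real HoloHome vocabulary is larger.
        commands.Add("turn on the TV", () => Debug.Log("TV on"));
        commands.Add("help", () => Debug.Log("Showing the command list"));

        // Register the phrases and start listening.
        recognizer = new KeywordRecognizer(new List<string>(commands.Keys).ToArray());
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // Dispatch the recognized phrase to its associated action.
        if (commands.TryGetValue(args.text, out var action))
            action.Invoke();
    }

    void OnDestroy() => recognizer?.Dispose();
}
```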

Exploiting these technologies, HoloHome aims to improve the user experience by making parallel realities coincide, so that virtual and real physical objects align with each other in location, orientation, and scale. This method significantly enhances the reality the users experience, since there is a physical object to touch whenever the user reaches for a virtual one, supporting the concept of Hyper-Reality (Woolley 1993). This feature helps to provide visual hints and support based on the user's proximity to devices, displaying a proper list of virtual instructions when the user is in the vicinity of each object. The overall panoramic view of HoloHome and how the virtual and real objects are aligned with each other are depicted in Fig. 1 through the lenses of the HoloLens.

Fig. 1
figure 1

The overall panoramic view of the HoloHome application through the lenses of the HoloLens

3.2.1 Spatially registered assistive technologies

One of the critical features of the HoloLens is its ability to capture and analyze the real-world environment and its surroundings using spatial mapping, allowing developers to implement more realistic MR applications that enhance the interaction between the virtual and physical worlds with seamless integration. Spatial mapping provides a mesh representation of the real-world surfaces in the environment using the HoloLens depth cameras inherited from the Kinect. Knowing the spatial map of the domestic environment, HoloHome maps the virtual objects onto the physical environment with a precise layout. For instance, HoloHome uses an invisible mesh representation of the floor to place the virtual objects, i.e. the house furniture and appliances, smoothly on top of it. It also helps the inhabitants locate real devices and appliances within the smart home by means of visual hints such as virtual arrows, which give directions toward the intended object even when the object is not currently in the user's field of view. Additionally, these visual hints include text boxes, images, and graphics such as coloring, circling, or blinking an object.
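
As a minimal sketch of this floor-placement idea (assuming MRTK's default spatial-mesh layer and illustrative distances; not HoloHome's actual code), a hologram can be snapped to the scanned floor with a simple raycast against the spatial-mapping mesh:

```csharp
using UnityEngine;

public class FloorPlacement : MonoBehaviour
{
    // Layer used for spatial-mapping meshes (31 is MRTK's default; an assumption here).
    [SerializeField] private LayerMask spatialMeshLayer = 1 << 31;

    // Drops a hologram onto the real floor below a desired position.
    public void PlaceOnFloor(Transform hologram, Vector3 desiredPosition)
    {
        // Cast a ray downward from above the desired spot against the scanned mesh.
        Vector3 origin = desiredPosition + Vector3.up * 2.0f;
        if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit, 5.0f, spatialMeshLayer))
        {
            // Snap the virtual object to the detected floor surface.
            hologram.position = hit.point;
            hologram.up = hit.normal;
        }
    }
}
```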

In addition, inhabitants can interact with the smart home's objects and appliances through holograms projected in the proper geospatial position. While it may be straightforward and intuitive for some people to find and operate the devices within the smart home, managing and regulating some devices can be extremely challenging for others. Older adults and people with mild cognitive impairments are more prone to short-term memory deficits; thus, they may face difficulties remembering the sequence of steps to complete a task or how to set up and use devices such as the fuse box, video intercom, air conditioner (AC), or washing machine. Therefore, HoloHome offers features and functionalities that provide hints and clues in the form of holographic text, graphics, and spatial voice commands to assist the users in locating and operating the home devices faster and more easily. Relying on the spatial mapping system, HoloHome places step-by-step visual instructions into the user's field of view, ensuring the correct geospatial location and alignment of the virtual guide with respect to the associated real device as the user walks nearby.
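
A minimal sketch of such proximity-triggered guidance is shown below (the distance threshold and object references are illustrative assumptions): the instruction panel, spatially anchored next to the real device, is shown only when the user's head position comes within range:

```csharp
using UnityEngine;

// Shows a step-by-step instruction panel when the user approaches the
// (spatially anchored) position of a real appliance, and hides it otherwise.
public class ProximityInstructions : MonoBehaviour
{
    [SerializeField] private Transform appliancePosition;  // anchored next to the real device
    [SerializeField] private GameObject instructionPanel;  // holographic step-by-step guide
    [SerializeField] private float triggerDistance = 1.5f; // metres (an assumed threshold)

    void Update()
    {
        // Camera.main follows the HoloLens head pose, i.e. the user's position.
        float distance = Vector3.Distance(Camera.main.transform.position,
                                          appliancePosition.position);
        instructionPanel.SetActive(distance < triggerDistance);
    }
}
```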

These virtual real-time instructions have been designed with older adults with short-term memory deficits or mild cognitive impairments in mind, empowering them to optimize their mental and physical well-being while maintaining a degree of independence. The following is the list of functionalities designed to assist the inhabitants of the smart home, especially older users, in performing their Activities of Daily Living (a minimal sketch of the time-based reminder logic follows the list):

  • Reminding users with virtual graphics and voice commands about their upcoming calendar events, such as medical appointments or train tickets, provided the inhabitant has already entered the event in the virtual calendar in advance;

  • Reminding users with virtual graphics and voice commands to leave their keys on a specified table at the entrance when arriving home, so they will be able to find them more easily when leaving the house;

  • Informing the inhabitant about an unpleasant indoor temperature or humidity rate and helping them operate the AC to reach their preferred indoor comfort, with virtual instructions placed next to the real AC and thermostat;

  • Reminding the inhabitant to take their medicines at the right time according to the predefined daily schedule, with visual hints (text box, graphics) toward the real medication box and a voice command;

  • Helping the user avoid dehydration by providing graphical hints and voice commands reminding them to drink enough water on a regular basis;

  • Supporting the user in operating and regulating complex devices such as the thermostat, fuse box, video intercom, or washing machine, with visual hints in the form of text and graphics toward the associated real object, since users often face difficulties remembering the sequence of operations;

  • Providing a list of all the HoloHome voice commands and related services, letting the inhabitants easily exploit all the available features whenever they say “help.”
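
As referenced above, the time-based reminders can be sketched as follows (the schedule source, dose time, and hint objects are illustrative assumptions rather than HoloHome's actual implementation):

```csharp
using System;
using UnityEngine;

// Fires a reminder (visual hint plus voice feedback) when a scheduled time is reached.
public class MedicationReminder : MonoBehaviour
{
    [SerializeField] private GameObject reminderHint; // text box / arrow at the medicine box
    private readonly TimeSpan doseTime = new TimeSpan(9, 0, 0); // e.g. 09:00, from the user's schedule
    private bool reminded;

    void Update()
    {
        if (!reminded && DateTime.Now.TimeOfDay >= doseTime)
        {
            reminderHint.SetActive(true); // visual hint toward the real medicine box
            reminded = true;              // fire once; the daily reset is omitted for brevity
        }
    }
}
```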

The usability of the aforementioned functionalities has been assessed and is reported in Sect. 4.

3.2.2 User adaptability and customization

One of the novelties behind the HoloHome MR interface is the possibility of adapting the system to various users with different health conditions, preferences, and disabilities. The concept of user adaptability in this framework is twofold: first, envisioning the MR application for different end-users with various health conditions and preferences; and second, handling multiple inhabitants within the same household environment. The former made HoloHome anticipate special smart services and various means of interaction—hand gesture, clicker, voice command, or head–gaze—to include diverse groups of people with various health conditions. The latter led HoloHome to exploit the idea of multiple inhabitants living in the smart home with a tailored MR environment and the possibility of customizing the comfort preferences based on the specific user who is wearing the HoloLens.

In order to comply with the principles of the AAL environment and include older adults and people with mild disabilities, HoloHome foresees multiple interaction methods with the smart home—the possibility of performing the same task with the hand gesture, the clicker (a clicker device that comes with the Microsoft HoloLens), voice command, or even just head–gaze in case of motor or speech impairments that prevent the user from using the other interaction methods. These four interaction methods are inherited and implemented from the Microsoft MRTK. The hand gestures include two different gestures, air-tap and bloom, while the clicker is a small remote pointer device paired to the HoloLens via Bluetooth. Voice commands are also integrated within the HoloLens, allowing speech recognition for easier hands-free control of the smartglasses. Head–gaze, as it exists in the first generation of HoloLens, is a special case of gaze that involves targeting an object with the user's head direction to indicate where the user's attention is focused (Microsoft 2023a). The inhabitants of the smart home are free to interact with the HoloHome MR interface with any of the four above-mentioned interaction methods based on their preferences and health conditions. However, the head–gaze option is activated only for users with particular health conditions that prevent them from using their hands or voice. This may include people with upper limb motor impairments who face difficulties performing hand gestures, people with fine motor disabilities of the hand who struggle to push the clicker button, and people with speech impairments who are not able to use voice commands. Moreover, implementing a feature to activate or deactivate the head–gaze allows the inhabitants to explore the environment freely without unintentionally triggering buttons and interactable interfaces by looking at them for a few seconds. A minimal sketch of such a dwell-based head–gaze selector is shown below.
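
This dwell-based behavior can be sketched as follows (a custom illustration with assumed parameters; MRTK also provides built-in gaze pointers that an application like HoloHome can rely on instead):

```csharp
using UnityEngine;

// Dwell-based head-gaze selection: if the head ray stays on an interactable
// object for a set time, it counts as a click. Enabled only when the stored
// user profile requires it.
public class DwellGazeSelector : MonoBehaviour
{
    [SerializeField] private float dwellSeconds = 2.0f; // assumed dwell threshold
    public bool gazeEnabled;                            // set from the user's stored profile
    private GameObject currentTarget;
    private float dwellTimer;

    void Update()
    {
        if (!gazeEnabled) return;

        Transform head = Camera.main.transform; // HoloLens head pose
        if (Physics.Raycast(head.position, head.forward, out RaycastHit hit, 5.0f))
        {
            if (hit.collider.gameObject == currentTarget)
            {
                dwellTimer += Time.deltaTime;
                if (dwellTimer >= dwellSeconds)
                {
                    // Treat the completed dwell as a click on the gazed object.
                    hit.collider.SendMessage("OnGazeSelect",
                                             SendMessageOptions.DontRequireReceiver);
                    dwellTimer = 0f;
                }
            }
            else
            {
                currentTarget = hit.collider.gameObject; // new target: restart the timer
                dwellTimer = 0f;
            }
        }
        else
        {
            currentTarget = null;
            dwellTimer = 0f;
        }
    }
}
```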

The second aspect of user adaptability in HoloHome revolves around managing and regulating a smart home in which a couple lives together; the system can thus adapt itself to each inhabitant. HoloHome enables the customization of four comfort metrics that can be sensed and measured via the indoor sensors and customized by the inhabitant: indoor illuminance, air quality, temperature, and humidity rate. Each user's name, personal comfort preferences, and a Boolean value indicating a disability that requires them to use head–gaze are collected through a questionnaire the first time they launch the application and stored in a JSON file; default values are used if the user does not fill out the form to define specific preferences. To ensure the inhabitants' privacy, all the personal data gathered from the users, such as preferences and health conditions, are stored anonymously under a unique ID. The users are free to set a name or a nickname for themselves or to continue with a random ID the application gives them. Once the inhabitant wears the HoloLens and opens the application, it asks the user to authenticate by saying their name with a voice command. The current user then states the name (or nickname/random ID) stored in the registration phase and receives a vocal confirmation message indicating that the smart home has been adjusted to the user in charge. A visual display loads the user's upcoming calendar events, while in the background HoloHome retrieves the comfort preferences of that inhabitant from the JSON file. These comfort preferences include the current user's preferred indoor temperature, humidity, and luminosity for the current room, which may differ among the inhabitants. Moreover, if the user has a special health condition that requires the head–gaze interaction method, HoloHome activates the head–gaze option so that inhabitants with upper limb motor disabilities or speech impairments can also benefit from the application.
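
A minimal sketch of how such a per-user profile could be serialized with Unity's JsonUtility follows; the field names are illustrative assumptions, not the actual HoloHome file format:

```csharp
using UnityEngine;

// Per-user profile persisted as JSON (illustrative fields).
[System.Serializable]
public class UserProfile
{
    public string userId;              // anonymised unique ID (or chosen nickname)
    public float preferredTemperature; // degrees Celsius
    public float preferredHumidity;    // percent
    public float preferredLuminosity;  // lux
    public bool requiresHeadGaze;      // true for users who need the head-gaze method
}

public static class ProfileStore
{
    // Deserialize a profile loaded from the JSON file.
    public static UserProfile Load(string json) =>
        JsonUtility.FromJson<UserProfile>(json);

    // Serialize a profile back to JSON for storage.
    public static string Save(UserProfile profile) =>
        JsonUtility.ToJson(profile, prettyPrint: true);
}
```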

HoloHome fetches the domestic comfort data from the embedded sensors almost constantly—precisely every minute, as a compromise between accuracy and MR graphical performance. It then compares the actual measurements coming from the sensors with the user's preferences and triggers the MR environment by prompting an action suggestion to operate the proper actuator whenever the actual environmental data do not conform to the user's preferred values.
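
A minimal sketch of this polling-and-comparison loop on the Unity side is shown below (the endpoint URL, plain-text response format, and tolerance are illustrative assumptions):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Polls a sensor node once per minute and prompts the user when a reading
// drifts from the stored preference.
public class ComfortMonitor : MonoBehaviour
{
    [SerializeField] private string sensorUrl = "http://192.168.1.50/temperature"; // hypothetical node
    [SerializeField] private float preferredTemperature = 21.0f; // from the user's JSON profile
    [SerializeField] private float tolerance = 1.5f;             // assumed comfort band, in degrees C
    [SerializeField] private GameObject acSuggestionPanel;       // MR prompt next to the thermostat

    IEnumerator Start()
    {
        while (true)
        {
            using (var request = UnityWebRequest.Get(sensorUrl))
            {
                yield return request.SendWebRequest();
                // Parse the plain-text reading; an empty/failed response simply fails the parse.
                if (float.TryParse(request.downloadHandler.text, out float measured))
                {
                    // Suggest acting on the AC only when outside the comfort band.
                    bool uncomfortable = Mathf.Abs(measured - preferredTemperature) > tolerance;
                    acSuggestionPanel.SetActive(uncomfortable);
                }
            }
            yield return new WaitForSeconds(60f); // one-minute polling interval
        }
    }
}
```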

3.3 Communication

In order to accomplish the idea of the smart home and home automation between ubiquitous and heterogeneous devices, the system needs to leverage a robust IoT network. The “house of the future” relies on a solid network of smart and ubiquitous devices that collectively form a network of interconnected and interrelated sensors and actuators to sense, measure, collect, and exchange environmental data between the physical environment and the MR interface. MR has proven to be one of the most effective solutions for visualizing the IoT dashboard in a more engaging and immersive manner, diminishing human distraction and enhancing human–computer interaction.

HoloHome acts as an immersive and interactive interface to manage, control, and regulate the smart devices within the smart home. It captures the environmental data—such as current temperature, humidity rate, luminosity, or air quality—from the sensors within the smart house, prompts the user with a proposed action via the MR interface, and sends the user's decision to the proper actuator to trigger the physical environment and maintain the domestic comfort. The conceptual architecture of the HoloHome infrastructure, its connection with microcontroller-enabled sensors and actuators, and their interaction system are illustrated in Fig. 2. HoloHome is connected to a network of WiFi-enabled smart nodes, which collectively produce data streams to be transmitted to the proper receivers through the network protocol. The “house of the future” exploits Arduino (Arduino 2023a) microcontrollers equipped with the Arduino WiFi shield (Arduino 2023b)—for serial-to-WiFi data transmission—to enable the sensors and actuators to transmit and exchange data. Each Arduino board is programmed in C++ using the Arduino IDE (Arduino 2023c) to enable data transmission to/from the associated sensor connected to the board.

Fig. 2
figure 2

The conceptual architecture of HoloHome, its connection with the microcontroller-enabled sensors/actuators, and their interaction system, which retrieves environmental measurements and user preferences and gives suggestions/warnings to the user to trigger the environment

In particular, HoloHome exploits an Arduino Uno to connect a real lamp to the MR interface, allowing the inhabitants to control the lighting within the smart home through the virtual home controller. The user may change the indoor lighting through the HoloHome virtual buttons, inserted next to the real lamp, with a hand gesture, voice command, clicker, or head–gaze.
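
On the application side, a virtual button handler of this kind might forward the user's choice to the lamp's Arduino node over HTTP, as in the following sketch (the endpoint and query format are illustrative assumptions):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Handler for the virtual light buttons: pressing a hologram button sends an
// HTTP command to the Arduino node driving the real lamp.
public class LampController : MonoBehaviour
{
    [SerializeField] private string lampNodeUrl = "http://192.168.1.60/lamp"; // hypothetical node

    // Wire these to the virtual buttons' on-click events (tap, voice, clicker, gaze).
    public void TurnOn()  => StartCoroutine(SendCommand("on"));
    public void TurnOff() => StartCoroutine(SendCommand("off"));

    private IEnumerator SendCommand(string state)
    {
        using (var request = UnityWebRequest.Get($"{lampNodeUrl}?state={state}"))
        {
            yield return request.SendWebRequest();
            Debug.Log($"Lamp command '{state}' sent, HTTP {request.responseCode}");
        }
    }
}
```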

Another Arduino board is connected to the comfort sensors, i.e. those for indoor temperature, humidity rate, air quality, and luminosity. The temperature and humidity sensor measures the real-time temperature and humidity rate of the domestic environment and transmits the data via the WiFi shield to HoloHome for further decisions. After receiving the current measurements and comparing them with the user preferences coming from the JSON file, if the system detects an undesirable indoor value, it warns the user via the MR interface and suggests turning the air conditioning system on/off to maintain the preferred indoor comfort. In a similar procedure, the air quality and luminosity of the domestic environment are sensed and measured by the air quality and luminosity sensors, respectively, and the data are transmitted to HoloHome through the internet. The MR interface then informs the inhabitant by inserting a warning/suggestion into their field of view, together with action suggestions in the form of graphics and voice commands, to maintain indoor comfort according to the user's decision.

4 The usability evaluation on healthy users

Usability is one of the crucial aspects of any newly developed technology and must be investigated and analyzed properly. Its evaluation can help developers assess the effectiveness of, and satisfaction with, the system among its users while collecting suggestions for improvement.

This evaluation of HoloHome was performed on a sample of healthy adults with the goal of collecting information, suggestions, and feedback on the usability of the system for further improvements and future modifications, before further assessments with senior users and people with disabilities. A sample of 10 healthy volunteer adults, three men and seven women with an average age of 36 years (range 25–64), was chosen; this is suggested to be an adequate sample size (N = 10) for this purpose (Nielsen 2012). The experiment took place in the STIIMA Living Lab in Lecco, with all of the participants being Italian and coming from the same geographical region (Lombardy, Italy).

4.1 Measures

In order to evaluate the usability of HoloHome, the following quantitative and qualitative measures were taken:

  • Task analysis: a methodology that allows the experimenter to identify, quantify, and prioritize the problems users face while using the system by observing them as they interact with it (Rosala 2020). Subjects are asked to complete specific tasks within the MR environment, and the number of errors committed is quantified. In this case, users were asked to use a random interaction method for each task to observe the emergence of different plausible issues. Nonetheless, each participant was given the opportunity to try all three interaction methods with HoloHome (hand gesture, voice command, and clicker).

  • Think-aloud protocol: while the users' performance is being observed, information on their activities, thoughts, and difficulties is collected following the “think aloud” protocol. Following this method, participants think aloud while performing the assigned tasks, expressing whatever comes to their minds, including what they are watching, what they are doing, and how they feel. This allows the experimenter to collect further qualitative data, noting difficulties and problems related to usability, as suggested by Lewis and Rieman (1993).

  • Structured interview: at the end of the test, subjects were given a structured interview based on a modified version of the System Usability Scale (SUS) (Sauro 2018), a questionnaire of ten usability items evaluated on a 5-point Likert scale in which the users are asked to rate their level of agreement on a scale ranging from “0–totally disagree” to “4–totally agree.” The modified questionnaire—already proposed in the literature for assessing immersive VR/AR/MR systems (Moosburner et al. 2019)—consists of 15 items, where the additional items concern ergonomics, unpleasant physical sensations (cybersickness), visual clarity, the field of view, and the effectiveness of the gesture command. At the end of the interview, users were asked to consider the three interaction methods and to list them in order of preference, explaining the reasons for their answers.

4.2 Protocol

In order to assess the usability of HoloHome, a set of activities was chosen as a hypothetical daily scenario for the smart home inhabitants. The proposed protocol foresaw the user's entrance, authentication, and interaction with different devices and appliances within the smart home as the MR interface triggered the user's field of view.

The participants were welcomed to the lab, and the purpose of the evaluation and the ultimate goal of HoloHome were explained to them. All the participants read and signed a written informed consent form prior to the experiment. The users were informed that they would be asked to perform various tasks as requested by the guiding voice (the experimenter or the HoloHome voice command) during the experiment. The users were not limited in the time within which to complete the tasks, and they were free to express aloud any thoughts and considerations that came to their minds.

Participants were expected to try all three interaction methods designed to be evaluated in this pilot study, i.e. the clicker, hand gesture, and voice command. The head–gaze technique, however, as explained in Sect. 3.2, is enabled only for users with a specific health condition that requires them to use the application with this option. As a result, the head–gaze interaction method is not included in this study, since all the subjects were healthy adults and HoloHome does not automatically activate the gaze option for them.

In order to prevent unfamiliarity with the HoloLens device from influencing the number of errors, all users were given a few moments to practice HoloHome's three interaction methods and explore the room. For this purpose, after the experimenter launched the HoloHome application on the HoloLens, helped the user put on the device, and checked the correct view of the environment, each subject was asked to interact with the virtual “television” in the HoloHome application using all three methods of interaction (hand gesture, voice command, and clicker) until they felt confident about using the HoloLens and interacting with the system. The training time was flexible depending on each participant and how quickly they felt ready to start the experiment, but all the users stated their confidence to start the test within five minutes of training.

The proposed protocol of the study includes a series of sequential tasks to complete within the HoloHome application:

Authentication: the user must authenticate themselves through the associated interface by declaring one of the two preset user names using a voice command. A virtual box appears in the user's field of view with text and a voice prompt asking them to say their name in order to retrieve their personal data.

Entrance: when users walk into the house, they receive an automatic visual and audio signal inviting them to leave their keys on the entrance table next to the door. The experimenter asks the user to close the alert window once the keys have been placed correctly. A virtual box appears next to the entrance table with text and a voice command, incorporating an arrow toward the table to show the user where to leave their keys.

Air conditioner: an automatic voice command warns the user about an undesirable indoor temperature, higher than the user's preferred setting. A visual alert window with the warning text and corresponding action buttons is also displayed next to the thermostat. The experimenter asks the subject to check the temperature and humidity rate on the thermostat and turn the AC on/off as they prefer.

Television: the experimenter asks the user to stand in front of the virtual television and interact with it by performing a series of sub-tasks: turning the TV on, switching to the next channel, switching back to the previous channel, pausing the transmission, resuming the transmission, and turning the TV off. To perform these tasks, a virtual interface is placed next to the virtual TV, incorporating multiple buttons for turning the TV on and off, switching between channels, and pausing and resuming the transmission.

Medication: an automatic voice command and a visual alert with written text next to the real medicine box remind the user when it is time to take their medication. The experimenter asks the user to approach the medicine box and close the warning window when they are done taking the pills.

Lighting: the user should turn on the real lamp in the room using the virtual buttons available in HoloHome to control the room lighting, and then turn it off again.

Video intercom: the video intercom rings and alerts the user with acoustic and visual signals directed toward the real intercom. The user is asked to check the video intercom, open the door using the graphical instructions next to the intercom, either by pressing the button or with a voice command, and welcome the guest.

At the end of the test, the investigator interviewed each user, as previously explained.

4.3 Results, limitations, and assumptions

Although this experiment aims to evaluate the usability of the HoloHome application rather than the HoloLens device, the usability of the HoloLens has a significant impact on the user experience while wearing it and using its interaction methods. The Microsoft HoloLens is one of the best commercial solutions for MR applications; however, it brings several drawbacks and limitations, including a heavy headset, a small field of view, limited hand gestures, and speech recognition that is quite susceptible to bias against certain groups of users. Some of these constraints have already been discussed in (Evans et al. 2017) and (Munsinger et al. 2019); this experiment, however, aims to understand the HoloLens limitations, distinguish them from HoloHome's usability factors, and discuss the interplay between the two.

It is also worth noting that some of the aforementioned HoloLens issues are addressed and mitigated to some degree in the Microsoft HoloLens 2 (second generation) (Microsoft 2023d): a minor weight reduction in the headset, a slightly wider field of view, additional hand gestures and tracking, and an improved speech recognition system are reported to have been implemented (Paez 2019).

Another factor to consider is that almost all the participants used the HoloLens device for the first time in this experiment, with the exception of two users who had limited prior experience. The novelty of the device and its interaction methods—shifted considerably with respect to traditional technologies such as mobile interfaces and GUIs—introduce a slight bias against novice HoloLens users, as it was observed that the users who had some prior experience with the HoloLens encountered fewer errors.

Finally, wearing the HoloLens device around the house for daily activities might seem inconvenient at the moment, but the main assumption in this work is that, in the very near future, MR glasses will be as light and comfortable as ordinary prescription glasses.

4.3.1 Task analysis

All ten subjects were able to complete all the proposed tasks, except for one case that had to be interrupted before the “video intercom” task due to an internet disruption; the process was resumed afterward. Although the experiment did not impose a time limit, all the participants finished the activities within 15 min. Furthermore, all the participants managed to use all three interaction methods designed to be validated in this study (the clicker, hand gesture, and voice command). Considering the mistakes made in completing the different tasks, the voice command turns out to be the least functional, as the system does not always recognize the user's voice commands, especially for users who are not native English speakers. In fact, none of the subjects (N = 3) who were asked to use the voice command to “switch the TV channel” was successful, even after trying five times, and they had to use another interaction method to complete the task. Two out of four subjects were unable to “turn off the TV” with the voice command, and two out of four subjects managed to “turn on the TV” vocally only after three failed attempts. Additionally, two out of three subjects experienced the same problem while performing the task “turn on the AC,” with two attempts each. Among the three subjects who used the voice command to “turn off the lamp,” one took two attempts, while another took four. One out of four subjects needed five attempts to complete the tasks “turn on the lamp” and “close the medicine box” with the voice command. In addition, half of the sample had to repeat the user authentication task several times, since the participants' pronunciation typically carried an Italian accent rather than American English, and the HoloLens speech recognition system was unable to understand the voice command. The details of the voice command usability are reported in Table 1, where N represents the number of subjects who were asked to perform the associated task using the voice command. The participants' IDs and the number of attempts they had to make to complete the task are also reported. Note that the tasks in which no participant made any error are not reported.

Table 1 The usability and effectiveness of the interaction method “Voice Command” to complete each task

As for the hand gesture, some subjects encountered problems and made mistakes in performing the tasks; generally, it took each participant a few trials of the air-tap gesture before it worked. Specifically, for the “turn on the TV” task, two out of three subjects made mistakes with the hand gesture: the first with just one unsuccessful attempt, and the second with four trials before completing the task. One out of three subjects had to repeat the air tap twice to complete the task “pause the TV,” two out of four subjects made just one mistake in performing the “turn off the TV” activity, and only one subject out of three made a mistake in the task “turn on the lamp.” The details of the hand gesture (air-tap) usability are reported in Table 2, where N represents the number of subjects who were asked to perform the associated task using the hand gesture. The participants' IDs and the number of attempts they had to make to complete the task are also reported. Note that the tasks in which no participant made any errors are not reported.

Table 2 The usability and effectiveness of the interaction method “Hand Gesture” to complete each task

The problem was consistently due to the incorrect positioning of the hands (which must be in front of the HoloLens camera to be detected) or the user's wrong execution of the air-tap gesture, which improved after a few attempts and more training. The clicker device provided by Microsoft was found to be the most functional method of interaction, and no one made any mistakes using it.

4.3.2 Questionnaire’s results

The means and standard deviations of each item of the modified SUS questionnaire have been analyzed and are reported in Table 3. Note that questions marked with an asterisk have a negative meaning, so a higher grade corresponds to lower user satisfaction. The data indicate that the users expressed a neutral judgment with respect to the intention to use the application. As emerged from the users' comments, this judgment is largely influenced by the device (HoloLens) and not by the application (HoloHome) itself: in particular, the uncomfortable feeling of wearing the HoloLens, the problem of wearing prescription eyeglasses while using the HoloLens, the limited field of view, and the difficulty of interacting with the system through the different interaction methods, especially speech recognition, weighed heavily on the users' judgment. The HoloHome application, on the other hand, was found to be easy to use; its functions were reported to be well integrated, and the participants felt confident about how to use HoloHome during the test.

Table 3 The mean and standard deviations for each item of the modified SUS questionnaire

Considering only the first ten items of the questionnaire, and therefore the standardized SUS questionnaire, the scores yield an average of 71.5 ± 10.62 out of a maximum possible score of 100 as defined by the SUS creator (Brooke 1996). This value, according to the SUS scale evaluation (Sauro 2018), represents a good usability score. The SUS score of each HoloHome participant and how the individual scores fall above or below the HoloHome average and the benchmark average SUS score are illustrated in Fig. 3. All the participants were assigned a random unique ID, such as S1, to preserve their privacy.
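
For reference, with the 0–4 anchors used here, a single respondent's score on the ten standard items follows Brooke's usual scoring rule (restated, not modified): odd, positively worded items contribute their raw rating, even, negatively worded items contribute the complement, and the sum is scaled to the 0–100 range:

\[
\mathrm{SUS} = 2.5 \left( \sum_{i \in \{1,3,5,7,9\}} s_i \; + \sum_{j \in \{2,4,6,8,10\}} (4 - s_j) \right)
\]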

Fig. 3
figure 3

HoloHome SUS score for each participant (assigned a random unique ID), visualizing how each falls above or below the HoloHome mean SUS score and the benchmark average SUS score

Observing the correlations between the modified SUS questionnaire's items, it emerges that ease of use correlates negatively with the need for support (r = −0.67, p = 0.034) and positively with the recognition of the commands (hand gesture, speech recognition, and clicker) (r = 0.74, p = 0.014). The need for support also correlates positively with negative sensations (r = 0.92, p = 0.000) and negatively with ergonomics (r = −0.70, p = 0.025), the clarity with which objects were seen (r = −0.95, p = 0.000), and command recognition (hand gesture, speech recognition, and clicker) (r = −0.70, p = 0.025).

Ergonomics correlates positively with the clarity of object identification (r = 0.63, p = 0.046), while cybersickness correlates negatively with the system's command detection (r = −0.67, p = 0.033) and with the clarity with which objects are identified (r = −0.89, p = 0.001).

Finally, inconsistency in the system correlates positively with the need to learn many procedures before becoming autonomous in the use of the application (r = 0.88, p = 0.001).

4.3.3 Preference on the interaction methods

In order to rank the three interaction methods (hand gesture, voice command, and clicker), a score was given to each based on how much the users appreciated it (first preference = 2 points; second preference = 1 point; third preference = 0 points). According to this ranking, the clicker is the most popular with 13 points, followed by the hand gesture with 10 points, and lastly the voice command with 8 points. Asked to motivate their preferences, the subjects reported that the clicker always worked and was immediate and easy to learn: it “does not require finger positioning.” On the other hand, the participants who preferred the hand gesture stated that “it is better because you first identify the command and then act, I trust the result more, and I noticed it always worked” and that “the hand gesture is more natural even with respect to the voice command, even if it would be uncomfortable when my hands are full.” Nonetheless, another participant commented that the hand gesture did not always work and was frustrating; moreover, “If I had cognitive difficulties, positioning my hand for the gesture would be hard.” This participant also added that using only the gesture command can be physically tiring for some people.

The voice command obtained the lowest score, but the participants’ comments suggest that it could be the most preferred interaction method if there were no language barriers: “the voice command would be more convenient, but it does not always work, I would prefer the voice option had it recognized the Italian language,” “the voice command is easier, but it should be in the mother language,” and “I prefer the voice command because it is immediate and I do not have to aim at anything with my head or hands.” Another subject added that he preferred the voice command but would probably not be satisfied anyway because “I would feel stupid talking to myself.”

4.3.4 Free comments

During the execution of the test and the structured interview, the spontaneous comments made by the subjects were recorded by the investigator and subsequently divided into thematic categories. The macro-categories that emerged are: “visual problems,” “problems with interaction methods,” “message display,” “ergonomics,” “application functionality,” “adaptation with the end-user,” and “general comments.” All of these comments, grouped by category together with available or future solutions, are reported in Table 4.

Table 4 The qualitative usability results: selected comments from the participants and proposed solutions

In the first category, the subjects described problems related to visual aspects; this happened in particular for the two subjects who had to wear prescription glasses under the HoloLens. As a result, the HoloLens could not be aligned perfectly with their eyes, and for both subjects this caused an image-centering problem that consequently also affected their interactions, since the HoloLens could not align the user’s head ray toward the objects. It is also worth noting that the size and shape of the eyeglasses are important factors: the user with smaller eyeglasses managed to fit the HoloLens on top of them, but the user with bigger eyeglasses had much more difficulty aligning the holograms (“it is difficult to aim the pointer on the right button to click because I think I am not aligned with the image”). Three people found it difficult to identify objects due to the limited field of view (“it took me a while to understand that the TV was actually a TV because I could not see it all at once,” “I had to get down a lot to see below”), but once an object was identified, they could see it well. One person expressed her difficulty in integrating the real object with the virtual one, and another subject declared that “looking down is much more disorienting and uncomfortable than looking in any other direction.”

As regards the category “problems with interaction methods,” almost all of the subjects commented that they would have preferred to use the voice command, but this was difficult because the system could not detect their voice commands easily; as one subject pointed out, “the pronunciation in English is complex.” One subject reported, “if the system understood the commands immediately, the application would also be easier to use.” Another subject added that this problem “may persist even in the user’s mother tongue, because it seems to me that the system recognizes male tones better than female ones and also older people may have pronunciation problems due to deficits in vocal articulation.” One subject also grew impatient using the hand gesture, as he found it ineffective, and closed the warning window while remarking, “the clicker is better.”

Regarding the “message display” category, three subjects suggested bringing the warning windows closer to the objects they refer to (for example, “the message referring to the keychain is too far from the bowl”). Nevertheless, having many written instructions in the environment turned out to give them more self-confidence about what they had to do to interact with the application.

With respect to “ergonomics,” most of the criticisms were directed at the HoloLens device, which was uncomfortable and has a very narrow field of view (“I gave a low rating to the intention to use because I find HoloLens uncomfortable; otherwise, I would have given a neutral rating”). Moreover, in two cases the subjects felt the need to hold the HoloLens with their hands. Despite the inconvenience, one subject stressed that the device allowed him to “engage in the environment, especially when using the TV.” As for the application itself, the users found the virtual objects realistic and their colors pleasant, although one subject suggested changing the intensity of the white, as “it is too intense and it bothers me.”

As for the “application functionality” category, one user commented, “I know it is a prototype; however, these features seem simple yet compliant with life at home.” Another user expressed her confusion with the functionality of the TV object: “there are two channels, and I would have expected to be able to change them by pressing the ‘up’ button; instead, I had to press ‘down’, which confused me.” A further subject suggested moving the TV controller window under the TV so users do not have to move their heads too much to interact with the object.

Five participants also tried to answer the questionnaire from the older adults’ point of view and commented on the ease of use from the older people’s perspective (the “adaptation with the end-user” category). One subject mentioned, “I did not need support, but I think someone with cognitive deficits would.” At the same time, another subject stated, “it is likely that those with slight deficits will struggle at first, but then they would be able to use it easily; however, if they have serious deficits, they would not be able to use the tool in general.” One user, however, suggested that the tool not being so easy to use could actually support end-users with attention problems, since they would have to pay more attention to complete the objectives. Finally, five recorded comments could not be assigned to any specific category and were therefore classified as “general comments.” One subject declared that the environment “is nice,” and another participant added, “it is nice, but looking down is uncomfortable.” One person reported dizziness at the end of the task, and another claimed that the application’s voice was robotic and should be more human-like.

4.4 Discussion

In general, the participants of this study appreciated the application and expressed their satisfaction with and interest in HoloHome. However, some concerns were raised about the ergonomics and convenience of the HoloLens device. Although the main aim of this study was not to evaluate the device, its usability substantially impacted the usability of the HoloHome application. Participants expressed discomfort with wearing the HoloLens for prolonged periods, which would worsen further for users with prescription glasses, as the device cannot fit properly over them. This issue, and the consequent visual disorientation, must be considered especially for older adults, who almost always need prescription glasses. Furthermore, the narrow field of view of the device was found to confuse some users in the identification of large objects.

Another major concern revolves around the interaction methods and their efficiency, particularly speech recognition, which was found to be the least functional for non-native English speakers and yet the most popular and immediate for those users who managed to use it successfully. However, it is important to note that 80% of the participants of this study had never used the HoloLens before; an improvement in using the device and its interaction methods can therefore be expected with more training time and prolonged use. In the next study on older adults, it would be good practice to spend more time training the users with the HoloLens prior to the experiment, so that their lack of experience with the device does not influence their judgment of the application.

Regarding the HoloHome application itself, it received positive feedback from the participants of this experiment, with a good usability score of 71.5 out of 100, and the only specific suggestion was to bring the written instruction windows closer to the associated objects. This suggestion may also stem from the limited field of view, as users would like to see each object and its instruction window in the same frame without having to turn their heads.

The present study on healthy adults highlighted a few important improvements needed before proposing the system to older users. In addition to allocating them longer training time, the user interface must be easy to use and perceived as useful and safe; otherwise, the technological solution will not be adopted (de Belen et al. 2019a, b, c). The virtual objects should fit within the user’s field of view, and the instruction and alert windows related to each object must be as close to the object as possible to avoid distraction and confusion. Privacy concerns also run deeper among older users than among the younger generation, as they typically tend to be more sensitive about new technologies and how these collect their personal data (Heek et al. 2017).

5 Conclusions

This work presented HoloHome, an MR interface to manage and regulate smart home features and components through IoT protocols. HoloHome is an MR application designed for and deployed on the Microsoft HoloLens (first generation), introducing an innovative means of interaction with the “house of the future,” in which the inhabitants are able to interact with their smart home more conveniently and intuitively. The framework aims to provide tailored and customizable domestic comfort based on each individual’s health conditions and preferences. Moreover, it supports various groups of inhabitants, including healthy users, older adults, and people with disabilities, in performing their Activities of Daily Living by providing continuous support, instruction, and practical assistance, thus promoting the inhabitants’ comfort, safety, independence, and well-being. The proposed system is equipped with a set of guidelines and instructions that can be useful to all inhabitants but can particularly support people with limited mobility and reduced cognitive functions. Finally, the usability and effectiveness of the proposed MR application were evaluated through a modified version of the SUS questionnaire on a small sample of healthy people, who found HoloHome a promising tool with easy-to-use and well-integrated functions. The standardized SUS questionnaire reports a usability score of 71.5 ± 10.62 out of 100, representing a good level of perceived usability.

As future work, the upcoming version of HoloHome will be developed and deployed on the second generation of the Microsoft HoloLens, to investigate the new features of the latest device and evaluate the reported improvements. This work also foresees expanding the target user groups to include specific types of disabilities and adapting HoloHome and its assistive techniques accordingly. The possibility of semantically enriching the data on the inhabitants’ health conditions and preferences, so that HoloHome can reason over the semantic data and infer each inhabitant’s optimal environmental values, is also under investigation. Finally, the application will be evaluated with a sample of older adults and people with mild disabilities to better understand the perceived usability and acceptance of the system from their point of view. The usability issues and comments that emerged in this pilot study with healthy adults allow the system to be improved prior to exposing more fragile users to the new technology. This will also make it possible to focus on other aspects strictly related to aging users, such as their attitudes toward new technology in general and the application in particular, and any privacy and security concerns they may have.