A Cloud Robotics Solution to Improve Social Assistive Robots for Active and Healthy Aging

Technological innovation in robotics and ICT represents an effective solution to tackle the challenge of providing socially sustainable care services for the ageing population. The recent introduction of cloud technologies is opening new opportunities for the provisioning of advanced robotic services based on the cooperation of a number of connected robots, smart environments and devices, enhanced by the huge computational and storage capabilities of the cloud. In this context, this paper aims to investigate and assess the potentialities of a cloud robotic system for the provisioning of assistive services for the promotion of active and healthy ageing. The system comprised two different smart environments, located in Italy and Sweden, where a service robot is connected to a cloud platform for the provisioning of location-based services to the users. The cloud robotic services were tested in the two realistic environments to assess the general feasibility of the solution and demonstrate the ability to provide assistive location-based services in a multiple-environment framework. The results confirmed the validity of the solution but also suggested a deeper investigation of the dependability of the communication technologies adopted in such systems.


Introduction
The number of Europeans over 60 years of age will increase at a rate of two million per annum, while the working-age population will shrink because of the low EU birthrate [1]. As a result, in 2060 there will be one retired senior for every two persons of working age (aged 15-64) [2]. The aging process causes a physiological decrease of the motor, sensory and cognitive abilities of people, who then may have trouble remembering, learning new things, concentrating or making decisions about everyday life. Most older people are affected by one or several chronic diseases requiring several medicines and the periodic monitoring of their health parameters. This will increase the demand for nurse practitioners (+94 % in 2025) [3] and physician assistants (+72 % in 2025) [4], with several implications for the quality of care and for the configuration of future cost-effective care delivery systems. Furthermore, one in six of all 74 million elderly people now living in Europe is at risk of poverty [5], and the number of elderly persons living alone will continue to increase. Moreover, most EU seniors want to remain in their familiar environment and to live as independently as possible [6], even if affected by age-related limitations.
At the same time, the social and economical sustainability of a safe and independent aging of the elderly is expected to be one of the next post-crisis challenges. For this reason, an aging society could benefit from the use of intelligent agents, such as smart homes and service robots, to assist in fulfilling the daily needs of the elderly [7][8][9][10]. In Europe this approach is called ambient assisted living (AAL) [11], after the joint programme that has funded and is funding several projects with this focus. According to several studies [9,12,13] the main needs of the elderly can be summarized as:
-to live in their own home, maintaining their autonomy, independence and quality of life, but in a safe and secure context;
-to be active and participate in community life in order to reduce their sense of loneliness and general negative feelings;
-to retain control over their own life even when they need care and assistance;
-to increase the attention of doctors and caregivers to their health.
As older people spend more time in their homes [6], smart living systems, sheltered houses and 'age-friendly' environments could be fundamental tools to help seniors live independently, correctly manage their health care, delay or avoid institutionalization and stay active as long as possible. Robots could play a fundamental role in augmenting the utility and the efficiency of such technologies and services [7] because, in addition to their ability to provide physical support to older persons, they can also cooperate and interact with them [14][15][16], thus facilitating their care and making the therapeutic process more enjoyable [17].
A social assistive robot for AAL can thus provide services of great utility for medication management, care and appointment reminders [18], monitoring of vital signs and the environment for user safety and security [7,19], and can also improve seniors' context awareness and situation recognition to help them and their caregivers in taking daily decisions.
However, the success of these innovative service solutions greatly depends on the level of reliability and acceptability of these tools as perceived by elderly users. These aspects are crucial for the real deployment of these smart services in private homes and residential facilities in the near future. The acceptability of technological devices and services greatly depends on their utility, effectiveness, efficiency, reliability and ease of use as perceived by end users [20][21][22]. Despite their technological complexity, robotic agents can enhance the effectiveness and the efficiency of assistive services, while advanced human-robot interaction (HRI), implementing communication strategies more similar to natural human ones (e.g. speech and gestures), would improve the comfort of use of the entire system. In order to execute effective assistive services and to adopt the most natural interaction approaches, robotic assistants need to consider and elaborate a great deal of environmental and contextual data. Both the performance of the robots and their social behavior can be improved by the recently introduced cloud robotics paradigm [23,24]. Cloud robotics was defined as the combination of cloud computing and robotics. Thus cloud robotics is not related to a specific type of robot, but to the way robots store information and access a base of knowledge. As a matter of fact, cloud computing could give robotic systems the opportunity to exploit user-centered interfaces, computational capabilities, on-demand provisioning services and large data storage with minimum guaranteed QoS, scalability and flexibility. Cloud robotics is expected to affect the acceptance of robotic services, enabling a new generation of smarter and cheaper robots compared to the classic stand-alone and networked robots.
From the technical point of view, the present paper describes a cloud robotic system for AAL implementing the Robot-as-a-Service (RaaS) paradigm, including a service robot integrated with a number of smart agents which exploit the potentialities of the cloud to improve the capabilities of the system and consequently the service performance. In particular, this paper aims to improve the current state of the art in cloud robotics by designing cloud on-demand AAL services, where the connected robot and smart environments cooperate to provide assistive location-based services. The system involves autonomous robots, smart environments and a cloud platform to automatically accomplish the services required by the users. The autonomous robot was able to perform speech recognition using a wearable microphone on the users, to recognize the keywords associated with predefined service requests. Once a request was recognized, the robot retrieved from the proposed cloud system all the information needed to reach the user and perform the service. The ad-hoc services and technologies were designed to leverage the use of the cloud in the AAL domain and bring assistive robotics closer to real cost-effective deployments, while respecting important AAL requirements in terms of dependability and acceptability. The RaaS design improves on traditional server applications at least for the following reasons:
-The elasticity of the cloud allows allocating increasing hardware resources (storage and processing) as the number of connected agents and required services increases, without discontinuity or service faults.
-The resource redundancy of the cloud makes it highly available and more fault tolerant than the classical server approach.
-The cloud can manage a huge amount of simultaneous connections from smart agents for data collection and processing, allowing big-data processing and carrying out learning algorithms in the field of AAL and assistive robotics.

Related Works
Over the last few years, several stand-alone social and assistive robots have been developed to support the elderly and their caregivers in their daily activities. For instance, the Giraff robot (ExCITE Project) [25] was developed to provide tele-presence services and to support elderly persons in communication activity, while the Hobbit robot [26] was designed to detect emergency situations, handle objects and perform activities to enable seniors to stay longer in their homes. Other robotic solutions have been designed to provide remote medical assistance in hospitals or private houses, such as the robotic nursery assistant described by Hu et al. [27], the VGo robot from VGo Communications (New Hampshire, USA) and the RP-VITA robot from InTouch Health (California, USA). These stand-alone robots are in charge of the entire sensing, planning and performance of the tasks and usually require high computational capabilities and expensive technologies, which make them unaffordable for delivering complex services.
In order to effectively and efficiently provide complex assistive services like object transportation and location-based services, several robotic platforms have been designed according to the networked robotics paradigm [28].
Networked robotics leverages wireless sensor agents distributed in smart environments, as well as wearable and personal devices, to reinforce the sensing capability of the robot with external information that could improve the efficiency of the robot's planning and its cooperation with the end users. Sensors distributed in the environment can also improve the safety of the robots by providing the necessary information to avoid risky and dangerous unwanted robot-human interactions. A current limitation of service robots moving in unstructured environments is their inability to detect humans out of their sensing range. This situation may occur whenever an individual is approaching the robot from a direction that is not covered by, for example, the cameras, laser range finders or ultrasonic sensors, which are often used to detect humans, help the robot navigate and avoid obstacles. Similarly, a robot cannot detect the presence of humans beyond walls, which increases the risk of accidents, for example when a robot turns a corner. The use of environmental sensors may improve the robot's situation awareness by providing information about the presence of people in the robot's surroundings, helping robots behave safely. In accordance with this approach, Arndt and Berns [29] investigated the current state of the art of networked robots, and in particular the PEIS-Project [30] and the AmICA smart environment, integrated with a companion robot [31]. They concluded that smart environments could be profitably used to shift some complexity away from the mobile machines to the smart environment without compromising the safety of the overall system. Furthermore, a smart environment could significantly reduce the time for service delivery, by providing the robot with information about the user's position [31].
The opportunity for the early detection and prevention of potentially unsafe interactions between robots and people by leveraging the use of smart environments was also highlighted by Cavallo [15]. The use of distributed or wearable sensors can also improve the usability and acceptance of assistive robots, by providing innovative human-robot interfaces. Recently, some research has focused on the use of wearable brain machine interfaces (BMI) to provide assistive services to impaired users. The BMIs in [32] and [33] were used to control robots or smart homes by observing brain waves and interpreting the user's will. In such systems the user wears cutaneous electrodes to measure the brain waves. Usually the BMI requires a so-called calibration or training phase, where the user is asked to concentrate, for example, on specific actions that will be related to specific control inputs for the BMI and the robotic agent. After the training, the BMI will recognize the user's control inputs, with a success rate that is often less than 95 % even using commercial solutions as in [34]. After the calibration and training, the user can control the connected robots or devices by concentrating on the desired control. The major drawback of such interfaces lies in the low information transfer rate provided by brain waves, the obtrusiveness of the cutaneous electrodes, the need for training and the high level of concentration required to give commands to the BMI. For these reasons, the interfacing of smart environments and robots with wearable BMIs to assist seniors is still not widespread for AAL applications, where the dependability and acceptance of the interface is crucial.
Smart environments can thus be used to enhance the robots' sensing and planning capabilities, improve the HRI, facilitate the tracking and monitoring of patients, and also allow for better and long-term daily activity recognition [35]. Furthermore, the integration of robots in smart environments can provide new opportunities in the assessment of dependability, acceptance and usability. External sensors can provide impartial and additional information with respect to the use of traditional questionnaires or on-board sensors as in [36]. Sensor networks can track the users' positions before, during and after the interaction with robots, to better characterize the entire HRI process. Wearable sensors can extract data on the user's stress level (like the heart rate, heart rate variability or skin conductance) during the interactions. Environmental sensors can give information on the environmental conditions that may affect, for example, the robot's perception (vision, speech recognition) in the task execution, like the lighting condition, the presence of people in the robot's proximity, power outages or the acoustic noise level. This kind of information can be used to better characterize the robots' dependability with respect to environmental factors. Similarly, smart environments provided with indoor localization systems can improve the self-localization or the navigation ability of robots with respect to the current state of the art in [37]. An indoor localization system can provide the initial data on the robot's position after a wake-up, a reset or a fault of the inner sensors or the navigation system. This information can simplify the procedure and reduce the time for the robot to locate itself in the environment, using the embedded sensors like laser scanners, cameras or ultrasound sensors. A key example of such a capability is indoor user localization [15,16,35,38].
In the literature, the opportunity to know the position and pose of the users, as well as the environmental conditions where the robots and humans interact, is considered crucial for implementing socially believable robot behaviors [39], improving the robot's proxemics and the user's comfort in using the robot [40,41].
The information required to provide such services could come from wearable and environmental sensors as well as distributed intelligent agents such as those described in Table 1, according to the networked robotics paradigm and [35,40].
Indoor user localization, in particular, is one of the most challenging requirements for the assistive robotic systems of today. Smart environments are considered an enabling technology to improve robot navigation, provide personal services directly to the users, reduce the time for service delivery and improve safety in case of critical situations. This is because when a robot knows the users' positions in a domestic environment (even out of its sensing range), it can efficiently and safely navigate toward them to provide the proper service to the proper user [15].
Furthermore, the ability of a robot to face the user with a proper pose and at a comfortable, safe and proper distance can positively affect the HRI and the acceptance of the robot [7]. The ability of a robot to efficiently seek the user by exploiting an ambient intelligent infrastructure is still an open scientific challenge [42]. Some recent research in this field has been funded by the European Community's 7th Framework Programme (FP7/2007-2013), like the CompanionAble project [14], the GiraffPlus project [16], and the Astromobile experiment in the ECHORD project [43] [15]. The knowledge of the user's position also facilitates patient tracking and monitoring processes for better and long-term daily activity monitoring, and allows the recognition of critical situations.
However, recent research has mainly focused on the development of robotic solutions for home applications, in a one-robot-one-user interaction model. Very few robotic applications have dealt with the integration of a number of smart environments, users, and robots to provide social and assistive services in different and heterogeneous environments. Even if some recent assistive robots have focused on the support of consumers in crowded shopping malls [44] and wide multi-floor buildings, like the CoBot robots [45], most assistive robots for AAL applications are still designed to carry out services in a "one-robot-one-house" or in a "one-robot-one-user" interaction model. This approach doesn't match the recent trends in housing and social services [46], where the cooperation between seniors and the sharing of goods and services are expected to improve the sustainability of an aging population. One of the few projects in line with this concept is the European project Robot-Era (GA 288899), developing 3D robotic services for aging well, that is, a plurality of assistive services implemented by means of a multitude of cooperating robots integrated with smart environments and acting in heterogeneous spaces, such as homes, condominiums and urban environments [47].
The paucity of examples of social robots for AAL in a multi-user or multi-environment scenario could be due to the limited computing capabilities of these robots and their insufficiency for continuously supporting daily activities [24]. Continuous care support indeed requires the ability to assist a number of users in a variety of heterogeneous environments, and thus to (i) perform complex reasoning, (ii) store a huge amount of data, (iii) provide assistive services fluently and repeatably, and (iv) interact with humans in dynamic, complex and unstructured environments.
The novel cloud robotic paradigm can fit these requirements by extending the concept of networked robotics [23] and enable a new generation of socially believable robots. It has been defined as the combination of cloud computing and robotics, enabling the provisioning of on-demand robotic services to a greater extent than has ever been done before. The service providers can leverage the elasticity of the resources of the cloud to deliver robotic services to users on-demand, regardless of the number of agents and users involved.

Fig. 1 Architecture of the RaaS system. The cloud platform comprises the cloud storage modules (DB and DBMS) and the cloud computing agents (ULM, ESM, HRIM and EMM). The WSNs, service robots and GUI modules are the intelligent agents that interact and communicate with the users to provide the assistive services. Caregivers can remotely monitor the seniors and the environments by connecting to the cloud.
The storage and computational resources of the cloud enable robots to offload computation and perform complex processing, and to share information about users and environments, training data and learning processes [48]. The cloud will provide the resources to efficiently perform challenging robotic services like object recognition and manipulation, as well as social navigation and advanced human-robot interactions. Today's high-throughput mobile communication technologies (e.g. Wi-Fi, WiMAX, 3G and LTE) will ensure high-speed and reliable data exchange for a high quality of the data transmission between the robotic agents and the cloud.
In this context, the present paper aims to advance the current state of the art by designing a Robot-as-a-Service infrastructure able to provide assistive user location-based services, taking into account the requirements for AAL. The proposed RaaS system was designed to assist seniors in a multi-user and multi-environment perspective, using cloud services to improve the abilities of the connected robots. No Wizard-of-Oz functionality was implemented, since the cloud was intended to extend the sensing and reasoning capability of the connected autonomous service robots. In order to make a preliminary assessment of the system's dependability, an experimental set-up was devised to evaluate its reliability and accuracy in delivering robotic location-based services.

System Architecture
The RaaS system was designed to be scalable and fit the requirements for providing robotic services either in the home or in nursing-home environments, in a multi-user and multi-environment vision. It comprises hardware smart agents distributed over heterogeneous remote physical environments and software agents in a cloud platform, as in Fig. 1. The hardware was selected to provide physical and cognitive support to the users and the proper user interfaces for system management and control. In particular, smart environments were instrumented with distributed and wearable sensors to extract as much information as possible on users' positions and the status of the environments. A robot was integrated to provide physical and cognitive support to the users and exploit the robot's embodiment to improve service acceptance. Data storage and processing were performed by software modules in the cloud, as was the GUI for system management and control.
From the caregiver's point of view, the cloud robotic services were designed to simultaneously manage several seniors, regardless of the time and the location of the seniors. As seen in Table 1, a number of sensors were selected to provide the data to improve the performance and the social behavior of the robots. The selected sensors could be used for home monitoring and critical-situation recognition for safe and secure living, as well as for determining users' positions to improve the performance of location-based robotic services (e.g. drug delivery and medication reminder services). Tables 2 and 3 present two possible examples of application scenarios (medication management and the recognition of critical situations), showing the service scheduling and the role of the agents of the RaaS system.

Agents
The hardware agents included in this system were: the service robot, the wireless sensor networks (WSNs) and mobile personal devices such as smartphones and tablets (see Fig. 1). The agents performed machine-to-machine (M2M) and machine-to-cloud (M2C) communications [49] to exchange data between them and the cloud. The M2M communication took place using Wi-Fi and ZigBee protocols and enabled the direct exchange of data among the agents.

Service robot The service robot was developed in the Robot-Era project [39] using a SCITOS G5 platform (Metralabs, Germany) as a basis. The robot was designed to provide both physical and cognitive support to aged users. In particular, it leverages an integrated robotic arm for object manipulation, a tray for the transportation of things and a handle for walking support. It communicates with the user by means of an embedded touch screen and with speakers and microphones for speech synthesis and recognition. The users can communicate with the robot to ask for a service or to control it, by using the embedded microphone or a Bluetooth-connected wearable microphone. In particular, the robot can recognize specific keywords when a user is speaking, corresponding to the commands or the services that the robot can perform. The robot can perform speech synthesis through the speakers to interact with the user; for example, it can remind the user to take a medication or about an appointment. A SICK3000 laser scanner (from Sick AB, Germany) was installed on the front of the robot to detect obstacles and navigate in indoor unstructured environments. An embedded PC collected the data from the robot's sensors, performed path planning, and provided the robot with autonomous navigation and obstacle avoidance abilities. The robot exchanged data with the cloud through a Wi-Fi module to get the information on the user's position and the required services.
Whenever a user required a service, the robot was able to retrieve the user's position from the cloud, autonomously compute the path to reach the user and perform the required service. No camera was used for navigation or for user detection, recognition or localization, in order to comply with the AAL privacy requirements and make this kind of robotic service more acceptable.
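The keyword-driven request recognition described above can be sketched as a simple lookup from spotted keywords to service identifiers. This is an illustrative simplification; the keyword and service names below are assumptions, not those of the Robot-Era robot.

```python
# Hypothetical keyword-to-service mapping; the real robot performs
# speech recognition first and then matches the recognized keywords.
SERVICE_KEYWORDS = {
    "medicine": "medication_reminder",
    "appointment": "appointment_reminder",
    "help": "emergency_call",
}

def match_service(transcript):
    """Return the first service whose keyword appears in the transcript."""
    words = transcript.lower().split()
    for keyword, service in SERVICE_KEYWORDS.items():
        if keyword in words:
            return service
    return None
```

Once a service is matched, the robot would query the cloud for the user's position and plan a path toward it.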
Wireless sensor networks (WSNs) There were two ZigBee WSNs included in this system: one for user localization (LNet) and one for environmental monitoring (SNet). The mesh network topology was implemented for both the SNet and the LNet, to allow the devices to exchange data with each other and achieve more dependable message routing than occurs with the classical star and tree topologies. Multi-hop message routing was enabled to perform data exchange beyond the devices' radio range and extend the services provided by the smart environments over large areas like condominiums and nursing homes. The LNet was designed for multiple-user localization, observing the Received Signal Strength (RSS) [50] of the messages exchanged between the radios. It was composed of a ZigBee Coordinator (ZC), a data logger (DL), a wearable Mobile Node (MN) and a set of ZigBee anchors (ZAs). The MN periodically sent messages to all ZAs within one communication hop. Each ZA computed the RSS, as the ratio between the received and transmitted electromagnetic power [51], on the received messages and transmitted this value to the DL. ZAs were placed in fixed and known positions in the environment; in particular, they were installed on walls and furniture to monitor the most accessed or interesting areas of the rooms and achieve an in-room localization accuracy as suggested in [35]. Each ZA was equipped with a 60° sectorial, horizontally linearly polarized antenna that spotted the workspace on the antenna boresight, while the MN used an embedded omnidirectional, horizontally linearly polarized antenna for data transmission. Sectorial antennas were introduced to improve the signal-to-noise ratio of the RSS observations for the user localization [52]. The LNet was designed to locate up to three users at the same time, in both the Domocasa and the Anghen experimental sites. The network provided data at a refresh rate sufficient to locate a user once every second (1 Hz).
The user-position refresh rate was a trade-off between the number of devices installed in the environment, which must share the same communication medium without interfering with each other, and the number of simultaneously traceable users. That refresh rate was maintained in the range between 0.2 and 2 Hz, which has been considered sufficient for delivering assistive location-based services in the literature [35,53]. The LNet provided the data to implement RSS-based localization algorithms on the proposed cloud platform and locate the users inside its workspace. The SNet was developed for home monitoring and the passive localization of people. It comprised a ZC, a DL, and a set of sensor nodes (SNs). Each SN contained a selection of sensors to improve social assistive robots, as in Table 1, such as Passive InfraRed (PIR) sensors, pressure sensors placed under a chair or a bed, switches on doors or drawers, gas and water leak sensors, and sensors for temperature, humidity and light. The LNet and SNet were set to different channels to avoid interference and ensure the proper bandwidth for the localization and environmental monitoring services. Each DL node was connected to a PC via USB, to upload data to the cloud.
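The RSS computation performed by each ZA, and the way an RSS value can be turned into a range estimate for RSS-based localization, can be sketched as follows. The path-loss parameters (reference loss at 1 m, exponent n) are illustrative assumptions, not values from the deployed system.

```python
import math

def rss_db(p_received_mw, p_transmitted_mw):
    """RSS as the ratio of received to transmitted power, in dB."""
    return 10.0 * math.log10(p_received_mw / p_transmitted_mw)

def distance_from_rss(rss, rss_at_1m=-40.0, n=2.0):
    """Invert a log-distance path-loss model to estimate range in metres.

    rss_at_1m and the path-loss exponent n are assumed, environment-specific
    calibration parameters.
    """
    return 10.0 ** ((rss_at_1m - rss) / (10.0 * n))
```

For example, a received power of 1 µW against 1 mW transmitted gives an RSS of -30 dB, and an RSS 20 dB below the 1 m reference corresponds to roughly 10 m under the assumed model.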

Cloud Platform
In accordance with the RaaS paradigm, the cloud included (i) a storage service comprising a DataBase (DB) and a DB Management Service (DBMS), and (ii) a computing service, comprising a user localization module (ULM), an event scheduler module (ESM), an environmental monitoring module (EMM), a human-robot interaction module (HRIM) and a web application as a graphical user interface (GUI).

DB and DBMS
The cloud DBMS and the DB were designed to store and organize the information for the improvement of the robot's social behaviors and the quality of the service. The cloud DB improves the scalability of the system, because of its huge storage capability and high accessibility. The DBMS manages all the DB entries and queries and ensures privacy and data security by limiting access to authorized users only. The DB comprises a number of tables, collecting data on the status of the monitored environments, the installed sensors, and the users. The DB is conceptually divided into three different parts, to improve the system's scalability over the users and the environments. In particular, it comprises three main entities: one related to the sensors, one to the users, and one to the environments (Fig. 3). The list of the sensors installed in the LNet and SNet was reported in entity S. A unique identification number (e.g. the ZigBee sensor's EUI64) was used as the primary key of a tuple that contains the sensor type (e.g. light, temperature, ZigBee anchor, presence detection), the position in the environment (x,y coordinates), the calibration parameters if needed, and the sensing workspace in square meters. For each ith type of sensor, a specific entity (Mi) collected the sensor output over time. The DB also provided data for multi-environment and multi-user services, and information regarding the users was reported in entity U. The user entries include the information to improve the user-robot interaction and to provide the proper services, like the user's name, age, height, gender and propensity to use robotic services. The Table P recorded the users' positions in terms of x,y coordinates and also included semantic labels to identify the occupied room or area of interest in a human-readable manner. The KF_Matrix entity collects, over time, the Kalman filter state, state covariance and measurement covariance matrices, which are specific for each user.
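The S, U and P entities described above can be sketched as a minimal relational schema; SQLite is used here only for illustration, and the column names are assumptions rather than the original schema.

```python
import sqlite3

# Hedged sketch of the sensor (S), user (U) and position (P) entities.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE S (            -- installed sensors
    eui64 TEXT PRIMARY KEY, -- unique id, e.g. the ZigBee EUI64
    type  TEXT,             -- light, temperature, ZigBee anchor, ...
    x REAL, y REAL,         -- position in the environment
    workspace_m2 REAL       -- sensing workspace in square metres
);
CREATE TABLE U (            -- users
    user_id INTEGER PRIMARY KEY,
    name TEXT, age INTEGER, height_cm REAL, gender TEXT
);
CREATE TABLE P (            -- user positions over time
    user_id INTEGER REFERENCES U(user_id),
    t REAL, x REAL, y REAL,
    room_label TEXT         -- semantic label, e.g. 'kitchen'
);
""")
conn.execute("INSERT INTO S VALUES ('00-11-22', 'ZigBee anchor', 1.0, 2.0, 12.0)")
row = conn.execute("SELECT type FROM S WHERE eui64 = '00-11-22'").fetchone()
```

The ULM can then resolve any incoming observation to a known sensor, its position and its type with a single keyed lookup.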
The alarm table reports the complete list of the alarms that have occurred in daily life. The House and the Room entities report a complete description of the multi-environment context. In particular, they include data on the physical dimensions of the connected houses and of their rooms, and useful semantic human-readable information (e.g. bedroom, kitchen, corridor...). For each room, the DB included the numerical and semantic description of one or more areas of interest. The areas of interest were selected taking into account the European report "How Europeans spend their time" from EUROSTAT [54]. EU residents aged 20 to 74 spend 40 % of their free time watching TV, 18 % socializing and 10 % reading, whereas sleeping takes up almost 35 % of the entire day. Meals and personal care take up to 2 h and 22 min per day, and some of the most time-consuming home activities are performed in the kitchen, cooking and washing dishes (57 min per day). This suggested selecting areas of interest in the kitchen at the sink, stove and table, in the bathroom, near the sofa in the living room, and in the bed areas in the bedroom. In this way the sensors installed in these areas allow monitoring the most accessed areas of the home. As future work, the DB will gain the ability to upload data also from the sensors of the connected robots, to improve the situation awareness of the intelligent software agents in the cloud.

User localization module (ULM) This software module was designed to locate several users in several environments and support robots in sustaining a continual care service. The software acquired data from the heterogeneous commercial and ad-hoc sensors in the connected SNets and LNets, to estimate the users' positions. A sensor fusion approach was implemented to compute the users' positions in a robust and scalable manner. The accuracy and cost of the indoor localization service depend on the type and number of the sensors installed.
In the case of a sensor fault, the user position was estimated by fusing data from the remaining sensors, improving the reliability and robustness of the service. The ULM can simultaneously process data from the connected MNs and locate the users over different environments. The sensor fusion approach was based on a Kalman filter (KF) for user localization. It was implemented exploiting both range-free [55,56] and range-based [57] localization methods, as suggested in [39]. The range-free localization and presence detection methods described in [58] and [59] were used to minimize the impact of installation mistakes and calibration issues on the system's accuracy, according to [60]. The trilateration method introduced in [57] was implemented to improve the localization accuracy in the anchors' neighborhood.
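The KF-based fusion of coarse range-free estimates with finer range-based fixes can be sketched as a position-only Kalman update. The matrix values below are illustrative assumptions, not the paper's actual tuning.

```python
import numpy as np

# Minimal sketch of a position-only Kalman filter fusing two kinds of
# observations, as in the ULM. All covariance values are assumptions.
x = np.array([0.0, 0.0])        # state: user position (x, y) in meters
P = np.eye(2) * 10.0            # state covariance (high initial uncertainty)

def kf_update(x, P, z, R):
    """Fuse one position observation z with measurement covariance R."""
    H = np.eye(2)                        # observation maps state directly
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # corrected state
    P = (np.eye(2) - K @ H) @ P          # corrected covariance
    return x, P

# Coarse range-free estimate (e.g. room-level presence): large covariance.
x, P = kf_update(x, P, np.array([3.0, 2.0]), np.eye(2) * 4.0)
# Finer range-based trilateration fix near an anchor: small covariance.
x, P = kf_update(x, P, np.array([3.4, 2.2]), np.eye(2) * 0.25)
print(np.round(x, 2))
```

If one sensor class fails, its updates simply stop arriving and the filter keeps running on the remaining observations, which is what gives the service its fault tolerance.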
The ULM was designed to be independent from the typology of the connected sensors, leveraging the information stored in the cloud DB. Whenever a sensor provided data to the ULM, the ULM performed a query to the DB, retrieving useful information on the sensor, such as its position, its typology and the unit of measure of the provided data. Once the sensor's observation was recognized, the information was sent to the KF for processing. In this design, the ULM is technology agnostic, and the data for user localization may come from commercial or ad-hoc WSNs, smart devices or IoT agents.
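The technology-agnostic dispatch described above can be sketched as a metadata lookup followed by normalization of the raw reading. The registry dictionary stands in for the cloud DB query, and all identifiers and field names are illustrative assumptions.

```python
# Sketch of the ULM's technology-agnostic dispatch: on each observation the
# module looks up the sensor's metadata (stored in the cloud DB) and
# normalizes the reading before handing it to the Kalman filter.
# The registry and all names below are assumptions for illustration.
SENSOR_REGISTRY = {
    "EUI64:AA01":  {"type": "anchor",   "pos": (1.5, 2.0), "unit": "dBm"},
    "pir_kitchen": {"type": "presence", "pos": (4.0, 3.5), "unit": "bool"},
}

def on_sensor_data(sensor_id, raw_value):
    meta = SENSOR_REGISTRY.get(sensor_id)   # in the real system: a DB query
    if meta is None:
        return None                         # unknown sensor: ignore
    if meta["type"] == "presence" and raw_value:
        # Presence sensors yield the device position with a fixed uncertainty.
        return {"pos": meta["pos"], "var": 1.0}
    if meta["type"] == "anchor":
        # RSS readings feed the range-based estimator (not shown here).
        return {"anchor": meta["pos"], "rss_dbm": raw_value}
    return None

obs = on_sensor_data("pir_kitchen", True)
print(obs["pos"])  # (4.0, 3.5)
```

Adding a new sensor technology then only requires a new DB entry and, at most, a new normalization branch, with no change to the filter itself.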

Environmental monitoring module (EMM)
This module processed all the data concerning the environmental conditions, for remote room monitoring and the detection of critical situations. The EMM was in charge of triggering events concerning user safety and security, like the detection of intruders, the presence of wet and slippery floors, gas leakages and uncomfortable climatic or living conditions.

Event scheduler module (ESM)

The Google Calendar tools and API [61] were integrated into the ESM to demonstrate the opportunity to include third-party software and services into the system, improving its maintainability. The ESM was designed as a general-purpose event scheduler, able to retrieve appointments and service requests from the calendar and trigger the appropriate commands and service requests to the connected robotic agents. It can be used for medication and care management, the management of daily life activities, and to promote social activities and foster healthy lifestyles. Depending on the users' cognitive abilities, the calendar can be set by the users themselves or by their caregivers.
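The scheduler loop of the ESM can be sketched as a look-ahead over upcoming calendar events. Here a plain list stands in for the Google Calendar API, and the event fields and dispatch function are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Generic sketch of the ESM: retrieve upcoming appointments (a plain list
# standing in for the Google Calendar API) and trigger commands to the
# connected robots. Event fields and dispatch logic are assumptions.
events = [
    {"start": datetime(2016, 5, 10, 9, 0), "summary": "medication reminder",
     "user": "user1"},
    {"start": datetime(2016, 5, 10, 17, 0), "summary": "social activity",
     "user": "user1"},
]

def due_events(now, horizon=timedelta(minutes=5)):
    """Return events starting within the look-ahead window."""
    return [e for e in events if now <= e["start"] <= now + horizon]

def dispatch(event):
    # In the real system this would issue a service request to a robot.
    return f"robot -> {event['user']}: {event['summary']}"

now = datetime(2016, 5, 10, 8, 58)
for e in due_events(now):
    print(dispatch(e))  # prints "robot -> user1: medication reminder"
```

Replacing the list with calls to the calendar service keeps the triggering logic unchanged, which is the maintainability benefit the integration is meant to demonstrate.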

Human robot interaction module (HRIM)
The HRIM was designed as a proof of concept, to address some issues regarding the way a robot navigates to the users to attempt a service or an interaction. A software module was dedicated to the definition of the user approach strategy. This module waits for a human-robot interaction event or an interaction request. If a human-robot interaction occurs during the service, the HRIM retrieves from the cloud DB all the necessary data on the user and the environment to estimate the proper robot proxemics. For each service involving interaction with a human, the robot could be directed to the human at a different speed and positioned at a specific distance and orientation, depending on the user's position and posture, the dimensions of the room and the lighting conditions, as suggested in [40,41].

GUI

The GUI consisted of a Web application for remote home monitoring and the supervision of the users' locations. It was connected directly to the DB on the cloud, which exposed a public static IP. For security, GUI access was restricted to authorized people only. The interface home page provided the mean values of the lighting, humidity, and temperature for each sensorized room. In addition, an alarm web page provided a list of the alarms that occurred, while the localization web page reported the rooms where the users were located.
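The approach-strategy selection described above for the HRIM can be sketched as a lookup from user and environment data to proxemic parameters. All threshold values are assumptions, loosely inspired by the proxemics guidelines the paper cites [40,41], not the authors' actual rules.

```python
# Illustrative sketch of the HRIM approach strategy: given the user's posture
# and the room conditions retrieved from the cloud DB, choose an approach
# distance, speed and orientation. All values are assumptions.
def approach_parameters(posture, room_width_m, lighting_lux):
    distance = 1.2 if posture == "standing" else 1.5   # stop distance (m)
    speed = 0.5 if room_width_m > 3.0 else 0.3         # approach speed (m/s)
    # Approach frontally in good light, from the side in dim rooms.
    orientation = "frontal" if lighting_lux > 100 else "lateral"
    return {"distance_m": distance, "speed_mps": speed,
            "orientation": orientation}

print(approach_parameters("sitting", 4.2, 250))
```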

Experimentation and Methodology
This section presents the preliminary experimentation performed to test the reliability of the DBMS module and assess the performance of the ULM in terms of user localization accuracy. The two cloud modules had different natures, and thus they were tested according to different experimental protocols, with specific metrics selected for each. The experimentation was performed in two remote pilot sites to assess the performance of the DBMS and the ULM agents in a multi-environment context.

Pilot Sites Description
The two experimental sites were a smart home located in Italy (Domocasa Lab, Peccioli, Italy), and an assisted residential condominium in Sweden (Angen site, Orebro, Sweden). In particular, the Angen site was selected to demonstrate the ability to remotely manage residential facilities and provide AAL services by implementing a cloud robotic solution.

DomoCasa Lab (IT)
The DomoCasa Lab is located in Peccioli (Italy) within the Living Lab of Scuola Superiore Sant'Anna. It is a 200 m² furnished apartment that attracts people willing to contribute to experimentation with companion robots. It comprises a living room, a kitchen, a restroom and two bedrooms. Each room was instrumented as in Fig. 2 with at least a temperature, a humidity and a light sensor, while fifteen anchors, six PIRs, and five sensorized carpets and pillows were installed for the user localization.
Angen nursery (SE)

The Angen site is a 5-floor residential facility composed of private flats, common areas and two domotic apartments dedicated to research activities (see Fig. 3). The two apartments, furnished as real homes, were used as a living lab. The localization and sensor network workspace covered an area of approximately 145 m², distributed over the two smart apartments on the first floor and the common area of the laundry on the fifth floor. The ZigBee stack provided the opportunity to tackle the challenge of monitoring such a wide five-floor indoor environment, by leveraging the multi-hop message routing and the mesh networking of the installed localization and sensor networks. The LNet in the Angen site was instrumented with 18 ZAs, distributed over the two apartments on the first floor and the laundry on the fifth floor. The particular configuration of the Angen site required the installation of two ZAs instrumented with an omnidirectional antenna instead of a sectorial one, to bridge messages between the first and the fifth floors. The two devices were used to implement the multi-hop message routing between the first and the fifth floor, and provided the opportunity to continually locate the user over the entire workspace. The SNet comprised eight sensor boards measuring the internal temperature, humidity and light, while a gas sensor was placed in the kitchen to detect gas leakages. A switch, two pressure sensors, and three PIR sensors were placed for presence detection. In particular, a switch was installed at the kitchen door, while a pressure sensor was placed on a chair in the kitchen, and one on the sofa in the living room of the first apartment. Again, in order to ensure the opportunity to continually monitor the laundry, two SN devices were installed in the stairwell to act as a wireless bridge between the condominium floors.

Experimental Settings
A low cost PC with a Wi-Fi module was used in the Domocasa and Angen sites to gather all the sensor outputs and send them to the remote PC that acted as a cloud and implemented the assistive robotic services. In this experimental set-up, the remote PC was located in Peccioli, and had a public IP.
In the system configuration tested during the experimentation, the ULM, DB and DBMS were run on the remote PC. In order to investigate the accuracy of the localization service in these two different environments, two users wore an MN and moved over a pre-planned trajectory both in the Domocasa and in the Angen site (Fig. 4).
In Domocasa, the user moved over a pre-planned trajectory (see Fig. 4) from the living room to the double bedroom and back, within an overall localization workspace of 92 m². The start and end points of the trajectory coincided; the user crossed the kitchen and the bathroom and stood for a minute on each one of the 18 specific positions of interest marked as in Fig. 2. The positions of interest were selected for their significance in the activities of daily life, for example in front of the sink, the bath, the sofa or the bed.
In Angen, the first apartment (marked with a blue line in Fig. 4) was selected as the user's apartment to test the localization system, while the second apartment and the laundry were sensorized to simulate daily life activities: (1) a visit to a neighbor, and (2) the use of the washing machines in the laundry. During the experiment, the user walked according to the pre-planned trajectory in Fig. 4 and moved within the two sensorized apartments and the laundry on the fifth floor. The user stood for 1 min in 12 specific positions selected in the 145 m² sensorized area (Fig. 4 shows the pre-planned trajectories for the evaluation of the user localization accuracy in the two pilot sites). The trajectory was intended to simulate an ordinary day where the user went to visit the neighbor and then went to the laundry to wash clothes. For each site, seven experimental trials were performed to provide a consistent data set for the evaluation of the performance of the user localization service.
During the experimentation, a PC located at the experimental site of Peccioli provided all the developed services and simulated the cloud, as in Fig. 5.

Metrics for Assessing the Responsiveness and Reliability of the DBMS
The performance of the DBMS was assessed through two parameters: the round trip time (RTT) and the data loss percentage (DL).

1. Round trip time The RTT is the time required for a signal pulse or packet to travel from a specific source to a specific destination and back again [62]. It differs from the "ping time" since it also takes into account the time needed to get the message up to the application layer.
2. Data loss The DL value is given in percent and calculated as the ratio between the failed requests and the total requests.
These parameters were computed over 24 h both in Italy and Sweden to assess the quality of service over the entire work-day and the night.
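The two metrics above can be computed from a simple request log. The log entries below are illustrative values, not the experimental data.

```python
# Sketch of the DBMS metrics: mean application-level RTT over the succeeded
# requests, and data loss as the fraction of failed requests.
# The log values below are illustrative, not the paper's measurements.
requests = [
    {"ok": True, "rtt_ms": 38.0}, {"ok": True, "rtt_ms": 42.5},
    {"ok": False, "rtt_ms": None}, {"ok": True, "rtt_ms": 39.5},
]

succeeded = [r for r in requests if r["ok"]]
mean_rtt = sum(r["rtt_ms"] for r in succeeded) / len(succeeded)
data_loss_pct = 100.0 * (len(requests) - len(succeeded)) / len(requests)

print(f"mean RTT: {mean_rtt:.1f} ms, DL: {data_loss_pct:.1f} %")
```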

Metrics for Assessing the Accuracy of the ULM
The localization accuracy was evaluated to assess the ability of the system to provide location-based services for AAL applications. The accuracy of the localization module was assessed through the following measurements:

1. Mean localization error This error was computed as the difference between the actual user position and the position estimated by the localization service on the cloud. The ground truth measures were obtained using a measuring tape to get the actual position of the user in terms of the reference system of the experimental site (Domocasa or Angen).
2. Root mean square error (RMSE) This parameter is used to assess the goodness of the localization error.
The parameters were computed for each point of interest in the trajectories performed by the user in Angen and Domocasa. These values were averaged over the seven experimental trials, and eventually the mean localization errors over the entire trajectories were estimated to assess the localization accuracy of the system.
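The two accuracy measures can be computed per point of interest as below. The coordinates are illustrative values, not the experimental data.

```python
import math

# Sketch of the accuracy metrics: Euclidean errors between the ground-truth
# positions (measuring tape) and the cloud estimates, then their mean and RMSE.
# The coordinate values below are illustrative, not experimental data.
ground_truth = [(1.0, 2.0), (3.5, 1.0), (4.0, 4.0)]
estimated    = [(1.5, 2.3), (3.0, 1.4), (4.8, 3.6)]

errors = [math.hypot(gx - ex, gy - ey)
          for (gx, gy), (ex, ey) in zip(ground_truth, estimated)]

mean_error = sum(errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"mean error: {mean_error:.2f} m, RMSE: {rmse:.2f} m")
```

Note that the RMSE is always at least as large as the mean error, since it weights large deviations more heavily; comparing the two gives a quick check on the spread of the errors.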

Results
The quality of the assistive services provided by the proposed solution was assessed for each experimental site, computing:

- The RTT, as the time a robot waits for the user position after a request to the server. The RTT differs from the classical ping measure, since it includes the processing time in the application layer.
- The DL, as the number of services undelivered due to information loss, divided by the total number of service requests (Table 4).
In particular, the RTT was computed as the mean time over 24 h; in order to take into account the varying use of bandwidth over the day, the RTT was also computed at night (8 h) and during the work-day (10 h). As shown in Table 5, the mean RTT in Domocasa was 40 ms, while for the Swedish site the RTT was 134.57 ms. The localhost RTT acquired during the experimentation was 7.46 ms, and was used as a benchmark. The RTT night data was computed from midnight to 8 a.m., while the RTT day data was computed from 8 a.m. to 6 p.m. The Angen site exhibited a lower service responsiveness (higher service RTT), since the remote server for user localization was placed in Italy, at a distance of about 2000 km. The DL value was computed as the ratio between the number of unsuccessfully addressed requests for user position and the total number of requests. In particular, a request for the user position was sent at a rate of 1 Hz to the DBMS to simulate the calls of a number of robotic services from several users. The number of service fails was less than 0.5 % in Italy, and 0.002 % for the Angen site. This result demonstrated that a high service reliability could be achieved even when monitoring very distant environments.
For each point of interest in the user trajectory in the Domocasa and Angen sites, the mean localization error and the error variance were computed, as shown in Fig. 6. In Domocasa and Angen, the mean absolute localization errors were respectively 0.98 and 0.79 m, while the root mean square errors were respectively 1.22 and 0.89 m. The standard deviation of the absolute errors was 0.57 m in Domocasa and 0.47 m in Angen. On average, the absolute localization error considering the two setups was 0.89 m, and the RMSE was 1.1 m. The localization error and its standard deviation depend on the environment and the number of installed sensors. Indeed, the localization accuracy was different for each monitored room. In particular, the presence of electromagnetically reflective surfaces in each room creates stationary waves that affect the accuracy of RSS-based localization systems. Furthermore, the density (devices/m²) of installed anchors or presence sensors affects the overall accuracy of the system. The more localization sensors are installed in a room, the finer the accuracy of the localization service can be.
The results demonstrated that the proposed localization system was able to locate several users in remote environments, with an appropriate in-room resolution. Indeed, for AAL applications, a meter-level localization accuracy has been considered sufficient to deliver assistive services to users [35].
The opportunity to get data from different kinds of sensors, like anchors for the observation of the RSS from mobile nodes and traditional presence sensors, positively affected the performance of the localization system, improving the accuracy in specific areas of interest. The benefit of the presence sensors was measured as the reduction of the localization errors obtained using the switch at the kitchen door and the pressure sensors on the chair in the kitchen and on the sofa in the Angen site, as shown in Table 4. The user position was estimated in two different experimental trials, with and without the presence sensors connected to the SNet. The use of the presence sensors increased the localization accuracy in the selected positions by an average of 35 %.

Discussions and Conclusions
The proposed work demonstrated the feasibility of the proposed cloud robotic solution for the provision of location-based and personalized assistive services to seniors in relevant environments, including a home and a care facility. The tests on the reliability and responsiveness of the system demonstrated its ability to provide location-based services to remote sites (2000 km away) with a mean delay time of less than 134.57 ms and a data loss of less than 0.5 %, which can be considered sufficient for AAL applications. The reduced amount of time spent processing and providing information to the connected robotic agents makes it possible to imagine the use of a single cloud infrastructure for the management of a number of connected agents and the provisioning of assistive services to a wide set of users. This impacts positively on the social sustainability of cloud robotics for the provisioning of services to the ageing population. A series of novelties were also introduced to take a step forward in the state of the art of indoor user localization, tackling the challenge of providing location-based services in scenarios characterized by the possible presence of a plurality of users sharing the same environment. In particular, the localization system made use of sectorial antennas to spot specific areas of interest for the identification and localization of the mobile radios worn by the users. Furthermore, the use of ZigBee radios allowed the development of self-healing mesh networks and the performing of multi-hop message routing. This kind of networking solution enabled the connected sensors to exchange data for user localization and context monitoring both in homes and in multi-floor buildings, improving the modularity of the solution and the dependability of the wireless radio links. The proposed sensor fusion algorithm improved the localization performance by using data from different typologies of sensors and localization techniques.
The sensor fusion was intended to improve the system's tolerance to hardware faults and minimize the impact of installation mistakes and calibration issues on the system's accuracy. The proposed localization service achieved a meter-level accuracy in locating people, which was sufficient to direct robots to the users and provide assistive services [35], like a drug or medication reminder. The ability to distinguish between users sharing the same environment enabled the system to deliver personalized services in homes with more than one inhabitant or in larger relevant environments like care facilities. Thanks to this technology, the robot was also able to recognize and roughly locate users without using cameras, complying with privacy requirements and enhancing the system's acceptability. The cloud database provided a shared base of knowledge about the users' positions, the environment and the robot status, and acted as a blackboard where the connected smart agents can set or get the data needed to continually deliver assistive services. New robotic agents, smart environments or single sensors could be easily integrated into the system by means of an internet connection and the use of a compatible communication protocol, improving the scalability and the maintainability of the proposed system. Two possible examples of application scenarios were discussed in Sect. 3; more complex services could be designed based on the proposed RaaS architecture. The cloud features, including the scalable computing and storage resources, are opening new opportunities for service robotics. The outsourcing of the computational resources allows the design of cheaper and lighter robots, more suitable for the market. Furthermore, the ability to share a relevant amount of information between the robots and the connected smart agents would improve the robots' context awareness and their ability to provide advanced services to a plurality of users.
The opportunity to drive a robot to a specific user among others and provide dedicated assistance (medication, health status assessment, walk support) based on the user's preferences or needs would also improve the utility and effectiveness of the system. A cloud-connected robot would also be able to show new behaviours or implement new functionalities by simply receiving instructions from the cloud, without the need to reprogram the platform, thereby improving the robot's adaptability and the service customization. More complex assistive services could be imagined based on the proposed system, where for example the user asks for walk or stand support, or for the transportation of objects. To do this, a more complex human-robot integration would be needed, and the localization system would have to achieve a higher resolution. The localization accuracy could be improved by installing more anchors in the environments, or by introducing new types of sensors into the sensor fusion algorithm of the ULM. For example, the use of Ultra Wide Band (UWB) devices for locating mobile devices by measuring the time of arrival of electromagnetic waves seems promising. Nevertheless, the localization accuracy is not the only parameter to take into account when designing an indoor localization system for AAL. The low form factor of the wearable devices, the ability to work for an entire day without the need for battery replacement, and the ability to monitor wide environments using a single sensor network, to avoid the disconnection of the devices while moving, are important features that affect the usability of the localization technologies. Further research will concern the improvement of the localization system by introducing an Inertial Measurement Unit (IMU) into the wearable device. The IMU would be useful to estimate the orientation of the user in the environment, to improve the quality of the human-robot interaction.
Indeed, thanks to this information the robot would improve its proxemics computing the proper trajectory, the orientation and the distance to interact with the user and provide the service in a socially believable manner.
When it comes to AAL, the dependability of the technologies is crucial, and the use of wireless broadband could negatively impact the reliability, availability and responsiveness of cloud-based services by introducing delays or data loss. Further investigations will be focused on the selection of the proper communication technologies (e.g. LTE, 3G…) and on strategies to automatically assess and restore the dependability of the data exchange. For example, a possible recovery action in case of performance degradation would be switching to different communication technologies, or reducing the data flow by performing some pre-processing on the robotic agents. Specific investigations will be focused on the improvement of the human-robot interaction and the definition of advanced human-robot interaction models, based on an extended sensor fusion approach and machine learning algorithms in the cloud, to improve the acceptability of the personalized robotic services.
Manuele Bonaccorsi received the Master Degree (cum laude) in Bio-Medical Engineering at the University of Pisa in 2010 and the Ph.D. in Biorobotics (cum laude) from the Scuola Superiore Sant'Anna in 2014, where he is currently working as a researcher. His main research activities concern ambient assisted living and the localization of people in indoor rooms using Wireless Sensor Networks. In the period 2010-2015 he was involved in the ASTROMOBILE (Echord experiment-G.A. 231143) and in the Robot-Era (FP7/2007-2013-G.A. 288899) European Projects to improve the usability and the situation-awareness of companion robots. He also participated in public and private industrial research projects on product innovation to design ICT systems for freezers and blood/plasma banks for medical applications.
Laura Fiorini received the Master Degree (with honours) in BioMedical Engineering at the University of Pisa in April 2012 and received the Ph.D. in BioRobotics (cum laude) from the Scuola Superiore Sant'Anna in February 2016. Currently she is a post-doc at the BioRobotics Institute of the Scuola Superiore Sant'Anna. She was a visiting researcher at the Bristol Robotics Laboratory, UK. Her research fields are Cloud Service Robotics and Activity Recognition systems to prevent, support and enhance the quality of life of senior citizens. Filippo Cavallo MScEE, Ph.D. in Bioengineering, is Assistant Professor at the BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy, focusing on cloud and social robotics, ambient assisted living, wireless and wearable sensor systems, biomedical processing, acceptability and AAL roadmapping. He participated in various National and European projects, being project manager of Robot-Era, AALIANCE2 and the Parkinson Project, to name but a few. He was a visiting researcher at the EndoCAS Center of Excellence, Pisa; at the Takanishi Lab, Waseda University, Tokyo; and at the Tecnalia Research Center, Spain. He was granted from the International Symposium of Robotics Research Committee as Fellowship Winner for best Ph.D. thesis in Robotics; from the Regional POR FSE 2007-2013 for a 3-year Research position at The BioRobotics Institute; from the ACCESS-IT 2009 for the Good Practice Label in the Alzheimer Project; and from the Well-Tech Award for Quality of Life with the Robot-Era Project. He is the author of various papers in conferences and ICI journals. Alessandro Saffiotti is full professor of Computer Science at the University of Orebro, Sweden, where he heads the AASS Cognitive Robotic Systems laboratory. He holds a M.Sc. in Computer Science from the University of Pisa, Italy, and a Ph.D. in Applied Science from the Universite Libre de Bruxelles, Belgium.
His research interests encompass artificial intelligence, autonomous robotics, and technology for elderly people. His main focus is the integration of Artificial Intelligence and Robotics, and he is leading several international initiatives on this topic. In particular, he is the Coordinator of the euRobotics topic group on AI and Cognition in Robotics. In 2005 he introduced the notion of Ecology of physically embedded intelligent systems as a new approach to include robotic technologies in everyday life, which has been used within several EU projects. He has published more than 140 papers in international journals and conferences, his h-index is 35 in Google Scholar and 12 in WoS. Saffiotti has organized many international events, and in 2005 he was a program chair of IJCAI, the premier conference on Artificial Intelligence. He is PI for four EU FP7 projects, and he is involved in several EU networks and in many national projects. He is a member of AAAI, a senior member of IEEE, and an ECAI fellow.
Paolo Dario received his Dr. Eng. Degree in Mechanical Engineering from the University of Pisa, Italy, in 1977. He is currently a Professor of Biomedical Robotics at Scuola Superiore Sant'Anna (SSSA) in Pisa. He is and has been Visiting Professor at prestigious universities in Italy and abroad, like Brown University, Ecole Polytechnique Federale de Lausanne (EPFL), Waseda University, University of Tokyo, College de France, Zhejiang University. He was the founder and is currently the Coordinator of the BioRobotics Institute of Scuola Superiore Sant'Anna, where he supervises a team of about 120 researchers and Ph.D. students. He is the Director of Polo Sant'Anna Valdera, the research park of SSSA. His main research interests are in the fields of BioRobotics, medical robotics, micro/nanoengineering. He is the Coordinator of many national and European projects, the editor of special issues and books on the subject of BioRobotics, and the author of more than 200 scientific papers.