1 Introduction

Robot companions are becoming more common and familiar in our lives (Dario 2011). According to the International Federation of Robotics' statistics, about 4 million service robots for personal and domestic use were sold in 2013, 28 % more than in 2012, increasing the value of sales to US$ 1.7 billion (IFR 2015). Service robotics, in particular, has received considerable attention from industry and academia as a means of facing societal and demographic challenges (Zaidi et al. 2006; Stula 2012).

According to ABI Research (Solis and Carlaw 2013), service robots represent the second largest potential market opportunity.

In this context, Ambient Assisted Living (AAL) solutions aim to meet users' and stakeholders' needs, providing ICT and robotic services able to assist the user during daily activities (van den Broek et al. 2010; Moschetti et al. 2014). The main benefits that can be achieved by service robots are:

  • support senior citizens during daily activities (e.g. participation in social events, reminders, surveillance), thereby enhancing their independent living;

  • enhance the quality of life, compensating for motor and cognitive deficits;

  • improve the quality of health services, reducing the cost for society and the public health system.

In general, domestic robots need to be friendly, interact with users, and move autonomously inside the house without revolutionizing the familiar environment. At the same time, robotic services should guarantee continuity throughout the day. In recent years, two main robotics paradigms have been adopted. Stand-alone robots are designed according to a robot-centred approach, in which the robot alone carries out all sensing, planning and acting (Cesta et al. 2010; RP-Vita; Aldebaran). This approach is limited by the robot's sensing range, payload, battery life and computational capabilities. Even as individual abilities improve, stand-alone robots remain insufficient for adequately supporting daily activities on a continuous basis. To appreciate this limitation, consider everyday life: people perform several activities in different contexts and rarely in one place. The question, then, is whether stand-alone robots can continuously support a person during the day across such a wide range of activities and environments, given that user requests vary greatly and depend on the specific needs of the moment.

The networked robot paradigm (Sanfeliu et al. 2008) distributes sensors and computational capabilities over a smart environment and intelligent agents, such as wearable and personal devices, extending the effective sensing range of stand-alone robots and improving their ability to plan and cooperate (Coradeschi et al. 2014; Volkhardt et al. 2011; Simonov et al. 2012). Nevertheless, the problems of service continuity and limited computation still remain (Kamei and Nishio 2012).

In this context, the cloud robotics paradigm aims to overcome the limitations of the stand-alone and networked robotics paradigms by integrating robots with cloud computing resources (Kuffner 2010). Recently, cloud robotics has been defined as "any robot or automation system that relies on either data or code from a network to support its operation, where not all sensing, computation and memory is integrated into a single standalone system" (Kehoe et al. 2015). This new generation of low-cost robots (Kuffner 2010) can use wireless networking, big data, machine learning techniques, and the Internet of Things to improve the quality of their assistance services (Lorencik and Sincak 2013). Robots with different capabilities can share data, knowledge, and skills, exchanging information with other agents connected to the network and thus reducing the overall costs (Tenorth et al. 2011).

Cloud robotics is not a completely new idea. During the 1990s, Prof. Inaba (Inaba 1997) conceptualized the remote-brain paradigm, in which hardware agents access a remote "intelligence" with high computational abilities. Only in recent years, however, have the rise of mobile technologies, the growth of internet resources and the global penetration of smartphones [Mobile Planet] made the idea of cloud robotics concrete.

2 Related research

Over the last few years, several research groups have focused their efforts on the challenges of cloud robotics, and some recent examples of its application in service robotics can be found in the literature. First of all, some additional concepts related to the cloud robotics paradigm need to be cited. Kamei et al. (Kamei and Nishio 2012) expanded the concept of networked robotics, proposing a new research field called Cloud Networked Robotics. They describe "The Life Support Robot Technology", a Japanese project started in 2009 and focused on the development of six robotic services with high safety, reliability, and adaptability. Furthermore, Chen et al. (2010) introduced the concept of Robot as a Service (RaaS), which reinforces the idea of a robot that uses services from remote resources: "this all-in-one design gives the robot unit much more power and capacity, so that it can qualify as a fully self-contained cloud unit in the cloud computing environment." Bonaccorsi et al. (2015) expanded cloud robotics by introducing the concept of Cloud Service Robotics, defined as "The integration of different agents that allow an efficient, effective and robust cooperation between robots, smart environments and citizens."

As stated by Kehoe et al. (2015), the cloud robotics field can be divided according to its benefits. Some research has focused on the use of large datasets, including video, images and vast sensor networks, which are difficult to manage with on-board capacity. In particular, the ODUfinder software [Odufinder] performs object recognition by exploiting external databases containing over 3500 pictures, whereas in Kehoe et al. (2013) a robot uses the Google object recognition engine. RoboEarth was the first EU project on the cloud robotics paradigm [RoboEarth]: it allows robots to store and share information, manipulation strategies and object recognition models, and to use the cloud to offload computation and collaborate on a common task. Other publications report the use of external cloud computing resources to speed up computationally intensive tasks such as SLAM algorithms (Benavidez et al. 2015; Riazuelo et al. 2014), object recognition (Oliveira and Isler 2013) and video and image analysis (Nister and Stewenius 2006). In particular, Quintas et al. (2011) proposed a context-aware cloud robotics approach for an automated system composed of mobile robots and a smart home; in order to enhance the scalability of the system, this approach relied on cloud computing services. Additional research has focused on knowledge sharing and on the use of crowd-sourcing as a resource for the robot.

Table 1 KuBo robotic services designed with the user centred approach

Among commercial solutions, Gostai [Gostai] has developed a cloud robotics infrastructure called GostaiNET. The robot's intelligence is no longer embedded in the robot but executed in the cloud, allowing the remote execution of tasks such as voice recognition, face detection, and speech processing on any compatible robot. Engineers at Romotive [Romotive] have developed a companion robot that learns while you play; thanks to the cloud, anyone can control Romo from anywhere in the world.

In this context, the aim of the present paper is to design and develop an innovative cloud-based robotic system with a user-centred approach and to evaluate its technical feasibility for supporting senior citizens in daily activities at home. The system, called KuBo, is based on a mobile robot, shaped as a piece of domestic furniture, with low on-board abilities, which relies on cloud resources to extend its capabilities for interacting with humans and sensing the environment. A meticulous methodology was followed to, firstly, define the technical specifications needed to design and develop KuBo according to the end-users' requirements and, secondly, to assess the technical reliability and safety of the robot in performing navigation and speech tasks in a real environment. In particular, a specific domestic use case was designed to test the system in an apartment inhabited by an elderly couple. The goal of the use case was to demonstrate the technical feasibility of the system when used by its intended users. The system was therefore left in the apartment for 5 consecutive days and the elderly residents, after appropriate training, were free to ask KuBo to provide the conceived services.

The rest of this paper is structured as follows. Section 3 details the methodology used in this research. Section 4 describes the proposed system. Section 5 summarizes the results and Sect. 6 discusses them. Section 7 concludes the paper.

3 Methodology

This section focuses on the methodology used to build and test the KuBo robotic system to support senior citizens. The methodology is based on four phases. Phase I was dedicated to the definition of the services, starting from an analysis of the needs of the elderly. Phase II comprised the development and integration of the KuBo system. In Phase III, the experimental protocol was defined. Lastly, during Phase IV, the system was tested in a real environment with real users for five days.

3.1 Phase I: KuBo service definition

The services of the system were studied and designed by applying the User Centred Design approach (Heerink et al. 2009), in order to identify a concept that meets usability and acceptability criteria. A focus group, involving 19 elderly volunteers aged from 64 to 85 years (μ = 73.05, σ = 6.55), was organized to define the capabilities of KuBo according to the end-users' requirements, such as their needs and lifestyles. The focus group produced a set of services grouped into three main areas: using the robot to get help with some activities, receiving information in appropriate situations, and remote user assistance (see Table 1). The first group includes the (I) Carrying Object service, which allows the user to call KuBo to get an object stored on its shelf, such as a tablet, book or prescription glasses, and the (II) Internet Access service, used to access web resources, such as weather forecasts, by means of a speech interface. This service is also used autonomously by the robot in order to modulate its interaction with the user: for instance, if the user has to go outside for an appointment and the weather forecast states that it is going to rain, KuBo suggests taking an umbrella. The second group contains the (III) Reminder Service, which reminds the user of commitments and appointments, and the (IV) Monitoring Service, which alerts the user when dangerous situations are recognized by the smart environment. With the (V) Telepresence service, a caregiver can use KuBo for remote assistance. Another outcome of the focus group was that the participants prefer to control the robot through a combination of a GUI (tablet) and vocal commands.

3.2 Phase II: system architecture design and implementation

The hardware and software architecture of the entire KuBo system was developed to meet the users' needs identified in Phase I; it is described in detail in Sect. 4. This phase led to the development of a small-sized robot connected with cloud resources and endowed with environmental sensing abilities.

3.3 Phase III: definition of the experimental protocol

The KuBo system experimentation was conducted as a case study in which the trials took place in a real private house for 5 days. The case study was designed to allow elderly people to interact with the KuBo robot in a sequence of simple tasks based on the services defined in Phase I. During this testing period, quantitative data were collected to evaluate the performance of two KuBo system modules: navigation tasks and speech capabilities. The navigation module was chosen in order to understand whether on-board functionality was a successful strategy, while the speech capabilities were selected as an example of a cloud service.

3.3.1 Technical performance evaluation tool

The evaluation metrics for assessing the technical performance are related to KuBo's navigation tasks and speech recognition abilities. The navigation of the robot has three possible states: RUNNING, SUCCEEDED and FAILED. A navigation task begins when the robot enters the RUNNING state (i.e. it receives a goal) and ends when it passes to SUCCEEDED or FAILED. The speech functions were evaluated considering each single sentence and the correctness of its interpretation. The following parameters were employed to globally evaluate the system (a computation sketch is given after the list):

  • Success rate This is the percentage of the total tasks that succeeded. It gives information about the reliability of the system.

    $$Success\;rate\,(\%) = \frac{Succeeded\;tasks}{Total\;tasks}\cdot 100 \qquad (1)$$
  • Failure rate It is computed as

    $$Failure\;rate\,(\%) = 100 - Success\;rate\,(\%) \qquad (2)$$
  • Effective robot velocity This is applied only to the navigation tasks: it represents the average velocity of KuBo. It is an important parameter that influences the robot's acceptability (Salvini et al. 2010) and is also related to the safety of the system. For the experiments, the velocity of KuBo was limited to 0.2 m/s.

  • Confidence This is applied only to the speech recognition functions: it represents the probability of the correctness of the recognition.
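As an illustration of how the navigation metrics above can be derived, the following sketch assumes a simple log of (timestamp, state, position) entries; the actual log format used in the experiments is not specified in the paper.

```python
# Minimal sketch of metric computation from an assumed navigation log format.
import math

def navigation_metrics(log):
    """log: chronologically ordered list of (t_seconds, state, (x, y)) tuples."""
    succeeded = failed = 0
    distance = running_time = 0.0
    prev = None
    for t, state, pos in log:
        if prev is not None and prev[1] == 'RUNNING':
            # accumulate time and path length while a task is running
            running_time += t - prev[0]
            distance += math.hypot(pos[0] - prev[2][0], pos[1] - prev[2][1])
            # count terminal transitions out of RUNNING
            if state == 'SUCCEEDED':
                succeeded += 1
            elif state == 'FAILED':
                failed += 1
        prev = (t, state, pos)
    total = succeeded + failed
    success_rate = 100.0 * succeeded / total if total else 0.0
    return {
        'success_rate_pct': success_rate,               # Eq. (1)
        'failure_rate_pct': 100.0 - success_rate,       # Eq. (2)
        'effective_velocity_mps': distance / running_time if running_time else 0.0,
    }
```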

In addition, at the end of the test period, a researcher asked the elderly volunteers to use the KuBo services again while applying the "think aloud" (TAL) method (Lewis 1982). With the TAL method, the user is encouraged to report aloud any action or thought while carrying out the tasks. All users' verbalizations were transcribed and then analysed. A general inductive approach for the analysis of qualitative evaluation data (Thomas 2006) was applied, consisting of three phases: using raw data to create categories, establishing the pertinence of the categories to the research objective, and developing a theory. In this study, a model could not be developed and usability and acceptability could not be investigated, because the sample is too small to be statistically significant; the analysis instead gives holistic information about the robustness of the system from the users' point of view.

3.4 Phase IV: case study setup

The experiment was conducted as a case study in which an elderly Italian couple tested the system in their home for 5 days. In this case study, the smart environment described in Sect. 4.2 was a simplified version comprising gas sensors, used as a proof of concept. During the 5 days, all the services were tested, including a simulated gas leak. The developers provided assistance in the house during the first two days of experiments; for the remaining days, the system was accessed remotely and the users were left free to use it. At the end of the experiment, when the elderly users were more confident with the KuBo system, they used the services again while applying the TAL method and a researcher transcribed their opinions.

4 System architecture

The architecture of the system is based on three components: the KuBo robot, the Smart Environment, and the Cloud Software as a Service (SaaS) (see Fig. 1). The high-level software layer of the robot relies on cloud resources to endow KuBo with additional functions in a modular way: it extends its sensing capabilities by using the smart environment, accesses web resources, and exploits powerful voice and speech recognition services. The Smart Environment is composed of several devices providing the user's position within the house as well as sensors to monitor temperature, human presence and water/gas leaks. For example, when a sensor triggers a gas leak event, the robot retrieves the user's position, moves towards him/her and warns the person.

Fig. 1

System architecture. The robot is connected with the cloud through five modules, which allow it to extend its capabilities

4.1 KuBo robot

In this section, the design process and the software architecture of the prototype are detailed. The robot satisfies several requirements of a domestic robot.

4.1.1 Design

User acceptability is crucial for a companion robot; therefore, two key points were primarily considered during the design of the prototype: reduced dimensions, to move easily in a domestic environment, and a modern design style, to improve the appearance and favour the friendliness of the platform. KuBo is based on the youBot [Kuka], a small-sized holonomic mobile base commercialized by KUKA, and is equipped with a laser scanner for navigation, a depth camera, speakers, a microphone, and a tablet for human–robot interaction.

In order to favour the acceptance of the robot (Salvini et al. 2010), the original platform was modified with a design inspired by a typical "coffee table", a common piece of furniture in homes. Figure 2 shows the design process of the robot. The overall height was extended by about 30 cm and a new cover, made of black opal methacrylate, was mounted on an internal frame. Adhesive tape was used to personalize the prototype. The final dimensions of the prototype are 40 × 40 × 60 cm.

Fig. 2

Design process of KuBo. The overall height was extended by about 30 cm and a new cover, made of black opal methacrylate, was mounted on an internal frame. The last picture shows the final prototype

4.1.2 Software

KuBo is conceived as a platform with low computational capabilities that exploits cloud resources to carry out its tasks. Figure 1 depicts the architecture of the system and the software layers of the KuBo robot. The Service Manager executes the tasks, making proper use of the robot functions. All the software modules are implemented in ROS (Quigley 2009), and the only on-board ability is autonomous indoor navigation, which relies on the ROS navigation stack and uses the Dynamic Window Approach (Fox et al. 1997) for local planning and Adaptive Monte Carlo Localization (Thrun et al. 2005) for indoor localization. Navigation uses a 2-D static map of the environment and the laser scanner for obstacle avoidance and self-localization.
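Although the paper does not report implementation details beyond the use of the ROS navigation stack, a navigation task of the kind evaluated in Sect. 3.3.1 would typically be dispatched through the standard move_base action interface; the frame, waypoint and timeout values below are illustrative assumptions about KuBo's configuration.

```python
#!/usr/bin/env python
# Sketch: sending a navigation goal via the standard ROS move_base action
# interface. Frame names and coordinates are assumptions, not KuBo's actual setup.
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def navigate_to(x, y, timeout=120.0):
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'       # goal expressed in the static 2-D map
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0      # identity orientation

    client.send_goal(goal)                         # the task enters the RUNNING state
    client.wait_for_result(rospy.Duration(timeout))

    # SUCCEEDED counts as a successful task; any other terminal status as FAILED
    return client.get_state() == GoalStatus.SUCCEEDED

if __name__ == '__main__':
    rospy.init_node('kubo_navigation_client')
    ok = navigate_to(2.0, 1.5)
    rospy.loginfo('Navigation %s', 'SUCCEEDED' if ok else 'FAILED')
```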

The Service Manager is able to use all the modules in order to accomplish a particular task. For example, when the Reminder Module triggers an event (e.g. a doctor appointment), the Manager retrieves the user's position through the Smart Environment Module, moves KuBo to him/her, notifies the user of the event using text-to-speech (TTS), and waits for user confirmation (Speech Recognition). If the user has to go outside, as for an appointment in this case, the Service Manager retrieves the weather forecast by means of the Internet Module and uses the TTS to communicate the downloaded information. All these modules, with the exception of the Navigation Stack, rely on cloud resources.
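A possible orchestration of this reminder flow is sketched below; the client objects and method names are purely illustrative and do not correspond to the actual KuBo code.

```python
# Hypothetical sketch of the reminder flow described above. The client objects
# (env, nav, tts, asr, internet) and their methods are illustrative names only.
def handle_reminder(event, env, nav, tts, asr, internet):
    room = env.get_user_position()              # Smart Environment Module (cloud DB)
    nav.go_to(room)                             # on-board ROS navigation stack
    tts.say(u"Reminder: %s" % event.summary)    # Text-to-speech Module (Acapela VaaS)
    if asr.wait_for_confirmation():             # Speech Recognition Module (Google API)
        if event.requires_going_out:            # e.g. a doctor appointment
            forecast = internet.get_weather()   # Internet Access Module
            if forecast.rain_expected:
                tts.say(u"It is going to rain, remember to take an umbrella.")
```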

4.1.3 Robot cloud modules

The system is implemented with five cloud modules:

  • Smart environment module This connects KuBo with the DataBase Management Software (DBMS), allowing information to be retrieved from the database after an authentication phase. It runs two TCP clients: the first requests the user's position whenever the robot needs to reach him/her, while the second makes polling requests (1 Hz) to identify any changes in the database concerning environmental alarms. All communications follow a string protocol based on JSON encoding.

  • Reminder module This links the Google calendar service with KuBo. It is based on the Google Calendar API v3 [Google Calendar] with JSON data objects. Using this API, it is possible to search and retrieve calendar events, as well as create, edit, and delete them. The user can set appointments through a web browser (or the mobile App synchronized with the calendar) and this module activates the reminder service at the proper time.

  • Text-to-speech module This connects the robot with the Acapela Voice as a Service API [Acapela] using HTTP connections. To reduce the response time, the module stores already-converted sentences locally. It sends a text string and plays the audio file received from the service.

  • Internet access module This is used to retrieve generic information from web sites. In this implementation, a weather forecast service was developed as an example. The robot retrieves an HTML file from a specific web site using the HTTP protocol; the file is then parsed to find the proper information to communicate.

  • Speech recognition module This connects the robot with the Google Speech Recognition API through HTTP connections. The module also implements a dictionary of keywords to interpret user requests, such as moving to a particular room, requesting the weather forecast, or asking for the current time (a keyword-matching sketch follows this list).
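A minimal sketch of how such a keyword dictionary might map recognized utterances to robot activities is given below; the Italian keywords and command names are assumptions based on the commands reported in Sect. 5, not the vocabulary actually deployed on KuBo.

```python
# Illustrative keyword dictionary mapping recognized utterances to activities;
# keywords and command names are assumptions, not KuBo's real vocabulary.
KEYWORDS = {
    u'vai':        'MOVE_TO_ROOM',      # "go" + room name
    u'muoviti':    'MOVE_TO_ROOM',      # "move" + room name
    u'grazie':     'CONFIRM_REMINDER',  # "thank you"
    u'che ora':    'TELL_TIME',
    u'che giorno': 'TELL_DAY',
    u'meteo':      'WEATHER_FORECAST',
    u'ciao':       'GREETING',
}

def parse_transcript(transcript):
    """Return (command, argument) for the first matching keyword, or None."""
    text = transcript.lower()
    for keyword, command in KEYWORDS.items():
        if keyword in text:
            # whatever follows the keyword (e.g. a room name) is kept as argument
            argument = text.split(keyword, 1)[1].strip()
            return command, argument
    return None
```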

4.2 Smart environment

The Smart Environment is composed of two ZigBee-based wireless sensor networks (WSNs), one for user localization and the other for environmental monitoring. The user localization network is designed to locate multiple users at the same time, using received signal strength (RSS) (Esposito et al. 2015; Cavallo 2014). The WSN for environmental monitoring is composed of several sensors able to monitor temperature, human presence and water/gas leaks, and to control the lights. These sensors are distributed in the house in order to obtain real-time measurements of the environment's conditions. The information is processed and stored in the Smart Environment DataBase (see Sect. 4.3). This system manages several alarm procedures, such as a door opening during the night, a water or gas leak, and doors/windows left open when the user is outside. The performance and accuracy of this kind of system were presented in Bonaccorsi et al. (2015).
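The alarm path from the WSN to the robot can be pictured with the following sketch of the 1 Hz polling client mentioned in Sect. 4.1.3; the JSON message fields and the endpoint are assumptions rather than the actual DBMS protocol.

```python
# Illustrative 1 Hz polling client for environmental alarms (cf. the Smart
# environment module). Fields and endpoint are assumptions, not the real protocol.
import json
import socket
import time

def poll_alarms(host, port, handle_alarm, period_s=1.0):
    while True:
        with socket.create_connection((host, port), timeout=2.0) as sock:
            sock.sendall(json.dumps({'request': 'alarms'}).encode('utf-8') + b'\n')
            reply = json.loads(sock.makefile().readline())
        for alarm in reply.get('alarms', []):   # e.g. {'type': 'gas_leak', 'room': 'kitchen'}
            handle_alarm(alarm)                 # e.g. move the robot to the user and warn
        time.sleep(period_s)
```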

As already explained in Sect. 3.4, during the 5-day experiment a simplified version of the WSNs was used, in order not to be too intrusive in the users' daily life and compromise the overall experiment. A gas sensor was used as a proof of concept.

4.3 Cloud SaaS

The software modules described in Sect. 4.1.3 are connected to specific Cloud SaaS, which comprise four components:

  • Smart environment database This stores the data collected from the WSN, while the DBMS administers entries and queries, avoiding a direct connection between the hardware agents (WSN and robot) and personal data. It is implemented as a relational database, based on MySQL, with several tables: one for each sensor type containing its outputs, one listing the installed sensors (typology and unique identification number), one collecting environmental alarms, and another recording the user's estimated position. The outputs from the physical agents and the estimated user position are sent to the DBMS and recorded in the DataBase.

  • Acapela VaaS This takes as input the text string to translate, a language, and a voice type, and produces an MP3 audio file. The service supports several languages and several voices (male, female). The robot uses this service only for unknown sentences, to reduce the response time during the interaction phase (a caching sketch is given after this list).

  • Google services In this research, the developed system uses two Google Services. The first is the calendar API, while the second is the speech recognition API.

  • Web resources The Web is full of information that the robot can retrieve to improve the user interaction experience. In this implementation, the weather forecast was chosen as an example: the robot uses the HTTP protocol to retrieve the proper information from a dedicated web site.
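The local caching behaviour of the Text-to-speech module (Sect. 4.1.3) can be sketched as follows; the endpoint URL and request parameters are placeholders and do not reflect the real Acapela VaaS API.

```python
# Sketch of local caching for synthesized speech: sentences already converted
# are reused from disk, so the cloud VaaS is contacted only for unknown text.
# Endpoint URL and request parameters are placeholders, not the real Acapela API.
import hashlib
import os
import requests

CACHE_DIR = '/tmp/kubo_tts_cache'
TTS_ENDPOINT = 'https://vaas.example.com/synthesize'   # placeholder URL

def speak(text, lang='it', voice='female'):
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha1((u'%s|%s|%s' % (lang, voice, text)).encode('utf-8')).hexdigest()
    path = os.path.join(CACHE_DIR, key + '.mp3')
    if not os.path.exists(path):                        # unknown sentence: query the cloud
        resp = requests.post(TTS_ENDPOINT,
                             data={'text': text, 'lang': lang, 'voice': voice})
        resp.raise_for_status()
        with open(path, 'wb') as f:
            f.write(resp.content)                       # cache the MP3 for later reuse
    os.system('mpg123 -q "%s"' % path)                  # playback via an external player
```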

5 Results

The authors provide here some technical results about the performance of the navigation and speech interactions. During the experiments, KuBo performed 94 navigation tasks. The success and failure rates are computed by analysing navigation log files, which were updated every 4 s or on any change in the robot's state. The number of successes counts the transitions from the RUNNING state to the SUCCEEDED state, while a transition to the FAILED state increments the total number of failed navigation tasks. Figure 3 shows the positions of KuBo during the navigation tasks that succeeded, for one day of the experimental sessions.

The results, presented in Table 2, show that the success rate of the navigation tasks is 86.2 % while the failure rate is 13.8 %.

Fig. 3

KuBo positions during successful navigation tasks for one day of the experiment

Table 2 Results for navigation tasks during the experimental tests in a real environment

For safety reasons, the velocity of KuBo is limited to 0.2 m/s and the effective robot velocity, computed as the mean value during the RUNNING state, is about 0.13 m/s. The velocity is not constant within a single navigation task, due to the complexity of the route and the presence of obstacles on the way.

By means of speech interaction, performed in Italian, the user can activate four robot activities. The user can move the robot between rooms by saying "move" or "go" plus the name of the room, can confirm a reminder event with the phrase "thank you", and can ask for information about the time, the day, and the weather forecast. The robot is also able to react to general greetings like "hello" or "what's your name". The Google Speech Recognition API produced good results during the use case (see Table 3), with a perfect recognition rate for words like "thanks" or sentences like "what time/day is it". Interesting considerations emerge from the failure cases. Since this service is intended to be used by smartphone and tablet applications, several utterances are translated into web entities, company names and geographical locations. The utterance "Ciao KuBo" (84 % success rate) often produced the output "Yahoo", while asking for the weather forecast produced, on rare occasions, a URL. Some words, like "KuBo" or "Casa" (home), are turned into locations like Cuba, Cannes or Cagliari.

Taking these results into account, a speech recognition module that uses such a cloud resource has to handle these cases in order to provide a better interaction experience with the user.
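One possible mitigation, sketched below as an assumption rather than the authors' implementation, is to map frequent misrecognitions back to in-vocabulary keywords before the dictionary lookup, optionally with a fuzzy string match.

```python
# Assumed post-processing of the speech recognition output: known
# misrecognitions are replaced, then the closest in-vocabulary entry is chosen.
import difflib

ALIASES = {
    u'yahoo':    u'ciao kubo',   # observed misrecognition of "Ciao KuBo"
    u'cuba':     u'kubo',
    u'cannes':   u'casa',
    u'cagliari': u'casa',
}

VOCABULARY = [u'ciao kubo', u'kubo', u'casa', u'grazie', u'meteo']

def normalize(transcript):
    text = transcript.lower().strip()
    for wrong, right in ALIASES.items():
        text = text.replace(wrong, right)
    # fall back to the closest in-vocabulary entry when no exact match exists
    match = difflib.get_close_matches(text, VOCABULARY, n=1, cutoff=0.6)
    return match[0] if match else text
```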

Table 3 Results of the Google speech recognition service during the use case experiment

In addition, some qualitative results emerge from the TAL method: the users' verbalizations were transcribed and then analysed. The first step was to group the raw text data in order to identify categories. The outcome of this analysis is the definition of five categories:

  • Aesthetics The aesthetic attractiveness of the robot to the user. Regarding KuBo's aesthetics, the general impression is positive: according to the elderly users' answers, the colours are judged enjoyable and the robot is small enough to be used in an indoor environment. Furthermore, the shape of KuBo reminds older users of a coffee table, a piece of furniture that fits naturally into interior design.

  • Anxiety Negative emotional reactions evoked when the person uses the system. The participants reported being relaxed during the interaction with KuBo, because it is judged easy to use. The users had no anxious reactions, since they perceived themselves as having control over the robot, which looks like a small piece of furniture. This outcome also suggests that the effective robot velocity is adequate for domestic use by the elderly.

  • Reliability The user's feeling about the robustness of the system. The KuBo robot is judged sufficiently reliable because it appears robust, as confirmed by the high success rate. Moreover, according to the elderly people, the appearance of the robot communicates its functions.

  • Ease of use The ease of the interaction modalities. The elderly perceive the vocal-command interface as easier to use than the graphical one: they understood well which vocal commands to use in order to interact with KuBo. The tablet introduces some difficulties, since they are not used to it and, in addition, need to wear prescription glasses. The high confidence value of the speech capabilities may positively influence the perceived ease of use.

  • Utility The usefulness of the services in daily life. The participants consider the robotic capabilities very useful for improving their independence and safety in daily life, because physical and cognitive impairments can arise with old age. Lastly, the participants claim to have fun using KuBo and are well disposed to using it in the future, because they think that the KuBo services could help overcome loneliness. The high number of activations of the robotic services shows the users' willingness to use the system.

Fig. 4

Robot positions during the 13 failed navigation tasks

6 Discussion

Considering the performance in the navigation tasks, the quantitative results show a failure rate of 13.8 %. Although this value is not very high, the failures are concentrated in certain areas, because they are strictly correlated with KuBo's self-localization errors. In particular, the robot positions during FAILED states are concentrated in the kitchen (41.67 %) and in the dining area (25 %), which is contiguous to the kitchen (see Fig. 4).

The experimental phase in the real environment highlights some computational limitations affecting the navigation performance. These high error values are due to the presence of tables and chairs, typical furniture in these rooms, and to the low computational power of the robot's PC.

The possibility of exploiting cloud solutions that provide more computational power for this task will be investigated in future research. Such a solution is more advisable than replacing the robot's hardware, because cloud resources are cheaper, shareable, and reliable.

Concerning the cloud functionalities, one of the main constraints depends on the delay sensitivity of the tasks (Hu et al. 2012). The choice between a stand-alone and a cloud architecture should be based on the maximum acceptable delay in service delivery, which is strictly correlated with the computational abilities and with the data rate of the communication technology involved (e.g. the average value for LTE is 45 Mb/s whereas for home ADSL it is 10 Mb/s). A cloud robotics architecture should therefore be designed by considering the optimal trade-off between the distribution of the resources, the computational capabilities, and the performance of the tasks.
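As a back-of-the-envelope illustration of this trade-off, the sketch below compares the transfer time of a few payload sizes over the two data rates mentioned above; the payload sizes are illustrative assumptions.

```python
# Back-of-the-envelope comparison of offloading delay over LTE (~45 Mb/s)
# versus home ADSL (~10 Mb/s). Payload sizes are illustrative assumptions.
def transfer_time_s(payload_megabytes, link_mbps):
    return payload_megabytes * 8.0 / link_mbps          # MB -> Mb, divided by Mb/s

for payload_mb in (0.1, 1.0, 5.0):                      # e.g. a laser scan, an image, a map chunk
    lte, adsl = transfer_time_s(payload_mb, 45.0), transfer_time_s(payload_mb, 10.0)
    print('%.1f MB: LTE %.2f s, ADSL %.2f s' % (payload_mb, lte, adsl))
```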

In this use case, the users expressed neither positive nor negative comments about delays in the tasks. This suggests that the technical performance is acceptable from the user's point of view. Indeed, the speech capabilities have a high success rate, as reported in Sect. 5. Since Google Speech Recognition is designed to be used mainly in mobile applications, the outputs are sometimes related to web resources, company names and geographical locations. The development of a recognition module for assisted living has to take this outcome into account in order to provide a better interaction experience with the user.

In addition, the KuBo services meet the users' needs because they were defined with the involvement of 19 elderly people in order to develop a robotic system according to the final users' requirements; in fact, the two participants reported that the KuBo system was really useful for them. Regarding the characteristics of the robot, the Aesthetics category is an important acceptability factor, because the appearance of KuBo is aesthetically pleasing for the participants. Furthermore, according to the elderly persons' comments, the prototype could be easily integrated into a domestic environment, since it is small and its shape reminds them of a coffee table. Moreover, the users' feelings about the robot's capabilities, captured by the Reliability category, were evaluated positively by the older users: in fact, they might not use a robot if its functionalities and capabilities are perceived as useless, dangerous, or not well performing (Klamer and Ben Allouch 2010). Concerning the Anxiety category, the participants reported not feeling any negative emotional reactions when using the KuBo system, and according to other studies (Heerink et al. 2009), a high degree of acceptability is correlated with a low level of anxiety. The KuBo system is judged easy to use, and the elderly participants say they are disposed to use it in the future because they think that the services are useful for overcoming loneliness.

7 Conclusion

In this paper, the authors described a robotic platform which provides cloud robotics services in a domestic environment.

From a technical point of view, the experiments demonstrate the technical feasibility of a cloud robotics solution to extend the interaction abilities of the robot.

Indeed, the cloud robotics approach makes it possible to increase the skills of a robot in a modular way, endowing the system with text-to-speech and speech recognition abilities for human interaction, smart environments for additional sensing, and access to internet resources.

Additionally, the experience with the elderly couple suggests a promising acceptance of the KuBo system. This encourages future tests, extending the use-case approach to a pilot-site methodology that will involve more users.

A video about the use case experiment is available at: https://youtu.be/uMjp8vN4MF8.