1 Introduction

The healthcare field is adopting mobile devices to provide innovative healthcare services. Mobile health (mHealth) solutions are developing very fast and, in recent years, have become “the new edge of healthcare innovation” [1]. This new approach does not intend to replace health professionals, but rather to include them in the process as supporting and managing elements [2]. mHealth is changing the way health professionals engage in their daily work but, on the other hand, these new solutions depend on their acceptance by healthcare customers (e.g., patients and caregivers). mHealth solutions should meet the users’ specific needs and interests in order to become truly integrated into their everyday lives, thus being a major vehicle for their involvement in their healthcare process. The future of healthcare should be one in which the healthcare customer is involved, which is what s/he has always wanted to be [3], getting an accurate picture of the many factors that affect her/his health. Moreover, health customers are more likely to stick with mobile apps recommended by professionals, resulting in a higher involvement in their healthcare [4]. These points work as drivers towards a new model of health professional-patient collaboration.

Nowadays, people generate large amounts of health-related data, from mHealth apps and personal fitness trackers to electronic medical records and clinical research [5]. However, a large part of these data is underutilized or discarded, and the vast majority of healthcare customers do not even have access to their data, although they want personalized, transparent, integrated, and high-quality care [5]. mHealth solutions are likely to allow the collection of data that can contribute to improving evidence-based practice and research, creating conditions to “deliver highly personalized healthcare in general and suitable intervention for patients to manage their chronic conditions in particular” [6]. The integration of cognitive computing into healthcare systems can now help to cope with health data that were previously “inaccessible”, producing a high impact on the healthcare field. The adoption of cognitive computing services brings the model to a new level of collaboration, namely between people and technology, with the goal of transforming healthcare on a global scale [5, 7].

The combination of mHealth with cognitive services that understand, reason, and even learn may help people expand their knowledge base and improve professionals’ productivity, while deepening and redefining a patient’s path to better health. Besides the obvious application to clinical and decision-making support, a major focus of cognitive computing has been the user experience, being applied to “help with patients’ understanding of their conditions, how they can manage their condition and treatment, and potential consequences of procedures” [5]. Motivated by these perspectives, this paper presents a model for the integration of cognitive-based services into mHealth solutions, in order to bring them closer to patients and their caregivers, leveraging the collaboration with health professionals while providing tailored information through appropriate interfaces (see Fig. 1).

Fig. 1. Personalized cognitive mHealth apps provide tailored information.

This work is based on the experience obtained with the implementation of the OnParkinson mHealth solution, which constitutes a case study on the usage of IBM Watson cognitive services. Starting from a study of the needs of the triad “people with Parkinson’s Disease (PD), their caregivers and health professionals”, OnParkinson was designed with a focus on empowering this triad, helping both patients and caregivers to better manage PD.

The paper is organized as follows. Section 2 presents the OnParkinson solution. Section 3 gives a brief overview of relevant cognitive platforms and the services that can be used and integrated into an mHealth solution to help find knowledge pertinent to health consumers’ clinical issues. The experience obtained with the integration of IBM Watson services into OnParkinson is detailed in Sect. 4, while Sect. 5 proposes a model for the integration of cognitive services and discusses the notion of a distributed health-related community as a collaborative cognitive system. Finally, conclusions and future work are presented in Sect. 6.

2 The OnParkinson mHealth Solution

The OnParkinson mHealth solution [8] is a platform in which a mobile app is the main interface, providing patients and their caregivers with self-management capabilities to help them feel empowered in their ability to find strategies in a more informed and collaborative way (see Fig. 2). The platform includes a server that contains a central repository where all data are stored. The platform also intends to optimize therapy outside the clinical context, with remote support from the health professionals, who are provided with an exclusive Web interface that works as a complement to the mHealth app. The Web app integrates a more complete and interactive dashboard than the mobile interface, allowing better monitoring and remote support of the therapy outside the clinical context. An essential feature of this Web app is an interface through which the health professional can add new therapeutic exercises and build programs based on them for her/his patients. The app also includes a shared “calendar” with reminders of medication, appointments, or other events/notes that are important to the user, enabling better self-management, which can also be supervised if required. The first version of the OnParkinson app also integrated a module with frequently asked questions (FAQ) related to the management of the disease and strategies for the self-management of symptoms.

Fig. 2. OnParkinson mHealth solution.

The mHealth prototype was tested by different users, at different times, and for different purposes. A preliminary user study was conducted to validate the OnParkinson concept and the potential needs of the triad’s users. User tests were carried out with participants from the Portuguese Parkinson’s Disease Association (Associação Portuguesa de Doentes de Parkinson - APDPk), including people with PD, caregivers, and health professionals (physiotherapists and speech therapists), in order to evaluate the acceptance of the modules and functionalities included in this first version [8, 9]. This first evaluation study showed that the OnParkinson development was on the right path, with a high acceptance by the potential end-users. However, in order to improve the user experience and engagement, the app should provide personalized services. One of these required personalized services is to provide the users with reliable information about PD. The FAQ module was not enough for the users’ needs, and a Question and Answer system that automatically answers questions about PD asked in natural language was required.

The OnParkinson solution, developed within the MAiThE project [10], addresses two important societal challenges, which reflect the policy priorities of the Europe 2020 strategy. The challenge “Health, demographic changes and well-being”, also included among the United Nations challenges for sustainable development 2030, is addressed since the project aims to develop personalized mHealth apps tailored to the specific needs of their end-users (patients, caregivers, and health professionals). A second challenge, “Europe in a changing world - Inclusive, innovative and reflective societies”, is also addressed, since the use of apps based on mobile devices naturally enhances citizens’ feeling of inclusion. In particular, they contribute to a more inclusive society by empowering individuals suffering from a pathology, as well as their caregivers. These apps are also expected to contribute to a more reflective society by engaging users and contributing to their awareness and understanding of their condition through personalized information.

3 Cognitive Technologies for Question and Answer Solutions

Cognitive technologies attempt to reproduce aspects of human thinking, adding the ability to handle large amounts of information without bias [11]. Cognitive solutions use advanced reasoning, predictive modeling, and machine learning techniques in order to: (i) search and analyze data from multiple sources, (ii) process natural language, and (iii) integrate the feedback/knowledge of distinct users. Cognitive services offer computer vision, speech, translation, text analytics, and data analysis as cloud-hosted APIs. The IBM Bluemix Cloud Platform [12], Google Cloud [13], Microsoft Azure [14], and Amazon AWS [15] are good examples of platforms providing such hosted APIs.

A Question and Answer (Q-A) solution can use cognitive technologies to automatically answer a question posed in natural language [16]. The aim of a Q-A system is to deliver short, succinct answers instead of overloading users with a large number of irrelevant documents. An extensive study of several Q-A systems can be found in [17].

Due to the fast development of cognitive services, it is becoming possible to implement a Q-A solution by interconnecting distinct cognitive services. Examples of base services that can be useful for implementing a Q-A solution include the following:

(i) Cognitive Search: A service that enables insights to be extracted from large amounts of structured and unstructured data. It ingests, enriches, and indexes massive amounts of data from a variety of sources and offers a powerful query language as well as a natural language query capability to return contextualized, ranked answers at scale. For instance, the Watson platform provides this service under the name of Discovery (a usage sketch is given after this list). The Amazon AWS and Microsoft Azure platforms do not offer this service directly; however, they claim to provide services (Amazon Elasticsearch Service and the Bing Web Search API, respectively) that allow this feature to be implemented [18].

(ii) Chatbot: A service that provides components to build conversational bots. The Amazon platform solution, for instance, offers the possibility to maintain a conversation using voice or text.

(iii) Speech to Text: A service that provides a way to transcribe human speech correctly. The Google Cloud solution supports 120 languages and variants, while IBM Watson supports just seven languages. However, Watson allows customizing a recognition model to improve the accuracy for specific language and content.

(iv) Text to Speech: A service that converts written text into spoken words. IBM Watson supports seven distinct languages and 15 voices, while Google offers more than 30 voices and Microsoft more than 70 voices.

(v) Translation: A service that offers dynamic translation of text between language pairs. Some of these solutions allow customization. For instance, Watson supports three types of customization (forced glossary, parallel phrases, and corpus-level customization), and Microsoft enables customers to build a translation system tuned to their terminology and style, using the Translator Hub.

(vi) Natural Language Processing: A service that allows applications to understand what a person wants in her/his own words. It typically uses machine learning to allow developers to build applications that can receive user input in natural language and extract meaning from it. This service is usually integrated into the implementation of chatbots, directly or indirectly. Watson Assistant is an example where this service is offered. This feature is not offered directly by Azure, but tutorials are available on using LUIS with the Bot Framework to build a chatbot (see [19] for an example).
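To illustrate how such cloud-hosted services are consumed, the following sketch submits a natural language query to a cognitive search service, here Watson Discovery through its Python SDK. The API key, service URL, and environment/collection identifiers are placeholders, and the exact calls may differ between SDK versions; this is a minimal sketch rather than the OnParkinson implementation.

```python
from ibm_watson import DiscoveryV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials and identifiers, obtained from the Bluemix/IBM Cloud console.
discovery = DiscoveryV1(version="2019-04-30",
                        authenticator=IAMAuthenticator("YOUR_API_KEY"))
discovery.set_service_url("YOUR_DISCOVERY_URL")

# Submit a question in natural language and retrieve the top-ranked documents.
response = discovery.query(
    environment_id="YOUR_ENVIRONMENT_ID",
    collection_id="YOUR_COLLECTION_ID",
    natural_language_query="What are the first symptoms of Parkinson's disease?",
    count=3,
).get_result()

for result in response.get("results", []):
    # Available fields depend on the ingested documents; the confidence score
    # is reported in the result metadata.
    print(result.get("title"), result.get("result_metadata", {}).get("score"))
```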

Table 1 summarizes the services provided by these cognitive platforms.

Table 1. Cognitive services from IBM, Amazon, Google and Microsoft.

4 The Experience with Watson Services

The IBM Bluemix Cloud Platform was selected as the provider of the cognitive services required to implement the Q-A function in the OnParkinson solution. One of the requirements concerning the Q-A feature was to provide a speech user interface, i.e., a patient should be able to pose her/his questions orally and listen to the answer. As the OnParkinson solution was first developed for the Portuguese market, the solution should provide interaction in the Portuguese language. Another important issue was the supported programming languages, since we were adding the new module to an existing mobile app. The IBM Bluemix Cloud Platform supports the Java, node.js, and PHP programming languages. Moreover, Watson services had already been used with success in health research projects [5, 20, 21], which gave us some confidence.

The following IBM Watson services were used and integrated into the OnParkinson platform: (i) Text to Speech, (ii) Speech to Text, and (iii) the Discovery module (see Fig. 3). The Discovery service is the core of the Q-A solution, allowing both private and public documents to be searched, with the response produced through Watson’s cognitive reasoning process. The reasoning system should be trained for more accurate results.

Fig. 3. The OnParkinson architecture integrating Watson services.

The Q-A function in the OnParkinson app is called Ask. This feature includes the possibility to write or dictate a question. The results obtained can be read, or the user can opt to listen to the retrieved documents being read aloud (see Fig. 4; a sketch of this chain of services is given after the figure). The main element that makes the Q-A function available is the Watson Discovery service. However, the performance of the system depends on: (i) the corpus of knowledge, in which Discovery will look for the answer, and (ii) the “training” of the Discovery system. As explained before, the Discovery system uses reasoning techniques to look for a suitable answer to the question posed. In the healthcare field, the delivered information must be reliable, since this is a very sensitive area, where incorrect or inappropriate information may compromise patient safety.

Fig. 4. OnParkinson’s mobile interfaces: Ask button; post question (oral or written); receive answers.
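To make the Ask flow shown in Fig. 4 more concrete, the sketch below chains the three Watson services for a dictated question: the audio is transcribed, the transcription is submitted to Discovery, and the best-ranked passage is synthesized back into speech. The credentials, collection identifiers, and the Brazilian Portuguese model and voice names are assumptions made for illustration; they do not necessarily reflect the configuration used in OnParkinson.

```python
from ibm_watson import DiscoveryV1, SpeechToTextV1, TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials; in practice each service has its own key and URL.
stt = SpeechToTextV1(authenticator=IAMAuthenticator("STT_API_KEY"))
stt.set_service_url("STT_URL")

tts = TextToSpeechV1(authenticator=IAMAuthenticator("TTS_API_KEY"))
tts.set_service_url("TTS_URL")

discovery = DiscoveryV1(version="2019-04-30",
                        authenticator=IAMAuthenticator("DISCOVERY_API_KEY"))
discovery.set_service_url("DISCOVERY_URL")


def ask(audio_path: str) -> bytes:
    """Return the spoken answer (audio bytes) for a dictated question."""
    # 1) Speech to Text: transcribe the dictated question (Portuguese model assumed).
    with open(audio_path, "rb") as audio:
        transcription = stt.recognize(
            audio=audio, content_type="audio/wav", model="pt-BR_BroadbandModel"
        ).get_result()
    question = transcription["results"][0]["alternatives"][0]["transcript"]

    # 2) Discovery: retrieve the passages that best answer the question.
    answers = discovery.query(
        environment_id="ENV_ID", collection_id="COLL_ID",
        natural_language_query=question, passages=True, count=3,
    ).get_result()
    passages = answers.get("passages", [])
    answer_text = passages[0]["passage_text"] if passages else "No answer was found."

    # 3) Text to Speech: read the answer back to the user (voice name assumed).
    return tts.synthesize(
        answer_text, voice="pt-BR_IsabelaVoice", accept="audio/mp3"
    ).get_result().content
```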

Therefore, in this case study, all documents provided to the Watson system were selected by two experts on Parkinson’s disease. The following methodology was applied in order to train the reasoning system:

  • A set of main questions (180 questions in Portuguese) about Parkinson’s disease was compiled, within the scope of a research study in physiotherapy.

  • To create the corpus of knowledge, a set of documents with relevant information about Parkinson’s disease was uploaded.

  • The 180 questions were used to query Watson twice. In each iteration, the 180 questions were posed and, for each question, the system retrieved a set of relevant documents with information that, according to the reasoning system, answers it. The expert scored each retrieved document according to its degree of fit to the question posed (where a score of 5 indicates maximum relevance).

The “training” process must be performed directly in the Bluemix platform. Initially, the Retrieve and Rank service was used instead of the Discovery service, which allowed the answers to be scored directly by the end-user, but that service is no longer available.
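Although in this case study the training was performed directly through the Bluemix tooling, the Discovery V1 API also offered a relevance-training endpoint. The sketch below shows how one expert judgment, a question together with the scored documents retrieved for it, could be submitted programmatically; the identifiers and the mapping of the 0-5 expert scale to Discovery’s integer relevance values are assumptions.

```python
from ibm_watson import DiscoveryV1
from ibm_watson.discovery_v1 import TrainingExample
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

discovery = DiscoveryV1(version="2019-04-30",
                        authenticator=IAMAuthenticator("YOUR_API_KEY"))
discovery.set_service_url("YOUR_DISCOVERY_URL")

# One of the 180 Portuguese questions and the expert's 0-5 scores for the
# documents retrieved for it (document ids and scores are illustrative).
question = "Quais são os primeiros sintomas da doença de Parkinson?"
expert_scores = {"doc-apdpk-001": 5, "doc-apdpk-017": 2}

examples = [
    # Discovery expects an integer relevance value; the 0-5 expert scale is
    # simply rescaled here (an assumption, not the authors' mapping).
    TrainingExample(document_id=doc_id, relevance=score * 20)
    for doc_id, score in expert_scores.items()
]

discovery.add_training_data(
    environment_id="YOUR_ENVIRONMENT_ID",
    collection_id="YOUR_COLLECTION_ID",
    natural_language_query=question,
    examples=examples,
)
```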

The end-user testing of the Ask function has not yet been performed, since the system tests and the tests with the expert during the training process show that the information retrieved for some pre-defined questions is not satisfactory. From the analysis performed, and in comparison with previous studies from other researchers [21,22,23], the poor performance achieved with Watson Discovery may be due to the small amount of information loaded into Watson and the small number of training iterations. Another relevant issue is that the implemented solution does not offer information tailored to each end-user according to the user profile. Information for a patient should be presented differently from that presented to a health professional. Moreover, in order to tune the reasoning system, both patients and caregivers, as well as health professionals, should be able to score the obtained answers.

5 Collaborative Cognitive Services Model

From the experience with the Watson services in implementing a system that provides tailored information, a set of services was identified as mandatory. In Fig. 5, the Ask process is modeled using BPMN 2.0.

Fig. 5. The “Ask a question” process as a model for tailored information.

The model represents the tasks performed by the user and the ones executed by the mHealth system. The mHealth tasks are divided into two groups (a structural sketch of the overall flow is given after the list):

1. Cognitive External Tasks – Tasks where cognitive computing is used. Preferably, these tasks are implemented using external services such as the ones provided by the platforms referred to in Sect. 3.

    • Speech to Text – If the user chooses to dictate the question instead of typing it, a service to convert speech to text is called.

    • Translate to English – In order to increase the search space, the question is translated into English.

    • Translate to User Language – The answers obtained that are not in the user’s language are translated.

    • Discover Answers – The question is submitted, and a set of answers is obtained. To increase the search space, if the user’s language is not English, two questions are sent: one in English and one in the user’s language.

    • Score Answers – Used to give feedback about the relevance of the answers retrieved by Discover Answers. The score value is used by the machine learning algorithm (of the cognitive search service) to improve its performance.

2. Internal Tasks – Tasks implemented directly on the mHealth system. Although these tasks are not called cognitive tasks, some of them apply techniques commonly used in cognitive computing, such as reasoning and clustering.

    • Preprocess Questions – This task performs a preliminary treatment of the original question. According to the user profile, the question is “rewritten” in order to make it more standard. Some words are translated, in order to avoid foreign and dialect words.

    • Select and Format Answers – According to the user’s profile, an answer is composed and formatted (for example, if the user belongs to the short-answer profile, the answer is shortened).

    • Preprocess Score – According to the user’s profile, the score given by the user is analyzed. This score reflects the degree of satisfaction of the user with the obtained answer, not just regarding content but also regarding format. This value feeds the discovery algorithm and the personalization algorithm used in the Select and Format Answers task.
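To make the division between external and internal tasks more concrete, the sketch below outlines the Ask process of Fig. 5 in plain Python. The cognitive external tasks are left as stubs that would invoke cloud services such as those discussed in Sect. 3, while the internal tasks are given simple, purely illustrative implementations; the user-profile fields, normalization rules, and score weighting are hypothetical and do not correspond to a deployed OnParkinson component.

```python
from dataclasses import dataclass


@dataclass
class UserProfile:
    language: str        # e.g. "pt"
    role: str            # "patient", "caregiver" or "health professional"
    short_answers: bool  # presentation preference


# --- Cognitive external tasks: stubs for cloud services such as those in Sect. 3 ---

def speech_to_text(audio: bytes) -> str:
    return ""    # would call a Speech to Text service

def translate(text: str, target_language: str) -> str:
    return text  # would call a Translation service

def discover_answers(question: str) -> list[dict]:
    return []    # would call a cognitive search service (e.g. Discovery)

def score_answers(question: str, relevance_by_document: dict[str, int]) -> None:
    pass         # would feed relevance scores back to the search service


# --- Internal tasks: implemented directly on the mHealth system ---

def preprocess_question(question: str, profile: UserProfile) -> str:
    """Rewrite the question into a more standard form (hypothetical normalization rules)."""
    replacements = {"parkinsonismo": "Parkinson"}
    for word, standard in replacements.items():
        question = question.replace(word, standard)
    return question.strip()

def select_and_format_answers(answers: list[dict], profile: UserProfile) -> str:
    """Compose and format an answer according to the user's profile."""
    if not answers:
        return "No answer found."
    text = answers[0].get("text", "")
    if profile.short_answers and len(text) > 280:
        return text[:280] + "..."  # shorten for users with a "short answer" profile
    return text

def preprocess_score(raw_score: int, profile: UserProfile) -> int:
    """Adjust the user's score before it feeds the learning algorithm (purely illustrative)."""
    weight = 2 if profile.role == "health professional" else 1
    return raw_score * weight


def ask(question: str, profile: UserProfile) -> str:
    """Orchestration of the Ask process modeled in Fig. 5."""
    question = preprocess_question(question, profile)
    # Widen the search space: query in the user's language and in English.
    queries = {question}
    if profile.language != "en":
        queries.add(translate(question, "en"))
    answers = [answer for query in queries for answer in discover_answers(query)]
    # (Answers not in the user's language would be translated back here.)
    return select_and_format_answers(answers, profile)
```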

The use of cognitive services can be the basis for creating personalized mobile apps that engage collaboration in healthcare in two ways (see Fig. 6):

Fig. 6. Cognitive-based collaborative network for healthcare.

(A) Providing an effective method for patients, caregivers and health professionals to collaborate on building personalized healthcare services and to share knowledge and information.

(B) Providing a way to build knowledge about a health topic collaboratively, since each user, as a cognitive entity, can contribute efficiently to optimizing the quality of the information that is delivered (the score of each end-user should be pre-processed to obtain the general score that feeds the machine learning algorithm).

In fact, we can go further and consider the users and the mHealth platform as being part of a cognitive-based collaborative network for healthcare [24, 25], providing a hybrid-augmented intelligence in which the end-users and the cognitive computing based apps are cognitive entities that work collaboratively to improve health services and create knowledge.

6 Conclusion and Future Work

The study presented in this paper aims to (i) analyze how cognitive services can be integrated into mHealth solutions to provide end-users with tailored information through appropriate interfaces and interaction, and (ii) provide a model to support collaborative engagement in healthcare, where knowledge about health is built collaboratively by end-users and cognitive computing based apps.

The cognitive services model is proposed based on the experience obtained with the integration of Watson cognitive services into the OnParkinson mobile solution. The model identifies which services should be integrated in order to obtain a cognitive-based, tailored Q-A system. Moreover, the model defines a set of “internal tasks” that must be implemented to provide personalized information. Another relevant issue, identified through the experience of using Watson, is the importance of the scoring process for fine-tuning the machine learning algorithm. How the score of each end-user should be pre-processed before being “injected” into the machine learning algorithm is still an open issue in our research. Further developments aim to better explore the notion of a distributed health-related community as a collaborative cognitive system.