
User-Centered Design Approaches and Methods for P5 eHealth

  • Stefano Triberti
  • Eleonora Brivio
Open Access
Chapter

Abstract

As seen throughout this book, eHealth informed by the P5 approach gives full recognition to patients’ contexts, needs, desires, and personal characteristics. These aspects should not only be considered cornerstones for technology evaluation, but fundamental guidelines for design in the first place. This relates to User-Centered Design, that is, any technology/service design in which final users influence how the design itself takes place. In other words, eHealth development should be based on research data gathered among final users about their needs and contexts of use, so that solutions are tailored to final users even before the realization of low-level prototypes. This methodological contribution offers a critical description and evaluation of research tools to be employed not to evaluate a technology’s results and effectiveness, but the specific characteristics of users, in order to orient design and development. Such an approach should be considered the “gold standard” of P5 eHealth solutions.

Keywords

P5 medicine · eHealth · User-Centered Design · User Experience · Data-driven design · Chronic diseases

1 Introduction

As seen throughout this book, the P5 approach to healthcare technology calls for devices that are preventive, predictive, personalized, participatory, and sensitive to the psycho-cognitive aspects of both the interaction with technology itself and health issues (Gorini and Pravettoni 2011; Pravettoni and Gorini 2011). The previous contributions have explored these concepts in depth through historical, theoretical, and methodological information: for example, it has been shown that innovative devices (e.g., wearable technology and/or Ambient Intelligence applications) may help to detect and analyze not only the progress of disease, but also patients’ states in terms of emotional activation, observable behavior, and subjectively reported preferences. Moreover, previous contributions explained how technological devices can be based on personal characteristics, both in their interactive physical properties (i.e., to promote effective ergonomics) and in the content of digital stimuli (Vergani et al. 2019).

In general, it has been said that eHealth tools should be tailored to patients’ characteristics in order to be deeply effective and obtain desirable results in an acceptable amount of time (cf. Chap. 1). Such a message can be difficult to understand or to translate into practice for those stakeholders who are interested in the development of eHealth tools, but who are not experts in the design of technology. The idea of “tailoring technology to users” may look like a mere figure of speech with no actual (or no practical) impact on health technology development and implementation. What does it actually mean to tailor technology to its users? There are at least four possible answers readers may have considered while reading this book and considering the P5 approach to eHealth:
  1. Technology developers should keep in mind the fundamental characteristics of the diagnosis and of its consequences on everyday life (both physical and psychological), in order to design tools that can be used effectively without causing harm.

  2. The eHealth tools of the future should include personalization features and pleasant/engaging aspects, so that users can develop a positive attitude toward use and possibly be driven toward a higher rate of acceptance and adoption in the long term (cf. Chap. 4).

  3. The eHealth tools of the future should be developed with features allowing modifications at later stages of interventions, according to users’ feedback.

  4. Technology developers should keep themselves up to date with the scientific literature on the development and effectiveness of eHealth solutions, in order to address issues that are known within the literature, especially regarding factors that may promote or hinder acceptance and adoption among users.

All these possible answers feature directions that are very important and certainly deserve eHealth developers’ and stakeholders’ consideration; however, none of them could be considered sufficient from the point of view expressed by the P5 eHealth approach and its indications for technology design.

Specifically, the first response has merit because it considers the medical characteristics of chronic diseases and the fact that they can have important influences on everyday life, both physical and psychological. However, it is not only the pathology that influences effective human–technology interaction, acceptance, or adoption in the long term: patients are not “only patients.” Although an eHealth tool may be designed to be used by patients whose lifestyle is influenced by the onset and continuous presence of a chronic disease, personal preferences, habits, and behaviors can also get in the way of effective usage, or the technology may be inadequate to contextual factors that have nothing to do with users’ health status. Moreover, at an organizational level, it is not realistic to expect that technology designers will develop health professionals’ specific knowledge and competences, nor that health professionals will be fully employed in the design and development of technology.

The second response gives more weight to the psychological aspects of interaction, but fails to give specific guidelines for including these in the design/development process; moreover, it seems not to consider that psychological aspects such as preferences and positive emotions are transient and may change over time, possibly making eHealth solutions no longer adequate to users in the long term.

The third response takes some steps further by acknowledging the iterative nature that an evolved conception of technology design often features: certainly, eHealth design should take into consideration that modifications may become necessary at every step of implementation and interaction, so that the design process does not end with the final prototype. However, this response does not specify which kind of information one should consider for modifying or redesigning technology; is “users’ feedback” sufficient? The next pages will show that the answer to this last question is not so simple.

Finally, the fourth answer points out one very important aspect, that is, that eHealth development should be based on scientific information, not only on the designers’ skills and creativity. However, it is important to appreciate that not even the scientific literature is sufficient to inform design; indeed, scientific results are necessarily based on samples that tend to overlook individual characteristics and differences. Of course, eHealth development needs to be based on general guidelines but, in order to be truly effective and to increase its chances of adoption, it should be able to consider specific cases and fine-grained practical information that often is not included in scientific reports.

In conclusion, “tailoring technologies to users” is a guiding concept that arguably includes all the considerations outlined above, but it should go beyond them by including specific guidelines for practice. Therefore, this chapter aims to outline methodological considerations for translating this concept into practice, and to give specific information on how such “tailoring” can be enacted in an eHealth project, from the very first steps of design to final implementation. More specifically, the proposal of this chapter relies on the methods of so-called User-Centered Design.

2 From Ergonomics to User-Centered Design

In our opinion, the best way to introduce the concept of User-Centered Design is to locate it in the history of technology evaluation. Indeed, technologies (especially those designed to be used in the healthcare context) always need to be evaluated; that is, it must be demonstrated whether or not they are able to (help human users to) achieve their aims.

Historically, the so-called Scientific Study of Work, which emerged during the Industrial Revolution, can be considered the first organized way to evaluate technologies (Nickerson 1999; Triberti and Brivio 2017): its aim was to analyze human work (often mediated by industrial technologies) in order to divide it into simple actions that could be taught to workers to improve productivity.

Subsequently, ergonomics (mostly in Europe) and Human Factors (mostly in the United States) emerged in the early 1900s as disciplines devoted to improving the physical/anatomical and cognitive aspects of technology-mediated work, in order to improve product quality and productivity, but also safety and, possibly, subjective satisfaction (Karwowski 2012; Sharit 2006; Wilson 2000).

In the 1980s, Usability arose as a “simplified” form of ergonomics focused on Industrial Design, namely, as a discipline interested in improving the interfaces of common-use objects, products, and services so as to make them easy to use and immediately comprehensible for customers, stakeholders, and users (Triberti and Brivio 2017): usability experts such as Jakob Nielsen and Donald Norman set the basis for the evaluation of the “things” and tools all of us use every day, and generated the idea that ease of use is more important for artifacts than other properties such as originality and aesthetic beauty (Nielsen 1999, 2003; Norman 2002).

However, more recent developments of the disciplines and methods associated with tools and technologies highlighted that ease of use is not the sole criterion to be taken into consideration when evaluating technology: User Experience (UX) is the umbrella term used nowadays to identify methodological approaches that recognize the role of additional factors such as emotion/affect/pleasure (e.g., people may keep using hard-to-use objects if they “love” them) and context (e.g., beyond the usability of interfaces, tools can be more or less adequate to the physical, social, or cultural features of situations of use) (Benyon et al. 2005; Hassenzahl 2008; Lee et al. 2008; Triberti and Brivio 2017).

This development led to the global recognition that the evaluation of technology should not take into account “functioning” only, but a multiplicity of criteria to quantify effectiveness: at least quality, safety, ease of use, emotions (positive or, if necessarily negative, manageable for users), adequacy to context (physical, social, cultural), and accessibility (e.g., whether the technology can be used by various populations, including people with disabilities).

Obviously, evaluating all of these characteristics in a technology, taking into account that partial or total redesign could be called for when one or more of the criteria appear insufficient, can be very costly in terms of time and resources (Herstatt and Von Hippel 1992; Pavelin et al. 2012). However, designers, developers, and stakeholders can consider important criteria for technology effectiveness in advance; in other words, the design itself can be based on user research data, and not only on preexisting ideas (about users, contexts, related activities, and the issue to be addressed by technology) or on the creativity and intelligence of the designers. Exactly this concept is at the core of User-Centered Design (UCD henceforth) (Garrett 2010; Lowdermilk 2013; Triberti and Brivio 2017; Triberti and Liberati 2014), a broad term that encompasses any design project in which users and user research influence how the design itself takes place. According to Garrett (2010), UCD can be depicted as a strategy that (in a “perfect” scenario) allows no possible issue or variable to escape the designer’s awareness. In other words, implementing UCD means that users should be involved from the very first steps of design in order to provide valuable information for the design itself, not just in the last steps of implementation to evaluate some already-developed prototype (Abras et al. 2004). Although the term UCD is relatively old (Norman and Draper first used it in the 1980s) (Norman and Draper 1986), it developed into a discipline in more recent times, consistently with the semi-standardization of methods and tools devoted to analyzing users’ needs before design.

The next section is devoted to introducing some typical UCD tools/techniques that can be adapted to serve eHealth design and development: taking into account the importance of the other aspects highlighted by the answers above (i.e., attention to literature, consideration of disease/illness-related issues, user engagement, iterative prototyping), the application of such techniques can help eHealth developers tailor technologies to their users, in order to ensure not only positive functioning, but also the implementation of the P5 approach as described in this book.

2.1 Interaction Factors

As Hesse and Shneiderman say (Hesse and Shneiderman 2007), during the pioneering days of eHealth, the question was often about what the computer could do; eHealth pioneers posed technical questions about computers, the Internet, and software’s capability to help patients keep track of their own medications/therapy and of their disease. During the next phase, crucial questions concern what people can do (and, we would add, what they cannot).

This concept is true in many senses, but in this section we focus on interaction aspects only. As previously said, the history of technology evaluation featured Usability as a discipline interested in developing interfaces that are easy for final users to understand and to use. Certainly, usability is a very important characteristic that is currently taken into consideration when evaluating the adequacy and effectiveness of eHealth solutions: a number of studies have been published focusing on methods and results regarding medical informatics usability (Gerdes et al. 2014; Goldberg et al. 2011; Vorderstrasse et al. 2016). However, many studies merely administer usability questionnaires to final users when the technology is already designed (as a prototype, or even as a final version), and it is often not clear whether these data will inform actual modification or redesign of the evaluated platforms.

Usability questionnaires (such as, e.g., the SUS (Brooke 1996)) are certainly useful tools to get, as the author says, a “quick and dirty” index of the ease of use of some system or application; nevertheless, they are not conceived to give a “full” usability evaluation. Let us say, for example, that one obtains an average-to-high usability value based on participants filling in a given questionnaire: what does this mean? It is only a general evaluation that participants performed by responding to general questions; no information has been provided about specific, more or less serious, more or less frequent system usage issues. Indeed, typical questions of such usability questionnaires are: “Were you able to use the system without effort?” or “State on a scale from 1 to 10 how much you felt able to achieve your goal by using the system”; obviously, responses to such questions constitute a general evaluation but do not account for specific issues that may prevent users from achieving their own objectives in real-life contexts.
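As an illustration of why such scores are only a coarse index, consider how the SUS is computed. The following sketch (the function name is ours, but the scoring rule follows Brooke 1996) shows that the final number aggregates ten general ratings into a single 0–100 value, discarding any information about specific usage issues:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 ratings."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded and
        # contribute (score - 1); even-numbered items are negatively worded
        # and are reverse-scored, contributing (5 - score).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    # The raw sum (0-40) is scaled by 2.5 to a 0-100 range.
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

Two very different patterns of answers can yield the same score, which is exactly why the chapter recommends complementing questionnaires with inspection and testing methods.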

For this reason, the correct way to examine usability is to implement specific research methods, which typically are divided into two categories, namely, usability inspection and testing methods.

Usability inspection refers to those methods performed by evaluators without the involvement of final users; these consist of guidelines and rules for analyzing interfaces systematically, in order to account for usability problems that may escape a general, nonspecialized exploration. Currently, the main usability inspection methods still in use are the cognitive walkthrough (which is based on exploring each function of the interface in sequence, reporting any possible problem encountered by a hypothetical user) and heuristic analysis (which is a global, holistic analysis of the interface).

The cognitive walkthrough (Kushniruk et al. 2015) entails a checklist to be followed by evaluators who put themselves in the shoes of users, accounting for each possible action that the user would take with the interface and signaling any possible mistake or interaction issue. By contrast, heuristic evaluation is based on a list of general criteria that interfaces should respect in order to guarantee effective usage. A number of heuristics lists are available, such as the generic (and probably most used) one by Nielsen (1995) and others for specific technologies or domains (Hermawati and Lawson 2016).
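As a minimal illustration, the findings of a heuristic evaluation are often recorded as a list of issues, each mapped to the heuristic it violates and a severity rating; the sketch below uses Nielsen’s conventional 0–4 severity scale, while the data structure, field names, and example entries are our own illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str   # e.g., "Visibility of system status" (Nielsen 1995)
    location: str    # where in the interface the issue appears
    severity: int    # 0 = not a problem ... 4 = usability catastrophe

def triage(findings, threshold=3):
    """Return the findings severe enough to block release, worst first."""
    blocking = [f for f in findings if f.severity >= threshold]
    return sorted(blocking, key=lambda f: f.severity, reverse=True)

# Hypothetical findings from inspecting an eHealth prototype.
report = [
    Finding("Visibility of system status", "sync screen", 2),
    Finding("Error prevention", "insulin-dose entry form", 4),
]
print([f.heuristic for f in triage(report)])  # → ['Error prevention']
```

Structuring findings this way makes it straightforward to feed inspection results back into redesign decisions across iterations.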

Another option (which can also be used in conjunction with inspection methods) is usability testing, that is, any evaluation technique that involves final users interacting with the interface in systematic and more or less controlled contexts, in order to identify usability issues through a critical evaluation of actual interaction. Usability testing can employ a number of methods and tools for registering usability issues, ranging from physiological signals, to interviewing the participants, to observation of behavior (e.g., counting the number of errors) (Smilowitz et al. 1994).

With regard to the evaluation of interaction factors, the main suggestion coming from the P5 approach is not to “resolve” such issues through basic evaluations such as a usability questionnaire alone; on the contrary, evaluation should be active in every phase of the development process. Specifically, usability inspection and testing methods can be applied at the different phases of interface conceptualization and prototyping, in order to address interaction issues in itinere (along the way).

2.2 Motivation and Emotion

User experience is not limited to usability. As explained in the sections above, User Experience (as a discipline) emerged when the role of additional factors was explicitly recognized. Indeed, it is not enough for an interface to be easy to use, especially if the aim is to promote long-term usage. If one plans to use technologies to change patients’ everyday life, in order to positively influence their lifestyle and care process, then eHealth resources should also be engaging, pleasant, or even self-actualizing.

According to the Positive Technology paradigm (Riva et al. 2016), technologies can be used to structure, augment, or replace users’ experience with digital resources, in order to improve their well-being in terms of emotion, connections, and meaning. More generally, in order to advance technologies’ ability in this sense, it is important to consider motivational and emotional factors. Indeed, it is more common for users to keep using (even in the long term) technologies that are not easy to use but that they love than the contrary. The following sections will explore motivational and emotional factors as important aspects to be considered both when evaluating and when designing technology.

2.2.1 Motivation in Design

“Motivation” generally refers to any mental feature that guides and promotes human goal-directed behavior. Assessing the motivation to use technology requires going beyond traditional conceptions of motivation, which held that people give more value to strictly physical or safety needs, and only after satisfying them consider relational or self-actualizing motives (e.g., having friends, succeeding in one’s own passions, spirituality, etc.). On the contrary, depending on current situations and personal goals, people may put high-level needs before basic ones. According to Self-Determination Theory (Ryan and Deci 2000), fundamental life “nutrients” involve creating and maintaining positive relationships with others, and feeling competent and autonomous. Hassenzahl and colleagues (Hassenzahl et al. 2010) developed motivation-focused interviews in order to consider users’ important motivations when designing or evaluating products and artifacts. In the field of user-centered design for eHealth interventions, one should consider that users’ needs may vary depending on the experience and perception of the long-lasting illness, and so do life projects and everyday activities (Triberti and Barello 2016). It is not advisable to design eHealth technologies considering only therapy outcomes and/or desired health states; on the contrary, if technologies are to be used effectively, they should be able to communicate their purpose as useful in terms of patients’ personal objectives (Triberti and Riva 2016).

2.2.2 Emotion in Design

“Emotional design” is an expression typically used to refer to design that is implemented to promote a pleasurable sensation in users. Two main approaches can be found in the literature: one more focused on pleasant, funny, creative features added to interfaces or external appearance (Jordan 2002), and the other related to engaging and fluent interaction (Hancock et al. 2005). Recent studies (Triberti et al. 2017) proposed to develop the concept of emotional design along three main lines:
  • The assessment of discrete emotions in ongoing interaction with technology to provide on-line modifications of interfaces (affective computing/affective design).

  • The focus on emotions as discrete cognitive processes instead of generic pleasant states, to promote even complex emotions or emotional nuances.

  • The analysis of users’ “emotional profiles” to tailor technologies on their preexisting emotional traits.

In other words, emotions should not be considered simply by-products of stimuli; rather, they can actively participate in the interaction and influence it. Also, designers should not take only positive emotions into consideration; for example, if one has to design an eHealth platform feature meant to signal dangerous situations to the patient (e.g., the need for insulin administration in diabetes), the aim is not to transform an urgent signal into a “positive” experience, which it is not. Rather, if previous emotion-focused research is available, designers and evaluators have important information on how to realize such an alarm so that it is recognized, understood, and managed as effectively as possible by the patient/user.

Research on emotional aspects can certainly be conducted using psychophysiological measures, but qualitative methods are also important in order to capture patients’ personal emotional experience, regarding both the illness/treatment experience and the technology itself.

2.3 Context

P5 eHealth must consider context as an important variable for UX and the delivery of care. In particular, the psycho-social aspects of the P5 approach are involved in this process. Any technological artifact, related to eHealth or otherwise, is made to be used by someone for a particular purpose, in a particular environment. The user’s purpose guides the artifact’s design first and later its use, which is situated in a precise space-time moment. The user experience is therefore—and perhaps above all—linked to the places and moments in which the user uses the artifact “in vivo.” In this sense, for the purposes of the design and evaluation of the user experience, the context of use must be taken into account.

Context is a difficult concept to define and can indicate: material elements of the environment in which the use takes place; relational elements, when artifacts mediate the relationships between people directly or indirectly; and semiotic and cultural elements (Galimberti 2011). Several of these aspects must be taken into consideration in the design phase; others, on the other hand, are more unpredictable and emerge from the interaction between artifact–user–context (Nardi 1996), and once these aspects are detected, they could be corrected and/or integrated into subsequent releases of the artifact to improve the users’ UX.

The Situated Action Theory (SAT) (Mantovani 1995, 1996; Suchman 1987, 1993) helps us understand that behavior, cognition, and higher-level contextual elements, such as cultural, organizational, and group settings, contribute to the interactional process between user and technology—and thus affect UX—at different levels: the context of use depends on the social context, the interpretation of the situation, and local interaction with artifacts. The first level, the social context, is seen as a repertoire of social norms within which actors must act. The second level considers daily experience, where the social context refers to specific situations in which the interests of social actors interact with the opportunities presented by the environment. The last level relates to the local interaction between person and artifact, which occurs in everyday situations, whenever the user uses the artifact to achieve their goals.

eHealth technologies are technological artifacts; therefore they are bound by cultural and social rules, by the situation in which they are used, and by how the aims and purposes of the users interweave with the previously mentioned aspects and with the material features of the artifact itself (e.g., the interface).

Norman (1993) wrote that it is not enough to focus individually on the situation, artifact, environment, or person: the users are not in a vacuum, but are located in a specific context. The symbolic order is achieved through action, which allows the interpretation of the situation, which in turn allows actors to use certain artifacts in certain situations. Context is built on a cultural and symbolic order (rules, laws, social habits, cultural norms) that preexists the user in interaction, but also contributes to the users’ activities and directs their goals: context helps the users make sense of their actions and interactions. Within a context, users are not alone, but interact with other actors and artifacts in order to accomplish their goals and plans. New meaning is generated by the interaction between subjects and artifacts. These meanings become part of and they partially modify the symbolic order, generating new meanings.

Material artifacts—such as objects, technologies, etc.—are readily available to people, but they are not exempt from important psychological and signification processes: if a technological artifact is inadequate for use within the rules governing the context, and does not meet the objectives of the users, it will produce an unsatisfactory user experience and will soon be abandoned. Since it is almost impossible to determine in advance, in a univocal way, the use that users will make of an artifact, it is necessary to carry out in-progress checks in order, where possible, to adapt the artifact to the emerging practices resulting from the interaction with social actors.

Robinson (1994) postulated the existence of three reference frameworks that people use to interpret artifacts. This interpretative process involves a user’s assumptions, knowledge, biases, and past experiences shaping their understanding of the world. The three frameworks that drive this process are (Robinson 1994): (a) the individual level: at this level, frameworks constitute personal constructs; our way of being influences the way we interface with and interpret the world; (b) the social level: our interpretative schemes are built with the people we interact with in different contexts (e.g., work, family), and these references also contribute to our understanding of artifacts; (c) the cultural level: this level refers to language, religion, visions of science and the world, and conceptions of space and time (e.g., seasons, cyclicality), which permeate our way of interacting with the people and objects around us.

In conclusion, context can be defined by three dimensions that can be ascribed to both Robinson’s frameworks and to the Situated Action Theory: individual and psycho-social aspects, cultural and relational aspects, material/physical aspects. The following paragraphs outline each of these dimensions and how eHealth may be affected.

2.3.1 Individual and Psychosocial Dimensions of Context

Gender, age, and culture are aspects that affect the user’s acceptance—in terms of adoption and speed of adoption—of technological artifacts. The Technology Acceptance Model (TAM) (Davis 1989) theorizes the process leading from the user’s intention to use a technology to its final adoption. The TAM explains the intention to use an artifact on the basis of perceived utility (i.e., the perception of obtaining a benefit from the use of that artifact) and perceived ease of use (i.e., the perception that using that artifact will require few resources). If there is a high level of perceived utility and a high level of perceived ease of use, the user will be likely to develop the intention to adopt that particular artifact. External variables, such as system characteristics, training, etc., do not directly affect the intention to adopt the system; rather, their effect is mediated by the two variables previously explained. Utility is itself influenced by perceived ease of use: the easier the artifact is to use, the more useful it is perceived to be (Venkatesh and Davis 2000). More recently, other factors have been added as antecedents to the intention of adoption and to the variables of perceived utility and ease of use, such as social influence (subjective norms, image, etc.), cognitive processes (relevance of the task, quality of the outputs, demonstrability of the results), and experience (Venkatesh and Davis 2000).
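The mediation structure of the TAM described above can be sketched schematically: external variables act only through perceived utility and perceived ease of use, and ease of use also feeds into utility. The weights in the sketch below are purely illustrative assumptions, not empirical estimates from the cited studies:

```python
def perceived_usefulness(base_utility, peou, w_peou=0.3):
    # Ease of use raises perceived usefulness (Venkatesh and Davis 2000);
    # base_utility stands in for all other external-variable effects.
    return base_utility + w_peou * peou

def intention_to_use(pu, peou, w_pu=0.6, w_peou=0.4):
    # In the original TAM, intention is driven by the two perceptions only;
    # external variables enter solely through them (mediation).
    return w_pu * pu + w_peou * peou

# Illustrative ratings on a 1-5 scale (hypothetical values).
pu = perceived_usefulness(base_utility=3.0, peou=4.0)  # roughly 4.2
print(intention_to_use(pu, peou=4.0))                  # roughly 4.12
```

The point of the sketch is structural: improving ease of use moves intention twice, once directly and once through perceived utility, which is why interface simplification tends to pay off disproportionately in adoption.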

There are clear differences in gender with regard to the TAM. Venkatesh and Morris (2000) found that men and women differ in decision-making processes for adoption and use of technologies (cf. Chap.  4); in particular, the perceived user-friendliness variable is more important for women, despite identical initial training and identical experience with technology.

Men and women also differ in access and use, interest, and ease of use of technological artifacts (Anderson et al. 1995), even though this gap is rapidly closing. One of the most important gender differences relates to perceived self-efficacy (Bandura 1997) in the use of technologies, that is, the belief of being able to use a technology to perform a certain action (Durndell and Haag 2002; Vekiri and Chronaki 2008): in particular, even in the simple use of computers, males, compared to females, feel more secure and able to address the problems that may arise during the use of technology in a flexible and creative way (Brivio and Ibarra 2010). It can therefore be assumed that in the event of a technology breakdown, men and women will have very different experiences and will experience different emotions, with effects on their self-esteem and self-efficacy. The processes of breakdown and the consequent troubleshooting for the recovery of technology functionality should probably be investigated in more detail from a gender point of view, to make them more accessible to all. In other respects, the user experience of men and women does not differ, although there are sometimes artifacts that try to exaggerate these differences. Self-efficacy (along with the variables of the TAM) affects the use of technologies by older people, who have never had to deal with complex technologies (e.g., smartphones and tablets) and often do not feel confident interacting with these artifacts; moreover, these are very often not designed with the elderly in mind, which makes the interaction difficult and the experience sometimes frustrating (Brivio et al. 2016; Strada et al. 2013).

Research with eHealth technologies shows that age can still impact adoption and use, especially among middle-old (66–84 years old) and old-old (85 years and over) people, who need more training and support (Hunsaker and Hargittai 2018; Millard and Fintak 2002); only after training and support are they more likely to adopt and use eHealth systems, as the interactional experience becomes less frustrating. Young-old people (55–65 years old) have fewer issues with use and adoption (Tavares and Oliveira 2016). The gender gap in eHealth adoption seems to be slowly closing (Tavares and Oliveira 2016). A variable specific to the adoption and use of eHealth technologies is self-perception, that is, the awareness of having a health problem: if people are aware that they have a health issue, they are more likely to use eHealth technologies (Tavares and Oliveira 2016; Yuan et al. 2015; Venkatesh et al. 2012). eHealth applications and technologies should therefore include in-system training for target populations and enhance self-perception, in order to provide a good UX, which in turn ensures higher levels of intention to use, adoption, and sustained use.

2.3.2 Cultural and Relational Dimensions of Context

A cultural aspect that influences User Experience is the semiotic dimension underlying artifacts. Semiotics deals with the study of signs and how they take on meaning. Every aspect of an interface can be considered a representation of a certain functionality to which it gives access: both the interface and the functionality to which it refers are systems of signs (Goguen 1999). Sustained use of an artifact depends on the user's understanding of the metaphor underlying its design (Barr et al. 2005): designers must choose a sign system that helps the user understand the artifact's functionalities. If there is no correspondence between the artifact's system of signs and the user's, the artifact will not be understandable and will provide a poor UX.

The most general distinction between cultural contexts that has proved relevant to UX is that between Western and Eastern, individualistic and collectivistic cultures. Culture is "the collective programming of the mind that distinguishes the members of one group from another" (Hofstede 1991, 2001): the phrase "collective programming of the mind" suggests that culture can be conceived as a set of characteristics shared within a group, whose members behave similarly because of shared norms. User Experience is also influenced by these shared characteristics: for an artifact to be successful at an international level, it is necessary to have an excellent knowledge of the cultural variability of the target markets, since a single international version of an artifact may not be sufficient, or may not work at all, in terms of needs and cultural appropriateness (Marcus and Gould 2000). To design artifacts that adapt to different cultures, two aspects must be taken into account (Honold 1999, in Walsh et al. 2010): (1) objective aspects: language, date and time formats, numbers, direction of written text, etc.; (2) subjective aspects: the value, behavioral, and intellectual systems of the groups that use technological artifacts in the different cultural contexts. These aspects can be integrated into design from the beginning, using Hofstede's (1991) country-level studies and tables, keeping in mind that such indications describe tendencies and are always relative. Not all aspects of interaction with a technological artifact are subject to cultural differences, however: for example, gestures spontaneously generated to obtain a result on a portable device do not seem to be culturally connoted and resemble each other across cultures (Mauney et al. 2010).
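Honold's "objective aspects" are the most directly programmable part of cultural adaptation. A minimal sketch in Python illustrates the idea; the per-locale table is a hypothetical stand-in (real applications would draw on localization data such as CLDR/ICU rather than hand-coded conventions):

```python
from datetime import date

# Hypothetical per-locale conventions (illustrative only): date format,
# decimal separator, and text direction are three of the "objective aspects".
LOCALE_CONVENTIONS = {
    "en-US": {"date_fmt": "%m/%d/%Y", "decimal_sep": ".", "text_dir": "ltr"},
    "de-DE": {"date_fmt": "%d.%m.%Y", "decimal_sep": ",", "text_dir": "ltr"},
    "ar-EG": {"date_fmt": "%d/%m/%Y", "decimal_sep": "٫", "text_dir": "rtl"},
}

def format_reading(value: float, when: date, locale_code: str) -> dict:
    """Render a health reading (e.g., body temperature) per locale conventions."""
    conv = LOCALE_CONVENTIONS[locale_code]
    number = f"{value:.1f}".replace(".", conv["decimal_sep"])
    return {
        "value": number,
        "date": when.strftime(conv["date_fmt"]),
        "direction": conv["text_dir"],
    }

print(format_reading(36.6, date(2020, 3, 1), "de-DE"))
# {'value': '36,6', 'date': '01.03.2020', 'direction': 'ltr'}
```

The subjective aspects (values, norms) cannot be reduced to a lookup table in this way; they require the adaptation processes discussed next.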

To provide excellent UX and delivery of care, eHealth technologies and applications must take these aspects into consideration and go through a cultural and contextual adaptation process for "culturally sensitive elements" (e.g., language, values, concepts, content) (Lal et al. 2018). Regarding the relational dimension, as Battarbee (2003) points out, User Experience models often focus on individual experience and its constituent elements; yet most people experience artifacts in collective, not individual, contexts, and thus create experiences together with other users (Battarbee 2003, 2004; Battarbee and Koskinen 2005). Artifacts must of course be functional, usable, and provide a good individual user experience; but, with the advent of personal ICT, it is important that artifacts give users the opportunity to have optimal user experiences together, that is, a co-experience. Battarbee identifies some characteristics that the co-experience afforded by an artifact must have:
  • Social: co-experience is based on communication, which allows proposals, opinions, and evaluations to be put into play, agreements to be negotiated, ideas to be modified, and so on. These communicative exchanges give meaning to the experience. It is therefore essential that artifacts support agile communicative exchanges and reflect the communication needs of users in a given context.

  • Multimodal: co-experience can be augmented by technology, which provides different modes of interaction (e.g., audio, video, image, text) and allows switching between them seamlessly, while continuing to have the same experience with the same people or with different people.

  • Creative: when people use technological artifacts together, they get more interesting and creative results than those obtained with individual use. Co-experience is a resource of social and symbolic innovation, because interaction generates new meanings, both related to technology and not.

  • Fun: co-experience is a source of fun and pleasure, and it strengthens social ties. Fun promotes the use of artifacts. For designers, it is therefore essential to think about enjoyment and pleasure in social terms, because they are a fundamental motivation for the adoption and sustained use of an artifact.

Any environment involving the co-presence of multiple people requires individuals to feel that the others are present: in other words, to perceive a good sense of social presence, which is "the degree of salience of the other person in the interaction and the consequent salience of the interpersonal relationship" (Short et al. 1976). An adequate level of social presence allows higher levels of involvement with the artifact, that is, a more satisfying and positive user experience (Gunawardena and Zittle 1997). According to Riva (2008), social presence depends on the fact that the information available within the context, even if limited, allows users to grasp others' intentions and actions.

In eHealth, several technologies and applications have successfully harnessed social communication and exchange between users to deliver their interventions, for example in smoking cessation (Khalil et al. 2017) and in psychosis recovery (Williams et al. 2018). This enables not only the delivery of care, but also a good UX that can support adherence and sustained use of the eHealth system.

2.3.3 Physical and Material Dimension of Context

In designing or evaluating an artifact, technological or otherwise, it is important to know where the user will use it. The physical environment affects UX in different ways: for example, placing a technology in a busy and noisy environment may impair the user's ability to hear acoustic feedback (Lowdermilk 2013). Mobile technologies complicate this matter further: they allow users to cross different contexts, and the use of some applications persists from one context, digital or material, to another; the experience therefore persists across contexts, and designers must take these contextual changes into consideration.

The literature is scarce in identifying and describing the effects of physical contextual elements on eHealth delivery and UX. One concern is privacy: mHealth makes it easier to access health services and care, but at the same time makes it easier for private and sensitive information to become available, accessible, or overheard by non-authorized people, or for the technology to be used in settings inappropriate for the delivery of care (e.g., a patient using a telepsychotherapy service in a public place). The consequences of accidental sharing of information and/or use in inappropriate settings may include inefficient delivery of care and/or a degraded UX, which in turn may push people to abandon the eHealth service.
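One way a mobile eHealth application could mitigate such contextual risks is to adapt its delivery mode to inferred context signals. The following sketch is purely illustrative: the signal names and thresholds are assumptions, not an established design pattern from the literature cited above.

```python
from dataclasses import dataclass

@dataclass
class UsageContext:
    # Hypothetical signals a mobile app might infer (all names illustrative)
    is_public_place: bool        # e.g., from a location category
    headphones_connected: bool   # private audio channel available
    ambient_noise_db: float      # estimated ambient noise level

def delivery_mode(ctx: UsageContext) -> str:
    """Choose how to present sensitive health content given the physical context."""
    if ctx.is_public_place and not ctx.headphones_connected:
        # Avoid audio playback and mask on-screen identifiers
        return "text_only_masked"
    if ctx.ambient_noise_db > 70:
        # Acoustic feedback likely inaudible; prefer visual cues
        return "visual_feedback"
    return "full_audio_video"

print(delivery_mode(UsageContext(True, False, 55.0)))   # text_only_masked
print(delivery_mode(UsageContext(False, False, 80.0)))  # visual_feedback
```

Such rules would of course need validation with real users; the point is that the physical dimension of context can be made an explicit design input rather than an afterthought.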

3 User-Centered Design and P5 eHealth

As hinted at in the introduction, User-Centered Design should be a fundamental methodological approach within P5 eHealth: in order to design health technologies that are really able to help people, it is paramount to analyze users/patients’ needs, intentions, abilities, and contexts in advance, and use these data as the main source for design ideas and implementation.

As a conclusion, it is interesting to consider the utility of a UCD mindset when designing the P5 properties within health technologies:
  • For designing technologies with prevention features, developers should be informed not only about healthy behaviors to be promoted in the users, but also about users' own habits, preferences, characteristics, and typical behaviors, in order to identify risks and/or opportunities.

  • Personalized technology requires up-to-date information about users' characteristics; the very concept of personalizing application features implies considering and understanding actual users, rather than relying on prototypical representations.

  • The predictive power of any computational model programmed within software is notably enhanced by adding information on users' preferences and behaviors; medical/diagnostic data should not be the only source.

  • Participatory technologies require a deep understanding of users’ social context; going beyond the mere analysis of users, methods should be able to capture important social relations (e.g., caregivers, different health professionals) to be assigned a role within the technology-based intervention.

  • Psycho-cognitive technologies, as previously said, are able to consider users' cognitive abilities, decision making, and behavior; this can be done by implementing the UCD mindset within the technology itself, that is, the technology should be responsive to users' actions in order to tailor its outcomes (feedback) to individual characteristics.
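The last point, feedback tailored to individual characteristics, can be sketched as a simple rule-based function. Everything here is hypothetical (profile fields, thresholds, and messages are illustrative, not drawn from any cited system); real tailoring would be grounded in user research and validated psychological constructs such as self-efficacy:

```python
def tailor_feedback(profile: dict, adherence_rate: float) -> str:
    """Tailor a motivational message to user characteristics (illustrative rules).

    profile: hypothetical user model, e.g., {"self_efficacy": 0.0-1.0}
    adherence_rate: fraction of planned health activities completed (0.0-1.0)
    """
    low_self_efficacy = profile.get("self_efficacy", 1.0) < 0.5
    if adherence_rate >= 0.8:
        return "Great job: you completed most of your plan this week."
    if low_self_efficacy:
        # Supportive framing for users with low confidence in their own abilities
        return "Small steps count: try one short session tomorrow."
    return "You missed some sessions. Would you like to reschedule them?"
```

Even this toy example shows the UCD requirement behind the psycho-cognitive "P": without a measured user model (here, self-efficacy), the system could only emit one-size-fits-all feedback.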

With this background, P5 eHealth could develop as a truly patient-centered approach, by basing its own design on the application of research methods to the measurement and understanding of final users’ irreducible characteristics.

References

  1. Abras, C., Maloney-Krichmar, D., & Preece, J. (2004). User-centered design. In Encyclopedia of human-computer interaction (pp. 1–14). New York: Sage.
  2. Anderson, R., Bikson, T., Law, S., & Mitchell, B. (1995). Universal access to e-mail: Feasibility and social implications. RAND Corporation.
  3. Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman.
  4. Barr, P., Biddle, R., & Noble, J. (2005). A semiotic model of user-interface metaphor. In K. Liu (Ed.), Virtual, distributed and flexible organisations (pp. 189–215). Dordrecht: Springer. https://doi.org/10.1007/1-4020-2162-3_13
  5. Battarbee, K. (2003). Co-experience: The social user experience. In CHI 2003 extended abstracts on human factors in computing systems (pp. 730–731). https://doi.org/10.1145/765891.765956
  6. Battarbee, K. (2004). Co-experience: Understanding user experiences in social interaction. Helsinki: UIAH Press.
  7. Battarbee, K., & Koskinen, I. (2005). Co-experience: User experience as interaction. CoDesign, 1(1), 5–18. https://doi.org/10.1080/15710880412331289917
  8. Benyon, D., Turner, P., & Turner, S. (2005). Designing interactive systems: People, activities, contexts, technologies. Addison-Wesley. Retrieved from https://books.google.com/books?id=iWe7VkFW0zMC&pgis=1
  9. Brivio, E., & Ibarra, F. C. (2010). Nuove tecnologie e autoefficacia percepita: Influenze del genere e delle modalità d'uso. Qwerty, 5(1), 44–59.
  10. Brivio, E., Serino, S., Galimberti, C., & Riva, G. (2016). Efficacy of a digital education program on life satisfaction and digital self-efficacy: A mixed method study. Annual Review of Cybertherapy and Telemedicine.
  11. Brooke, J. (1996). SUS: A quick and dirty usability scale. In P. Jordan, B. Thomas, B. McClelland, & I. Weerdmeester (Eds.), Usability evaluation in industry (pp. 189–194). London: Taylor & Francis.
  12. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319. https://doi.org/10.2307/249008
  13. Durndell, A., & Haag, Z. (2002). Computer self efficacy, computer anxiety, attitudes towards the Internet and reported experience with the Internet, by gender, in an East European sample. Computers in Human Behavior, 18(5), 521–535. https://doi.org/10.1016/S0747-5632(02)00006-7
  14. Galimberti, C. (2011). Segui il coniglio bianco: La costruzione della soggettività nelle interazioni mediate. In E. Marta & C. Regalia (Eds.), Identità in relazione: Le sfide odierne dell'essere adulto (pp. 73–127). Milano: McGraw-Hill.
  15. Garrett, J. J. (2010). The elements of user experience: User-centered design for the web and beyond. Pearson Education.
  16. Gerdes, M., Smaradottir, B., & Fensli, R. (2014). End-to-end infrastructure for usability evaluation of eHealth applications and services. Scandinavian Conference on Health Informatics, 53.
  17. Goguen, J. (1999). An introduction to algebraic semiotics, with applications to user interface design. Computation for Metaphors, Analogy and Agents, 242–291.
  18. Goldberg, L., Lide, B., Lowry, S., Massett, H. A., O'Connell, T., Preece, J., et al. (2011). Usability and accessibility in consumer health informatics: Current trends and future challenges. American Journal of Preventive Medicine, 40(5 Suppl 2), S187–S197. https://doi.org/10.1016/j.amepre.2011.01.009
  19. Gorini, A., & Pravettoni, G. (2011). P5 medicine: A plus for a personalized approach to oncology. Nature Reviews Clinical Oncology. https://doi.org/10.1038/nrclinonc.2010.227-c1
  20. Gunawardena, C. N., & Zittle, F. J. (1997). Social presence as a predictor of satisfaction within a computer-mediated conferencing environment. American Journal of Distance Education, 11(3), 8–26. https://doi.org/10.1080/08923649709526970
  21. Hancock, P. A., Pepe, A., & Murphy, L. L. (2005). Hedonomics: The power of positive and pleasurable ergonomics. Ergonomics in Design, 13(1), 8–14. https://doi.org/10.1177/106480460501300104
  22. Hassenzahl, M. (2008). User experience (UX): Towards an experiential perspective on product quality. In IHM'08 (pp. 11–15). https://doi.org/10.1145/1512714.1512717
  23. Hassenzahl, M., Diefenbach, S., & Göritz, A. (2010). Needs, affect, and interactive products: Facets of user experience. Interacting with Computers, 22(5), 353–362.
  24. Hermawati, S., & Lawson, G. (2016). Establishing usability heuristics for heuristics evaluation in a specific domain: Is there a consensus? Applied Ergonomics, 56, 34–51.
  25. Herstatt, C., & Von Hippel, E. A. (1992). From experience: Developing new product concepts via the lead user method: A case study in a "low tech" field. Journal of Product Innovation Management. https://doi.org/10.1016/0737-6782(92)90031-7
  26. Hesse, B. W., & Shneiderman, B. (2007). eHealth research from the user's perspective. American Journal of Preventive Medicine, 32(5 Suppl). https://doi.org/10.1016/j.amepre.2007.01.019
  27. Hofstede, G. H. (1991). Cultures and organizations: Software of the mind. McGraw-Hill.
  28. Hofstede, G. H. (2001). Culture's consequences: Comparing values, behaviors, institutions, and organizations across nations. Sage.
  29. Hunsaker, A., & Hargittai, E. (2018). A review of Internet use among older adults. New Media & Society, 20(10), 3937–3954. https://doi.org/10.1177/1461444818787348
  30. Jordan, P. W. (2002). Designing pleasurable products: An introduction to the new human factors. London: Taylor and Francis.
  31. Karwowski, W. (2012). The discipline of human factors and ergonomics. In G. Salvendy (Ed.), Handbook of human factors and ergonomics. New York: Wiley. https://doi.org/10.1002/0470048204.ch27
  32. Khalil, G. E., Wang, H., Calabro, K. S., Mitra, N., Shegog, R., & Prokhorov, A. V. (2017). From the experience of interactivity and entertainment to lower intention to smoke: A randomized controlled trial and path analysis of a web-based smoking prevention program for adolescents. Journal of Medical Internet Research, 19(2), e44. https://doi.org/10.2196/jmir.7174
  33. Kushniruk, A. W., Monkman, H., Tuden, D., Bellwood, P., & Borycki, E. M. (2015). Integrating heuristic evaluation with cognitive walkthrough: Development of a hybrid usability inspection method. Studies in Health Technology and Informatics, 208, 221–225.
  34. Lal, S., Gleeson, J., Malla, A., Rivard, L., Joober, R., Chandrasena, R., & Alvarez-Jimenez, M. (2018). Cultural and contextual adaptation of an eHealth intervention for youth receiving services for first-episode psychosis: Adaptation framework and protocol for Horyzons-Canada phase 1. JMIR Research Protocols, 7(4), e100. https://doi.org/10.2196/resprot.8810
  35. Lee, I., Choi, G. W., Kim, J., Kim, S., Lee, K., Kim, D., ... & An, Y. (2008, September). Cultural dimensions for user experience: Cross-country and cross-product analysis of users' cultural characteristics. In Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction (Vol. 1, pp. 3–12). British Computer Society.
  36. Lowdermilk, T. (2013). User-centered design: A developer's guide to building user-friendly applications. Sebastopol: O'Reilly.
  37. Mantovani, G. (1995). Comunicazione e identità: Dalle situazioni quotidiane agli ambienti virtuali. Il Mulino.
  38. Mantovani, G. (1996). Social context in HCI: A new framework for mental models, cooperation, and communication. Cognitive Science, 20(2), 237–269. https://doi.org/10.1207/s15516709cog2002_3
  39. Marcus, A., & Gould, E. W. (2000). Crosscurrents: Cultural dimensions and global Web user-interface design. Interactions, 7(4), 32–46. https://doi.org/10.1145/345190.345238
  40. Mauney, D., Howarth, J., Wirtanen, A., & Capra, M. (2010). Cultural similarities and differences in user-defined gestures for touchscreen user interfaces. In Extended abstracts of the 28th international conference on human factors in computing systems (CHI '10) (pp. 4015–4020). https://doi.org/10.1145/1753846.1754095
  41. Millard, R. W., & Fintak, P. A. (2002). Use of the internet by patients with chronic illness. Disease Management and Health Outcomes, 10(3), 187–194. https://doi.org/10.2165/00115677-200210030-00006
  42. Nardi, B. A. (1996). Context and consciousness: Activity theory and human-computer interaction. MIT Press.
  43. Nickerson, R. (1999). Engineering psychology and ergonomics. In P. Hancock (Ed.), Human performance and ergonomics. https://doi.org/10.1016/B978-012322735-5/50002-6
  44. Nielsen, J. (1995). 10 usability heuristics for user interface design. Nielsen Norman Group, 1(1).
  45. Nielsen, J. (1999). User interface directions for the Web. Communications of the ACM. https://doi.org/10.1145/291469.291470
  46. Nielsen, J. (2003). Usability 101: Introduction to usability. All Usability, 9, 1–10.
  47. Norman, D. A. (2002). The design of everyday things. Basic Books.
  48. Norman, D. A., & Draper, S. W. (1986). User centered system design: New perspectives on human-computer interaction. Lawrence Erlbaum Associates.
  49. Pavelin, K., Cham, J. A., de Matos, P., Brooksbank, C., Cameron, G., & Steinbeck, C. (2012). Bioinformatics meets user-centred design: A perspective. PLoS Computational Biology. https://doi.org/10.1371/journal.pcbi.1002554
  50. Pravettoni, G., & Gorini, A. (2011). A P5 cancer medicine approach: Why personalized medicine cannot ignore psychology. Journal of Evaluation in Clinical Practice, 17(4), 594–596. https://doi.org/10.1111/j.1365-2753.2011.01709.x
  51. Riva, G. (2008). Enacting interactivity: The role of presence. In F. Morganti, A. Carassa, & G. Riva (Eds.), Enacting intersubjectivity: A cognitive and social perspective on the study of interactions (pp. 97–114). Amsterdam: IOS Press.
  52. Riva, G., Villani, D., Cipresso, P., Repetto, C., Triberti, S., Di Lernia, D., et al. (2016). Positive and transformative technologies for active ageing. Studies in Health Technology and Informatics, 220, 308.
  53. Robinson, R. E. (1994). Making sense of making sense: Frameworks and organizational perception. Design Management Journal, 5(1), 8–15. https://doi.org/10.1111/j.1948-7169.1994.tb00611.x
  54. Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68.
  55. Sharit, J. (2006). Human error. In G. Salvendy (Ed.), Handbook of human factors and ergonomics. https://doi.org/10.1002/0470048204.ch27
  56. Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. Wiley.
  57. Smilowitz, E. D., Darnell, M. J., & Benson, A. E. (1994). Are we overlooking some usability testing methods? A comparison of lab, beta, and forum tests. Behaviour & Information Technology, 13(1–2), 183–190.
  58. Strada, C., Brivio, E., & Galimberti, C. (2013). Digital education for older generation non native users: A focused ethnography study of a pilot project. Qwerty-Open and Interdisciplinary Journal of Technology, Culture and Education, 8(2), 44–57.
  59. Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. New York: Cambridge University Press.
  60. Suchman, L. (1993). Response to Vera and Simon's situated action: A symbolic interpretation. Cognitive Science, 17(1), 71–75. https://doi.org/10.1207/s15516709cog1701_5
  61. Tavares, J., & Oliveira, T. (2016). Electronic health record patient portal adoption by health care consumers: An acceptance model and survey. Journal of Medical Internet Research, 18(3), e49. https://doi.org/10.2196/jmir.5069
  62. Triberti, S., & Barello, S. (2016). The quest for engaging AmI: Patient engagement and experience design tools to promote effective assisted living. Journal of Biomedical Informatics, 63. https://doi.org/10.1016/j.jbi.2016.08.010
  63. Triberti, S., & Brivio, E. (2017). User experience. Milan: Maggioli.
  64. Triberti, S., & Liberati, E. G. (2014). Patient centered virtual reality: An opportunity to improve the quality of patient's experience. In P. Cipresso & S. Serino (Eds.), Virtual reality: Technologies, medical applications and challenges. Nova Science.
  65. Triberti, S., & Riva, G. (2016). Being present in action: A theoretical model about the "interlocking" between intentions and environmental affordances. Frontiers in Psychology, 6, 2052.
  66. Triberti, S., Chirico, A., La Rocca, G., & Riva, G. (2017). Developing emotional design: Emotions as cognitive processes and their role in the design of interactive technologies. Frontiers in Psychology, 8, 1773.
  67. Vekiri, I., & Chronaki, A. (2008). Gender issues in technology use: Perceived social support, computer self-efficacy and value beliefs, and computer use beyond school. Computers and Education, 51(3), 1392–1404. https://doi.org/10.1016/j.compedu.2008.01.003
  68. Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
  69. Venkatesh, V., & Morris, M. G. (2000). Why don't men ever stop to ask for directions? Gender, social influence, and their role in technology acceptance and usage behavior. MIS Quarterly, 24(1), 115–139.
  70. Vergani, L., Marton, G., Pizzoli, S. F. M., Monzani, D., Mazzocco, K., & Pravettoni, G. (2019). Training cognitive functions using mobile apps in breast cancer patients: Systematic review. JMIR mHealth and uHealth, 7(3), e10855.
  71. Vorderstrasse, A., Lewinski, A., Melkus, G. D., & Johnson, C. (2016). Social support for diabetes self-management via eHealth interventions. Current Diabetes Reports. https://doi.org/10.1007/s11892-016-0756-0
  72. Walsh, T., Nurkka, P., & Walsh, R. (2010). Cultural differences in smartphone user experience evaluation. In MUM '10.
  73. Williams, A., Fossey, E., Farhall, J., Foley, F., & Thomas, N. (2018). Recovery after psychosis: Qualitative study of service user experiences of lived experience videos on a recovery-oriented website. JMIR Mental Health, 5(2), e37. https://doi.org/10.2196/mental.9934
  74. Wilson, J. R. (2000). Fundamentals of ergonomics in theory and practice. Applied Ergonomics, 31(6), 557–567. https://doi.org/10.1016/S0003-6870(00)00034-X
  75. Yuan, S., Ma, W., Kanthawala, S., & Peng, W. (2015). Keep using my health apps: Discover users' perception of health and fitness apps with the UTAUT2 model. Telemedicine and e-Health, 21(9), 735–741. https://doi.org/10.1089/tmj.2014.01
  76. Norman, D. A. (1993). Cognition in the head and in the world: An introduction to the special issue on situated action. Cognitive Science, 17(1), 1–6.
  77. Venkatesh, V., Thong, J. Y., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178.

Copyright information

© The Author(s) 2020

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
  2. Applied Research Division for Cognitive and Psychological Science, European Institute of Oncology (IEO) IRCCS, Milan, Italy
  3. Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy