Introduction

User engagement with a digital health intervention is considered to be a necessary condition for an intervention’s success. Clearly, an intervention cannot influence individuals who are not using it. Unfortunately, over the past 15 years, researchers, practitioners, and policy-makers have observed poor user engagement with unguided digital mental health interventions in real-world settings (e.g., Baumel et al., 2019a, b; Christensen et al., 2009; Fleming et al., 2018). As this phenomenon limits the potential of unguided digital interventions to improve public health, attempts have been made to conceptually define and understand user engagement and advance the ways it should be investigated and reported (e.g., Doherty & Doherty, 2018; Lalmas et al., 2014; Perski et al., 2016; Yardley et al., 2016).

Conceptualizing and investigating the relationship between user engagement with the digital program, the performance of therapeutic activities as a result of program use, and improvement in clinical outcomes could help to shed light on how digital health interventions work (e.g., Yardley et al., 2016; Zhang et al., 2019). Uncovering this relationship is particularly important in digital mental health intervention research because the therapeutic activities meant to achieve beneficial outcomes are not always explicit. In other words, in the mental health domain, important therapeutic activities—such as cognitive reframing, self-talk, or exercising gratitude—may occur in one’s mind without any direct behavioral footprint. Critically, one of the main challenges involves determining whether the use of the digital program itself is tied to a therapeutic activity. Lack of clarity around this point could create confusion in how we conceptualize the results of empirical studies in the digital mental health domain and limit the impact of research.

I propose that using an analytic approach to distinguish between different user actions and experiences makes it feasible to conceptually differentiate between user engagement and the incorporation of therapeutic activities. Such a framework enables a better understanding of how digital mental health interventions work and better positions researchers to address design failure points. I begin with a brief review of the connection between therapeutic activities and clinical outcomes, as this provides an important background to the proposed framework. I then present the suggested framework for reporting and an example to illustrate the importance of such a framework when trying to investigate how interventions work.

Therapeutic Activities and Clinical Outcomes

Behavioral and mental health interventions are developed based on a theoretical understanding that the incorporation of targeted therapeutic activities can function as a mechanism of change leading to clinical outcomes (e.g., Abraham & Michie, 2008; Mohr et al., 2013; Ritterband et al., 2009). In cognitive behavioral therapy (CBT) targeting people with generalized anxiety disorder, for example, a developer may lean on a theory that links changes in cognitive and behavioral components to a change in symptoms of anxiety. Therapeutic activities may include cognitive components, such as cognitive restructuring exercises and the incorporation of problem-solving techniques, and behavioral components, such as daily muscle relaxation training and exposure (Khodarahimi & Pole, 2010; Stanley et al., 2014). Users who engage in the required therapeutic activities according to the “prescription” are those who adhere to the treatment plan. Since it is simple to document digital footprints, program usage is easily obtained and has been commonly reported in research on digital health interventions. However, program usage is not necessarily equal to the incorporation of a therapeutic activity. In what follows, I present the user pathway framework for understanding the relationship between program usage, therapeutic activities, and clinical outcomes.

The User Pathway Framework

Program Usage and Experience of Engagement

Figure 1 presents an analytic illustration of the user pathway framework, which begins with program usage and ends with clinical outcomes. The literature suggests that the first two components—program usage and experience of engagement with the digital program—can be viewed as two indicators of user engagement (e.g., Perski et al., 2016, 2020). Program usage refers to behavioral measures that describe when the program is used and how often. Measures of usage depend on the delivery medium (e.g., mobile app, website) and include metrics such as the number of program logins, distinct days in which the program was accessed, longevity (number of days between the first and last login), screen time, user retention, and number of modules completed (e.g., Baumel & Kane, 2018; Baumel et al., 2019b; Miller et al., 2019; Rahman et al., 2017). There is no single concrete definition for engagement as an experience and how it can be precisely measured because engagement depends on the clinical target and the way the intervention is designed. A common ground, however, is the idea that the experience of engagement is related to the quality of attention, involvement, and immersion during program usage (for a review, see Glas & Pelachaud, 2015; Perski et al., 2016).
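As an illustrative sketch of how such behavioral metrics could be derived, assuming a simple, hypothetical log of login dates rather than any particular platform’s schema, the number of logins, distinct active days, and longevity might be computed as follows:

```python
from datetime import date

def usage_metrics(login_dates):
    """Derive basic usage metrics from a list of login dates.

    `login_dates` is assumed to be a list of datetime.date objects,
    one per recorded login event (a hypothetical log format).
    """
    n_logins = len(login_dates)
    distinct_days = len(set(login_dates))
    # Longevity: number of days between the first and last login.
    longevity = (max(login_dates) - min(login_dates)).days if login_dates else 0
    return {"logins": n_logins,
            "distinct_days": distinct_days,
            "longevity_days": longevity}

logs = [date(2023, 1, 1), date(2023, 1, 1), date(2023, 1, 5), date(2023, 1, 12)]
print(usage_metrics(logs))  # {'logins': 4, 'distinct_days': 3, 'longevity_days': 11}
```

Note that all three numbers describe only when and how often the program was opened; none of them speaks to the quality of attention during use, which is the point developed below.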

Fig. 1 Analytic illustration of the user pathway framework

Minimal program usage is a necessary condition for a user to experience engagement because one cannot be involved with or pay attention to a program that is not being used. However, usage data alone do not necessarily suffice to inform us about the quality of users’ engagement with the program. For example, one user could view educational materials without really concentrating on reading them, whereas another user could be highly focused and involved. These two users’ ability to remember what they learned and to act on that learning experience would be completely different.

Nonetheless, a more sensitive examination of program usage patterns could reveal important details that shed light on the quality of engagement, provided that researchers have access to a dataset that describes usage in detail. For example, in their novel study, Zhang and colleagues categorized user behaviors to identify clinically meaningful use of mental health apps (Zhang et al., 2019). This work required the investigation of system use data that included user activities such as changing reminders, creating mantras, and scheduling of therapeutic activities. These recorded usage events capture far more nuanced user attention and involvement than the number of user logins. Most importantly, the quality of user engagement can be directly documented based on user responses to questions, success in embedded exercises, and any other event that requires a certain quality of engagement for successful completion. User engagement with educational materials, for example, could be measured if users are not passive learners and must demonstrate their knowledge gain in exercises during completion of the learning phase.
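A minimal sketch of such an event-level analysis, using a hypothetical taxonomy of active versus passive usage events (the event names below are invented for illustration, not drawn from Zhang et al.’s coding scheme), could look like this:

```python
# Hypothetical taxonomy: events that require active involvement
# versus events compatible with passive viewing.
ACTIVE_EVENTS = {"create_mantra", "schedule_activity", "change_reminder",
                 "complete_exercise", "answer_question"}
PASSIVE_EVENTS = {"open_app", "view_page"}

def engagement_profile(events):
    """Summarize an event log (list of event-name strings) into counts
    of active vs. passive events, a crude proxy for engagement quality."""
    active = sum(1 for e in events if e in ACTIVE_EVENTS)
    passive = sum(1 for e in events if e in PASSIVE_EVENTS)
    return {"active": active, "passive": passive}

log = ["open_app", "view_page", "create_mantra", "view_page", "complete_exercise"]
print(engagement_profile(log))  # {'active': 2, 'passive': 3}
```

Two users with identical login counts could thus show very different active-to-passive ratios, which is precisely the information lost when only aggregate usage is reported.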

User Engagement and Therapeutic Activities

In digital mental health interventions, a “high quality of engagement” does not necessarily mean that the user engages in the therapeutic activity that is directly related to the targeted mechanism of change. Therefore, investigators should strive to reveal how engagement translates into a therapeutic activity. Yardley and colleagues suggested differentiating between engagement with the program and engagement with behavior change; while these types of engagement are intimately linked, they are not the same (Yardley et al., 2016). As an illustration, consider an app that promotes behavioral activation for users with depression. In behavioral activation, the mechanism of change involves an increase in positive activities that expose the individual to reinforcing environmental contingencies, which in turn produce beneficial changes in thoughts and mood (Dahne et al., 2017; Hopko et al., 2003). Consequently, when a user pays appropriate attention to the psychoeducational materials within the digital app, which explain the importance of increasing physical activity levels, this learning event is not the same as the targeted therapeutic activity, which is the physical activity itself. In this case, the intervention developers have assumed that user engagement is necessary for the user to eventually conduct this targeted activity. However, if the user does not eventually conduct the desired activity, we cannot expect an improvement in his/her symptoms of depression; moreover, we cannot expect the documented metrics of engagement with the digital program to be linked to clinical outcomes.

In other words, an effective intervention creates a link between user engagement and adherence to therapeutic activities that foster the mechanism of change. This notion has conceptual overlap with Yardley and colleagues’ notion of “effective user engagement,” which is defined as sufficient engagement with the program to achieve the desired outcomes (Yardley et al., 2016). Similarly, the investigator should ask what kind of engagement with the digital program is needed for the successful incorporation of therapeutic activities.

It is important to note that, in some cases, user engagement is equal to the performance of a therapeutic activity, for example, when users appropriately engage with a mindfulness routine to decrease anxiety or stress—assuming that the mindfulness exercise by itself is directly related to the desired clinical outcome (Pascoe et al., 2017)—or when users engage in a daily cognitive restructuring exercise incorporated within an app as part of CBT for anxiety (Neal-Barnett et al., 2019). These instances do not diminish the importance of the framework, but rather stress the importance of carefully understanding each aspect of the pathway. In such cases, the investigator is required to differentiate between user engagement in parts of the program that are not equal to the targeted therapeutic activity and engagement in parts of the program that are equal to such an activity.

Accordingly, it is important to be able to conceptually distinguish between (1) digital intervention components that are used to promote the targeted therapeutic activities and (2) digital intervention components that can be considered equal to the incorporation of the key targeted therapeutic activity. This is because these two types of components are not supposed to produce beneficial outcomes in the same way. This distinction can also be helpful in creating a language that is similar to that used in different yet related fields. Using a simple example, a psychiatrist will likely differentiate between a patient’s attendance at meetings (usage), attention and involvement (experience of engagement), and adherence to a treatment plan, which is evaluated based on whether the patient actually takes his/her medications according to the prescription (targeted therapeutic activity). Clearly, the psychiatrist will not expect beneficial clinical outcomes in the absence of the targeted therapeutic activity.
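This distinction could be made explicit in a program’s analytics by tagging each component with its role along the pathway. The registry below is a hypothetical sketch, with invented component names and roles, not a prescribed schema:

```python
# Hypothetical component registry distinguishing (1) components that promote
# a therapeutic activity from (2) components whose use IS the activity itself.
COMPONENTS = {
    "psychoeducation_module": {"role": "promotes_activity", "targets": "exposure"},
    "exposure_goal_setting":  {"role": "promotes_activity", "targets": "exposure"},
    "mindfulness_exercise":   {"role": "is_activity",       "targets": "mindfulness"},
}

def therapeutic_usage(used_components):
    """Count usage events that count directly as therapeutic activities,
    given a list of component names used (a hypothetical log)."""
    return sum(1 for c in used_components
               if COMPONENTS.get(c, {}).get("role") == "is_activity")

log = ["psychoeducation_module", "mindfulness_exercise", "exposure_goal_setting"]
print(therapeutic_usage(log))  # 1
```

Tagged this way, three usage events yield only one directly therapeutic event, mirroring the psychiatrist’s distinction between attending meetings and actually taking the medication.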

A Clarifying Example: Implementing the User Pathway Framework to Illustrate the Moderating Impact of Product Design on Clinical Outcomes

Implementing the user pathway framework enables researchers to more precisely investigate how different phenomena in digital health interventions relate to different metrics and targeted outcomes. Figure 2 makes use of this framework to present a hypothetical model illustrating the moderating impact of product design on clinical outcomes and, in particular, how changes in design are expected to impact different aspects along the pathway. The two quality domains incorporated into this hypothetical model were identified and constructed through the Enlight quality assessment tool (Baumel et al., 2017a, b). These domains are based on a comprehensive systematic review that was performed to identify relevant quality criteria related to eHealth and mHealth products, emerging from 84 identified sources of information (for a review, see Baumel et al., 2017a, b).

Fig. 2 Illustrating the moderating impact of product design on clinical outcomes

The systematic review showed that aspects of product design impact the extent to which users want to increasingly utilize a program and become more involved with it (user engagement features). We expect that programs that are of high quality in these aspects will yield higher program usage and better user experience through involvement and attention. In the hypothetical model, however, the link to clinical outcomes rests on whether the user’s increase in engagement translates into engagement in a therapeutic activity. Consider a user of an online CBT program targeting social anxiety disorder (SAD) who receives psychoeducation on the importance of exposure and how it could be exercised. A program that presents the user with captivating, interactive, and well-targeted materials is expected to increase the user’s involvement and attention during this learning experience. This is an important step in the pathway toward incorporating exposure as a therapeutic activity, but it does not directly provoke an exposure event.

Going back to the hypothetical model, the literature points to other aspects of product design that are meant to directly provoke positive changes in users’ lives by embedding a persuasive system design focused on incorporating behavior change techniques (therapeutic persuasiveness features) (Baumel et al., 2017a, b). We expect that a CBT program for SAD that provides users with calls to action, such as triggers embedded in people’s lives (e.g., setting relevant exposure goals, reminding and inspiring users to utilize the exposure technique), load reduction of therapeutic activities (e.g., providing exposure opportunities), and a data-driven approach (e.g., the user’s therapeutic activities impact the next steps) will increase the chances of highly engaged users feeling more accountable and incorporating beneficial therapeutic activities into their lives. From the perspective of behavior change theories, a program lacking any of these features cannot optimally support users’ self-management of desired and undesired behaviors (Abraham & Michie, 2008; Doshi et al., 2003; Michie et al., 2008). Accordingly, addressing different aspects of the pathway will enable researchers and developers to more precisely consider the importance of product design in reaching the desired outcomes.

Discussion

The proposed user pathway framework presents a simple approach to improve our understanding of how digital mental health interventions work. Addressing the differentiation in key aspects along the pathway is helpful for identifying the impact of program manipulations and failure points. When the correlation between program usage and clinical outcomes is not high, we must ask: (1) What are users engaged with? Are there any measures that might reveal the quality of user attention and involvement in the program? (2) What therapeutic activities do users conduct? Are these directly or indirectly related to user engagement? (3) Are the therapeutic activities related to clinical outcomes?

As research on digital mental health interventions progresses, scholars are required to unveil potential black boxes. Special attention should be given to the documentation of targeted therapeutic activities during program use, as these activities are conceptually linked to the achievement of desired clinical outcomes. Dismantling studies or factorial study designs that document and examine such therapeutic activities could help move the field forward by enabling a better understanding of the mechanisms of change associated with clinical outcomes under the different conditions to which participants were randomized.

It is worth noting that some issues related to the incorporation of the user pathway framework need to be resolved. First, in some programs, the incorporation of therapeutic activities is already embedded and automatically obtained using passive sensing—for example, in a digital behavioral activation intervention promoting physical activity that adapts based on the levels of the targeted activity (e.g., step counts). In other programs, especially within the mental health domain, therapeutic activities are not always explicit or automatically documented. Second, developers sometimes lean on a sequence of events meant to provoke one therapeutic activity (e.g., psychoeducation → goal setting → activity), which complicates the analytical investigation.

Nevertheless, expanding the framework of our reporting by disentangling different conceptual aspects could be key in moving the translational field of digital mental health interventions forward. The user pathway framework is one example of such an effort. I hope that it will help to expand reporting and subsequently improve research on the relationship between user engagement and clinical outcomes.