Introduction

There is a mental health crisis in the United States. Twelve-month prevalence data indicate that 22.8% of U.S. adults (57.8 million) experienced a mental illness in 2021 and that only 47.2% of them (26.5 million) received psychiatric treatment (Substance Abuse and Mental Health Services Administration, 2022). For younger people, the situation is worse. The prevalence of mental disorders among children and adolescents has been increasing over the last two decades (Tkacz & Brady, 2021; Perou et al., 2013). According to a nationally representative survey, in 2016 alone 41% of children aged 6–11 and 59% of adolescents aged 12–17 were diagnosed with a mental disorder, and only 50.6% of those diagnosed received treatment from a mental health professional (Whitney & Peterson, 2019).

There are several factors—from stigma (Arnaez et al., 2020) to the cost of services (Mojtabai, 2021)—that may contribute to these low utilization rates, but one factor prevents even those who want to engage in treatment from doing so: there simply are not enough mental health professionals to meet the demand. More than half of the counties in the United States do not have a single psychiatrist (University of Michigan Behavioral Health Workforce Research Center, 2018), 37% do not have a psychologist, and 67% do not have a psychiatric nurse practitioner (Andrilla et al., 2018). Most of these clinicians practice in urban areas (Guerrero et al., 2019), leaving large swaths of the U.S. without access to mental health services (Andrilla et al., 2018). The Health Resources and Services Administration estimates that roughly 163 million Americans—nearly half—live in federally designated mental health professional shortage areas (Health Resources & Services Administration, 2023). And this scarcity is getting worse: by 2025, the supply of clinicians is projected to fall short of demand by more than 6,000 psychiatrists and psychologists, nearly 5,000 nurse practitioners, and more than 16,000 clinical social workers (Health Resources and Services Administration et al., 2016).

Public health officials are thus in the unenviable position of figuring out how to triage the mental health of an increasingly disordered population with an inadequate number of clinicians. To address this supply shortage, developers are creating digital programs designed to provide mental health interventions without relying on human clinicians; these interventions may be delivered through chatbots, virtual or augmented reality programs, smartphone applications, and even embodied robotics. Such programs could relieve pressure on an already overburdened clinical workforce and so represent a potentially enormous opportunity to remedy the mental health crisis.

The Promise of Digital Mental Health Interventions

Digital mental health interventions hold the promise of alleviating several barriers to treatment. First, along with telepsychology, digital psychotherapy greatly reduces the geographic barrier to treatment. Accessing web- or app-based mental health services is technologically within reach for most Americans: more than 75% have their own internet subscription and 85% own a smartphone (91% report having at least one of these) (Perrin, 2021). Second, attempting to address a large disordered population with limited clinical resources has created extremely long wait times (Wang et al., 2023), during which symptoms may worsen and intent to engage in treatment may waver. Digital mental health interventions, however, may be accessed as needed, eliminating wait-time barriers for users and potentially reducing wait times for patients who would prefer a human clinician. Third, while the ability to discreetly receive mental health services through one’s home computer or smartphone is unlikely to directly affect cultural stigmas about mental illness, it may lessen stigma’s force as a barrier to treatment.

Beyond bridging these barriers to treatment, digital mental health interventions have distinctive advantages over in-person psychotherapy. Certain apps track users’ location and device usage, which can help predict mood shifts and identify mental disorders (Mendes et al., 2022; Torous et al., 2021), as well as assist patients in self-monitoring behaviors (e.g., food intake, sleep). When used adjunctively with in-person therapy, tracking apps can also reduce recall bias and alert clinicians to worrisome behavior between sessions. Digital mental health interventions may also support in-person treatment by managing medication adherence (through reminders, serving as a patient record, transmitting information to a psychiatrist) and by reinforcing skills learned in therapy between sessions. Finally, patients are able to access digital treatments on an as-needed basis and so can have urgent matters addressed immediately rather than waiting for a scheduled session or coordinating with a clinician’s limited availability.
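To make the tracking idea concrete, the sketch below shows, in Python, how passively sensed signals might be combined into a crude flag for a possible mood shift. It is only an illustration under invented assumptions: the feature names, thresholds, and rule are hypothetical and do not correspond to any particular app or validated clinical model.

```python
from dataclasses import dataclass

@dataclass
class DailySignals:
    """Hypothetical passively sensed features for one day (names are illustrative)."""
    km_traveled: float    # mobility, from location data
    screen_unlocks: int   # device-usage proxy for engagement
    hours_slept: float    # sensor-estimated or self-reported sleep
    meals_logged: int     # self-monitored food intake

def flag_possible_mood_shift(week: list[DailySignals]) -> bool:
    """Crude rule: flag when mobility, sleep, and self-monitoring all decline together.

    This fixed-threshold rule is only a sketch; it assumes `week` is non-empty.
    """
    avg_travel = sum(d.km_traveled for d in week) / len(week)
    avg_sleep = sum(d.hours_slept for d in week) / len(week)
    avg_meals = sum(d.meals_logged for d in week) / len(week)
    return avg_travel < 1.0 and avg_sleep < 6.0 and avg_meals < 2.0
```

In practice, such apps rely on validated instruments and statistical or machine-learned models rather than fixed thresholds; the sketch only conveys the general shape of the approach.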

The Application of Digital Mental Health Interventions

Pointing to studies suggesting that few mental health apps currently on the market offer the innovative advantages discussed above (Camacho et al., 2022), including the ability to manage safety-related crises (Parrish et al., 2022), many clinicians believe that their revolutionary promise is overstated. Nevertheless, with research and development still at a very early stage, there is no principled reason to believe that digital mental health interventions will not eventually realize these capabilities. Concerns that the practice of psychotherapy requires a degree of creativity, empathy, or intuition that digital programs lack resemble earlier arguments about the various properties computers supposedly lacked that rendered them inferior at chess. And, as the efficacy of bibliotherapy (Vries et al., 2017; Yuan et al., 2018) attests, psychotherapeutic success does not always require the presence of any human-specific properties.

Digital therapies may be particularly well suited as an element of stepped care, an integrative model of treatment (Wakefield et al., 2020) wherein a patient is assessed to determine their psychiatric needs and then assigned to the most resource-effective, least intensive method of care possible, “stepping up” to more intensive methods as needed. The lower “steps” typically involve bibliotherapy as well as handouts and workbooks, so digital interventions utilizing web- or app-based software, AI chatbots, or virtual/augmented reality systems (including those blended with in-person therapeutic support, which at present appears especially effective [Moshe et al., 2021]) appear to be a natural next step before engaging in group therapies, individual therapies, and pharmaceutical interventions.

Challenges to Digital Mental Health Interventions: Content and Structure

It is useful to distinguish between what might be called content and structural issues when considering the development and utilization of digital mental health interventions. Content issues are challenges that arise in the course of normal problem solving and that fall within the boundaries of the relevant field or enterprise; structural issues relate to features inherent in the field or enterprise itself.

There are several content issues that need to be addressed before the promise of digital mental health interventions will be realized. For instance, although digital treatments appear most suitable when used adjunctively with a human psychotherapist, there has as yet been no instruction from professional mental health organizations on how to incorporate AI devices into practice. Similarly, although there have been attempts to establish guidelines for the ethical use of AI, none yet exist for the field of mental health (Fiske et al., 2019), leaving both clinicians and patients reliant on developers’ intuitions about what is ethically appropriate in the psychotherapeutic context.

This lack of institutional clarity is particularly concerning given that research on the safety of mental health chatbots is still in its infancy. A recent meta-analysis (Abd-Alrazaq et al., 2020) uncovered only two studies assessing the safety of chatbots for mental health and, although both concluded that chatbots are safe, both had a high risk of bias. In another systematic review (Taher et al., 2023), over one-third of the included studies reported no safety data, while the remainder used inadequate or widely varying safety-assessment methods. This has led some to worry that institutional and ethical frameworks will only be addressed after some harm has occurred (Cresswell et al., 2018).

At the level of the technology itself, it is still an open question whether artificial intelligence has advanced enough for chatbots to be effective stand-alone therapists. The general-purpose chatbots (such as ChatGPT) that have popularized large language models and machine learning can misleadingly give the impression that mental health chatbots have the same conversational range and ability. Narrow artificial intelligence—the kind used in mental health chatbots—excels at performing discrete and clearly delineated tasks, but, because the human meaning system and capacity for suffering are so vast, it is not clear that psychotherapy is such a task, or that it can be divided into a series of such tasks (Grodniewicz & Hohol, 2023).

In addition to producing a product that works, public administrators must figure out how to make the public aware of the existence of digital treatments for mental health and ensure that they are accessible to potential users. This is a multilayered awareness and outreach problem. Nevertheless, as with the institutional and technological challenges, generating awareness of and access to digital treatments is a matter of employing the routine methods of marketing, public health, and public administration.

The structural issues, by contrast, stem from constitutive features of digitized mental health treatments themselves. For instance, certain psychotherapeutic modalities (e.g., play therapy, psychodynamic psychotherapies) are prima facie incompatible with digitization because they rely on the presence of a human clinician. Many consider transference to be an essential feature of psychodynamic approaches, one that distinguishes them from other modalities (Blagys & Hilsenroth, 2006; Luborsky et al., 1990); because no human therapist is present, digital programs are ill-equipped to deliver psychodynamic treatments that emphasize the transference relationship.

Another structural issue is that elements of certain digital interventions seem incompatible with the treatment of certain conditions. Patients with paranoid symptoms may be uncomfortable being monitored by tracking apps. Patients with psychotic features who engage with an AI chatbot may struggle to differentiate reality from the conversations they are having with it. Anxious patients may find that using internet- or app-based mental health treatments is easier than leaving home to see a psychotherapist in person, exacerbating symptomatic isolation (unsurprisingly, research reveals that people with social anxiety spend large amounts of time on the internet and avoid in-person social interactions [O’Day & Heimberg, 2021]). The point here is not to catalogue which disorders are inappropriate foci for digital treatments, but rather to identify the not-always-obvious ways in which features of digitized treatments can be in tension with the conditions they are designed to treat.

To truly appreciate the force of the structural challenges to the widespread adoption of digital mental health interventions, we must detour briefly through the philosophy of mind. In the following section I present two problems of consciousness—namely the problems of how experience and meaning reside within or are generated by physical systems. In so doing, I will reveal two potentially intractable limitations of digital programs that may make their application in mental health settings less appealing to certain users. I will return to the effect of these limitations in the final section.

The Fundamental Asymmetry

Many of the issues with digital mental health interventions noted above will be resolved. As the field matures, more rigorous research will be devoted to measuring their safety and efficacy, institutions will develop practical guidelines and ethical codes for the use of digital treatments in mental health care, and advancements in the technology will gradually allow for more and more sophisticated discriminations and human-like problem-solving. Even some of the structural incompatibilities between certain features of digital treatments and certain mental disorders may turn out to be content issues as developers employ creative solutions to these difficult problems.

However, there are more profound structural issues at the heart of digital psychotherapies that cannot so easily be resolved and which result from a fundamental asymmetry between humans and digital programs. First, when we think, perceive, and act there is, in addition to complex brain activity integrating multiple information processing systems, an experience of thinking, perceiving, and acting. There is something it is like (Nagel, 1974) to be a conscious organism—to read this sentence, to see a color, to fall in love, to be you. The problem is that we have no idea how this experiential stuff—what philosophers refer to as qualia—resides within, arises from, or exists alongside physical systems. As philosopher Colin McGinn puzzles: “How can technicolour phenomenology arise from soggy grey matter?” (McGinn, 1989, p. 391).

This problem—of figuring out how physical systems give rise to our phenomenal life—has been dubbed the hard problem by philosophers and cognitive scientists studying consciousness. It is helpfully contrasted with the easy problem: the problem of explaining all the functions and processes of the brain, including mapping every neuron, tracing every connection between cognitive systems, and understanding how internal states are made accessible to self-reflection, among much else. The easy problem is so called because it will, with enough time, yield to the normal problem-solving strategies of the relevant fields. However, even when we understand every physical feature of the brain, the hard problem will remain unanswered, for one cannot understand a fundamentally subjective phenomenon (such as experience) by reducing it to its objective (physical) components without overlooking the target of one’s study (Nagel, 1974; Chalmers, 1996).

The second problem is similarly intractable. The remarkable, but often unremarked upon, fact is that our mental states represent something about the world; your memory of breakfast, nervousness about a test, and thoughts about this article represent breakfast, a test, and this article. How it is that our thoughts are about or represent properties or states of affairs—what philosophers refer to as their intentional content—is as mysterious a problem as any other. As the cognitive scientist and philosopher Zenon Pylyshyn explains, “what meaning is, or how it gets into my head” is “probably the second hardest puzzle in philosophy of mind” (Pylyshyn, 1984, p. 23).

Despite attracting brilliant minds from philosophy, cognitive science, neuroscience, and computer science, no account of consciousness currently on offer holds anything like a consensus view, and we are no closer to explaining the twin mysteries of how qualia and intentionality occur or how they relate to physical brain states. That is not to say that there aren’t thriving and successful research programs shedding light on the structure of the brain by identifying neural correlates of conscious thought (Koch et al., 2016), brain circuitry associated with emotions (LeDoux, 2000), and even the inferred evolutionary function of various mechanisms based on their dysfunction in mental pathology (Nesse, 2019). These programs provide invaluable insights into the nature and structure of the brain, but none addresses the problems of how experience and meaning reside within, are produced by, or exist alongside the brain.

This bears on the development of digital mental health interventions because, however deep the problems of qualia and intentionality are for understanding human minds, they pose even greater problems for digital systems. We are unsure how the brain gives rise to experience and meaning, but we are sure that it does; we are sure that digital programs do not (yet), and we do not know how to get them to do so.

To better grasp just how fundamental the asymmetry between human therapists and digital mental health interventions is, consider the following. In a now famous thought experiment, Frank Jackson (1982) asks us to consider Mary, a brilliant scientist who has lived her whole life in a black and white room. Mary specializes in the neurophysiology of vision despite never having personally seen a color. Further, imagine that Mary has come to possess all the physical information there is about color perception: the precise wavelength combinations that stimulate the retina, their effects on the central nervous system, and so on. Jackson asks us to consider what happens when Mary leaves the black and white room and sees, for the first time, a clear blue sky or a cherry-red fire truck: will she learn anything about color? Jackson answers that she will—she will learn what it is like to see a color.

This is the situation we are in with digital psychotherapies. Even the most sophisticated program is, at present, similar to Mary prior to leaving the black and white room. Programmed to possess every piece of physical information there is about psychotherapy, the human brain, mental disorders, social and cultural practices, and linguistic communication, these programs will still be unable to know what it is like to be happy or sad, relieved or in pain, connected or alone.

Similarly, though our understanding of how meaning resides in brain states is incomplete, we are much further from understanding how it could reside in digital programs. The problem is helpfully illustrated by considering how simple programs operate, giving the appearance of knowledge without actually possessing anything of the sort. Calculators, for example, do not know arithmetic though they appear to. Rather, they receive an input (say, “1 + 1”), consult a program that provides instructions for when this input obtains (when the condition “1 + 1” obtains, produce the “2” symbol), and then output the predetermined programmatic response (“2”) on the screen. Similarly, many chatbots utilize a “frame-based” dialogue system (Harms et al., 2019) that retrieves specific information from the user by identifying key words or phrases in their responses that have been predetermined as salient to the narrowly defined goal or purpose of the app (Pandey et al., 2022). So, for example, when a CBT chatbot asks “How are you feeling?” it will treat the user’s response (say, “I feel like an idiot. I always mess everything up”) as input with prespecified formal elements (for instance, phrases such as “idiot,” “jerk,” or “foolish” marking the cognitive distortion of labeling, and phrases such as “always,” “never,” or “forever” marking the cognitive distortion of overgeneralization) that it then filters through a program providing computational operations to perform on these elements (when labeling and overgeneralization are both detected, output the corresponding script). Just as the calculator does not know arithmetic, the chatbot does not know what user inputs mean, but it can mimic that knowledge through its interactions.
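The kind of frame-based keyword matching just described can be sketched in a few lines of Python. This is a deliberately simplified illustration rather than the architecture of any actual product; the distortion labels, keyword lists, and canned scripts are invented for the example.

```python
# Toy sketch of frame-based dialogue handling: the program never understands
# the user's words; it only checks them against prespecified keyword lists
# and emits a prewritten script.

# Hypothetical keyword "frames" mapping surface phrases to distortion labels.
DISTORTION_KEYWORDS = {
    "labeling": ["idiot", "jerk", "foolish", "loser"],
    "overgeneralization": ["always", "never", "forever", "everyone"],
}

# Hypothetical canned responses keyed by the set of distortions detected.
SCRIPTS = {
    frozenset({"labeling", "overgeneralization"}):
        "It sounds like you're labeling yourself and overgeneralizing. "
        "What evidence do you have for and against that thought?",
    frozenset({"labeling"}):
        "Calling yourself names is a kind of labeling. "
        "How would you describe a friend in the same situation?",
    frozenset(): "Tell me more about what happened.",
}

def detect_distortions(user_text: str) -> frozenset:
    """Return the distortion labels whose keywords appear in the text."""
    text = user_text.lower()
    return frozenset(
        label
        for label, keywords in DISTORTION_KEYWORDS.items()
        if any(word in text for word in keywords)
    )

def respond(user_text: str) -> str:
    """Look up a prewritten script for the detected pattern; no understanding involved."""
    detected = detect_distortions(user_text)
    return SCRIPTS.get(detected, SCRIPTS[frozenset()])

if __name__ == "__main__":
    print(respond("I feel like an idiot. I always mess everything up."))
```

The point of the sketch is that the path from input to output is exhausted by string matching and table lookup; nothing in the program corresponds to grasping what “feeling like an idiot” means.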

Of course, whether digital programs are capable of qualia and intentional content remains hotly debated. What these reflections reveal, however, is just how great the asymmetry is between a human and digital psychotherapist, no matter how effective the latter is or will be. The question is: will this affect the utilization of digital mental health interventions?

Does Matter Have Meaning? Does Meaning Matter?

The impact of these problems on the usage and effectiveness of digital psychotherapies will be variable. Already, at the time of this writing, a Belgian man had developed a deep and intimate companionship with an AI chatbot named Eliza that culminated in his suicide at the app’s encouragement (Lovens, 2023). The man in question undoubtedly felt an emotional connection with this chatbot and was convinced of its emotional connection with him as well. Although the example is tragic, it reveals the capability of a sophisticated chatbot system to mimic human emotions.

While some patients will be reassured by a digital program’s ability to simulate emotional and intentional states, many others will not care whether the digital mental health interventions with which they interact are conscious so long as their problems are remedied. As Dhruv Khullar (2023), physician and contributing writer to The New Yorker, writes after working with an app to treat his anxiety: “I knew that I was talking to a computer, but in a way I didn’t mind. The app became a vehicle for me to articulate and examine my own thoughts. I was talking to myself.”

However, for some portion of the patient population “talking to myself” is not what they want or need from psychotherapy. Many patients—perhaps those more interested in self-understanding and personal development than symptom remediation and treatment—will need to engage with a provider that can know and feel in order to meet their goals.

The foreknowledge that a digital mental health provider does not (and cannot) have feelings or genuinely know how you feel will almost certainly affect the utilization of digital psychotherapies. Knowing that one’s therapist is incapable of genuinely caring about one’s wellbeing is likely to negatively affect, and perhaps entirely prevent, the development of a therapeutic alliance (Tekin, 2023). Analogously, the foreknowledge that a therapist shares one’s racial identity appears to be a salient factor for many prospective patients in selecting a mental health provider, as they perceive racially concordant providers as having similar lived experience and therefore as more capable of understanding and empathizing with their struggles (Moore et al., 2022). If sharing experiences with the patient is important for a therapist’s credibility, digitized psychotherapeutic interventions that share no experiences and have no understanding at all will likely be less readily accepted by certain patients.

Consider, for example, how likely you would be to enter a group therapy program (say, for addiction) in which every other participant is a digital program. Their life stories may be realistic and may even shed light on facets of your own experience with addiction, but the foreknowledge that their lives are fabrications and that they are incapable of feeling anything at all seems likely to affect both how one would experience such a treatment and one’s desire to enter into it in the first place. Knowing that the relationship one is cultivating with one’s therapist, however convincing, is only a sophisticated replica of those experienced or sought with other people will no doubt be unsatisfying and unhelpful to many.

Conclusion

Digital mental health interventions have the potential to revolutionize psychotherapy. The current mental health treatment landscape is marred by barriers to treatment—geographic distance to treatment centers, practitioner availability, cost of treatment, and stigma all affect a patient’s ability to access care—and internet- or app-based treatments have the potential to eradicate or drastically reduce the force of each of these. Once digital mental health interventions are within reach of everyone with a smartphone or internet access, clinician scarcity will no longer be an obstacle to receiving care and the door to mental health treatment will swing open for possibly millions of people.

Nevertheless, widespread adoption of digital psychotherapies faces several challenges. Some of these—the need for improved technologies, for guidance from professional organizations about the effective and ethical use of digital therapies, and for digital interventions suited to prima facie incompatible disorders—will almost certainly yield to motivated public health officials and clever developers. However, other problems—such as those posed by qualia and intentionality—appear intractable.

This should not be misunderstood as a death knell for digital mental health interventions. On the contrary, digital therapies have the potential to revolutionize the field of mental health by delivering treatment to those otherwise unable to access it and by offering therapeutic features that human clinicians do not. Those unaffected by digital programs’ inability to have experiences or possess meaning may argue that, so long as one’s symptoms are in remission, who—or what—is effecting that change is immaterial. Others—those for whom it matters that their psychotherapist understands and is emotionally invested in their development—may be left out of the revolution.