Digital technologies are increasingly intertwined with lived experiences of well-being. The ways in which we use technologies, and the ways in which technologies use our data, affect our psychological, social, economic, medical, and safety-related well-being. For example, being able to check in on the well-being of others during natural disasters can bolster the strength of our physical-world communities and enhance our personal feelings of safety (Redmiles et al. 2019). In the health space, there is growing excitement and promising evidence for prescribing technologies to aid in the management of chronic illness (Byambasuren et al. 2018).

Despite their potential to improve our well-being, these same technologies can also cause harm. Much of the dialogue regarding the harms of well-being technologies focuses specifically on data privacy risks: how the misuse of user data can create psychological, social, economic, or safety-related harms (Vitak et al. 2018; Redmiles et al. 2019).

Privacy has been shown to be a key, and growing, concern for users when deciding whether to adopt new technologies, including well-being-related ones. However, privacy is far from the only consideration that affects whether a user will adopt a new technology. Here, I argue that we have developed a reductionist focus on privacy when considering technology adoption. This focus has prevented us from effectively achieving adoption of beneficial technologies and risks perpetuating and magnifying inequities in technology access, use, and harms.

By focusing exclusively on data privacy, we fail to fully capture users’ desire for respectful technologies: systems that respect a user’s expectations both for how their data will be used and for how the system will influence their life and the contexts surrounding them. I argue that users’ decisions to adopt a new technology are driven by their perception of whether that technology will be respectful.

A large body of research shows that users’ technology-adoption behavior is often misaligned with their expressed privacy concerns. While this phenomenon, the privacy paradox, is explained in part by the effect of many cognitive biases, including endowment and ordering effects (Acquisti et al. 2013), it should perhaps not be such a surprise that people’s decisions to adopt or reject a technology are based on more than just the privacy of that technology.

Privacy calculus theory (PCT) agrees, going beyond privacy alone to also consider benefits, arguing that “individuals make choices in which they surrender a certain degree of privacy in exchange for outcomes that are perceived to be worth the risk of information disclosure” (Dinev and Hart 2006). However, as I illustrate below, placing privacy on one side of the scale as the sole detractor from adopting a technology and outcomes (or benefits) on the other remains too reductionist to fully capture user behavior, especially in well-being-related settings.
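To make the shape of this critique concrete, PCT’s trade-off can be caricatured as a single comparison. This is a deliberately minimal sketch in my own notation, not the structural model Dinev and Hart actually estimate:

```latex
\text{adopt} \iff
  \underbrace{\text{perceived benefit}}_{\text{the lone gain}}
  \;>\;
  \underbrace{\text{perceived privacy risk}}_{\text{the lone detractor}}
```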

The incompleteness of a privacy-only view toward designing respectful technologies was exemplified in the rush to create COVID-19 technologies. In late 2020 and early 2021, technology companies and researchers developed exposure notification applications designed to detect exposures to the coronavirus and notify app users of those exposures. These apps were created to replace and/or augment manual contact tracing, which requires human tracers to call those who have been exposed in order to trace their networks of contact.

In tandem with the push to design these technologies was a push to ensure that these designs were privacy preserving (Troncoso et al. 2020). While ensuring the privacy of these technologies was critically important for preventing government misuse and human rights violations, and for addressing users’ concerns, people rarely adopt technologies just because they are private (Abu-Salma et al. 2017). Indeed, after many of these apps were released, only a minority of people adopted them. Missing from the conversation was a discussion of users’ other expectations for COVID-19 apps.

Privacy calculus theory posits that users trade off privacy against benefits and, in so doing, make decisions about which technologies to adopt. However, empirical research on people’s adoption considerations for COVID-19 apps finds a more complex story (Li et al. 2020; Redmiles 2020; Simko et al. 2020). People consider not only the benefits of COVID-19 apps (whether the app can notify them of a COVID exposure, for example) but also how the efficacy of the app (how many exposures it can actually detect) might erode those benefits. Indeed, preliminary research shows that efficacy considerations may be far more important in users’ COVID-19 app adoption decisions than benefit considerations (Learning from the People: Responsibly Encouraging Adoption of Contact Tracing Apps 2020). On the other hand, privacy considerations are not the only potential detractors: people also consider the costs of using the system, both monetary (e.g., the cost of mobile data used by the app) and usability-related (e.g., reduced phone battery life from running the app).
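Folding these empirical findings into the minimal sketch above, the implicit calculus gains terms on both sides. Again, this notation is my own illustration, not a model estimated in the cited studies:

```latex
\text{adopt} \iff
  \underbrace{\text{efficacy} \times \text{benefit}}_{\text{gain, discounted by how well the app works}}
  \;-\;
  \underbrace{(\text{privacy risk} + \text{monetary cost} + \text{usability cost})}_{\text{detractors beyond privacy alone}}
  \;>\; 0
```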

People’s adoption considerations for COVID-19 apps exemplify the idea of respectful technologies: those that provide a benefit with a sufficient level of guarantee (efficacy) in exchange for using the user’s data, with the potential privacy risks resulting from such use, at an appropriate monetary and usability cost. While COVID-19 apps offered benefits and protected user privacy, app developers and jurisdictions initially failed to evaluate the efficacy and cost of what they had built, and failed to be transparent with users about both. As a result, people were unable to evaluate whether these technologies were respectful, and the adoption rate of a technology that had the potential to significantly benefit individual and societal well-being during a global pandemic remained low.

Examining the full spectrum of people’s respectful technology-related considerations is especially critical for well-being-related applications for two reasons.

First, a multitude of types of well-being are increasingly being addressed by technology, from natural-disaster check-in solutions to mental-health treatment systems, each with a corresponding variety of harms, costs, and risks that users may consider. If we focus strictly on the privacy-benefit tradeoffs of such technologies, we may miss critical adoption considerations, such as whether the user suspects they might be harassed while using, or for using, a particular technology (Redmiles et al. 2019). Failing to design for and examine these additional adoption considerations can be a significant barrier to increasing adoption of technologies that are commercially profitable and individually or societally beneficial.

Second, different aspects of respectful technologies are prioritized by different sociodemographic groups (Learning from the People: Responsibly Encouraging Adoption of Contact Tracing Apps 2020). For example, older adults focus more on the costs of COVID-19 apps than do younger adults, while younger adults focus more on the efficacy of these apps than do older adults. Ignoring considerations other than privacy and benefits can perpetuate inequities in whose needs are designed for in well-being technologies and, ultimately, in who adopts those technologies. Such equity considerations are especially important for well-being technologies, for which equitable access is critical and for which an inequitable distribution of harms can be especially damaging.
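A toy model can make this concrete. The Python sketch below scores the same hypothetical app under two invented weight profiles; the factor weights, the app’s scores, and the scoring function are all illustrative assumptions of mine, loosely inspired by the finding above, not measurements from the cited report:

```python
# Hypothetical, invented weights: older adults weight costs more heavily,
# younger adults weight efficacy more heavily. Not fitted to any data.
WEIGHTS = {
    "younger adults": {"benefit": 0.30, "efficacy": 0.35, "privacy_risk": 0.20,
                       "monetary_cost": 0.05, "usability_cost": 0.10},
    "older adults":   {"benefit": 0.30, "efficacy": 0.15, "privacy_risk": 0.20,
                       "monetary_cost": 0.20, "usability_cost": 0.15},
}

def adoption_score(perceptions, weights):
    """Weighted sum of perceived factors in [0, 1]: benefit and efficacy
    count as gains, everything else as detractors. Purely illustrative."""
    gains = (weights["benefit"] * perceptions["benefit"]
             + weights["efficacy"] * perceptions["efficacy"])
    losses = sum(weights[f] * perceptions[f]
                 for f in ("privacy_risk", "monetary_cost", "usability_cost"))
    return gains - losses

# A hypothetical app that is very private but battery-hungry, costly in
# mobile data, and of uncertain efficacy.
app = {"benefit": 0.8, "efficacy": 0.3, "privacy_risk": 0.1,
       "monetary_cost": 0.6, "usability_cost": 0.7}

for group, weights in WEIGHTS.items():
    print(f"{group}: {adoption_score(app, weights):+.2f}")
```

Under these invented weights, the same privacy-preserving but costly app scores markedly lower for the cost-sensitive group, illustrating how a design lens limited to privacy and benefits can leave some groups’ needs unmet.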

Thus, to ensure commercial viability and adoption of well-being technologies, and to avoid perpetuating and magnifying well-being inequities through the creation of such technologies, it is critical to build respectful well-being technologies. Technology creators and researchers must not only consider the privacy risks and protections of such technologies—and the technology’s benefits—but also the contextual, cost, and efficacy considerations that together make up a potential user’s view of whether a well-being technology is respectful of them and their data. To do so, two approaches are necessary: first, direct measurement of the cost and efficacy of technologies produced, in line with expectations for evidence from other fields such as health (Burns et al. 2011), and second, direct inquiry with potential users to understand contextual and qualitative costs. By combining these two approaches to empirical measurement, we can better create well-being technologies that are both effective and respectful.