Current Treatment Options in Psychiatry

Volume 6, Issue 3, pp 232–242

Towards the Design of Ethical Standards Related to Digital Mental Health and all Its Applications

  • Til Wykes
  • Jessica Lipshitz
  • Stephen M. Schueller
Open Access
Technology and its Impact on Mental Health Care (J Torous and T Becker, Section Editors)


Purpose of the review

Digital health technologies offer tremendous potential in increasing access to services and augmenting existing services. Utilizing these technologies, however, poses new ethical considerations for clinicians, researchers, and healthcare organizations. These issues have been particularly apparent recently with several public instances of misuse of digitally available personal data. Responsibility for ethics is distributed among creators, end users, and purveyors, which has meant that this aspect of digital technology production and use tends to be treated as someone else's problem.

Recent findings

In this overview, we discuss key ethical issues and dilemmas in order to drive ethical implementation, future technology development, and potential formal and informal regulation. Key considerations discussed include risk-benefit ratios, privacy and data security, ethical development of digital mental health tools, ethical research processes, and informed consent. Concrete recommendations are made for different stakeholders in digital mental health.


Summary

Digital mental health tools raise ethical considerations that must be addressed for the public, patients, clinicians, and health services to feel confident in their use. It will be essential for all groups to recognize their responsibilities and begin to shape frameworks for ethical development and implementation.


Keywords: Digital psychiatry · eHealth · mHealth · Ethics · Apps and smartphones


The promise of technology to revolutionize mental health is surrounded by hype. As an example, the Academy of Medical Sciences described digital health as a fast-developing technology that will “transform the way that health and social care is delivered” [1]. This hype is understandable as such a revolution is warranted. It is recognized that it will not be possible in the near future to address the burden of mental health through training professionals alone, and even if we could, some people might desire (or require) alternative modalities to receive mental health support. However, this hype also serves to detract from potential drawbacks of the application of technology to mental health. Mental health technology comes with a responsibility to determine appropriate ethical standards in the development, research, and integration of these technologies into society and clinical care. Doing so will require a careful review of which existing standards are relevant for these technologies and should balance each issue or problem alongside relevant benefits, cultural norms, and values.

In this treatment of ethics, we adopt both an aspirational as well as a pragmatic approach. We are not ethicists, but rather are mental health and mental health services researchers who have extensive experience in the digital mental health space and have worked with consumers, developers, clinicians, healthcare organizations, evaluators, and payers.

Our pragmatism starts with defining what we are referring to by digital mental health. A narrow definition would consider only technologies intended for those who have a mental health diagnosis. Many products, however, attempt to avoid liability by stating that they do not provide medical advice, diagnosis, or treatment and then proceed to offer content and tools that address mental health issues. We, therefore, adopt a comprehensive definition of digital mental health that includes all technologies that provide treatment and management of mental health problems. This includes technologies that address social, psychological, and biological factors that contribute to mental health, such as apps, wearable devices, and virtual reality products. We also do not define mental health as only the presence or absence of a mental health diagnosis but include emotional, psychological, and social well-being that spans a continuum from flourishing to languishing [2]. Therefore, we are pragmatic in that we recognize that the technologies being advertised to people on the promise of mental health benefits are much broader than those that directly address mental health diagnoses, so our ethical guidelines must address this broader array of technologies.

We are aspirational in believing that ethical guidelines developed today could guide the development and uses of technologies in the future. If we develop ethical guidelines only in response to technology, we will likely always be developing guidelines in the midst of disasters—ineffective technologies pushed on uninformed consumers, data breaches, and failures of people to seek effective care in the face of flashy alternatives.

Multiple ethical codes are relevant to this space, including those drawing from health professionals as well as technology developers. First, ethical codes support the development and application of digital mental health. For example, the American Medical Association’s principles of medical ethics require that physicians “support access to medical care for all people.” Similarly, the General Principles of the American Psychological Association’s Ethics Code [3] include the principle of Justice, which indicates that services be made accessible to all. Insofar as digital mental health meaningfully extends the reach of services to those with more limited access, incorporation of these technologies into practice is inherently part of our ethical obligation. Most importantly, however, these codes define acceptable behavior to protect the client, especially in the context of power differentials inherent in the client-clinician relationship. We need to be careful not to develop and spread technologies because we think it is the right thing to do as the “experts” without consideration of the most critical issues for those who will be affected by these technologies. In light of this, we have previously suggested four simple Transparency for Trust (T4T) principles [4••], published in May 2019. These are based on patient and regulatory perspectives, recent systematic reviews, and experimental studies (e.g., [1, 5, 6, 7, 8•, 9, 10•, 11, 12•]). The principles cover privacy and data security, development characteristics, feasibility data, and benefits. They were developed to fill the information void for consumers at the point of download, where we know that consumers trade off information to make choices; e.g., they may want strict privacy, or they may choose apps with more efficacy information.

The T4T principles are a starting point, but in this paper, we comment on additional ethical concerns that expand on these principles by highlighting other areas that need to be addressed when considering the design and use of technologies for mental health purposes.

Regulatory issues

If clinicians are prescribing or recommending digital treatments, they need to understand their efficacy and safety. Regulatory bodies such as the Food and Drug Administration (FDA) in the USA and the National Institute for Health and Care Excellence (NICE) in the UK have traditionally been a resource to clinicians by evaluating the efficacy and safety of pharmaceuticals and medical devices. In recent years, there has been movement towards these bodies offering the same evaluations for digital mental health treatments.

But digital mental health treatments challenge traditional medical regulatory structures. Digital mental health tools change regularly, which raises the question of what is being approved. For example, reSET, the first behavioral health app approved by the FDA, was approved based on evidence collected on the web-based version of the treatment, not the app itself [13]. The content within both platforms might be the same, but in general, people use mobile apps differently than websites, with more frequent, shorter, and potentially less focused contacts [14]. New digital mental health tools are regularly being released, and some existing tools might disappear from the market quickly [15]. One attempt to address this issue has been independent app review. In the UK, NICE curates a health app library with in-depth reviews of the evidence, but few apps are available, and even fewer are aimed at mental health. Review organizations such as the non-profit PsyberGuide, a project of One Mind, and the for-profit ORCHA also provide reviews and guidance [16, 17], but even though they can work at a faster pace, they still produce relatively few balanced recommendations. An alternative approach to regulation is educating stakeholders on key aspects, exemplified by the American Psychiatric Association’s App Evaluation Framework [18•]. However, it is unclear whether education will be sufficient to guide stakeholders to safe and effective products. We need widely accepted and available information about which products work, as well as standardization around issues such as expected use.

Privacy and data security

Privacy and data security have become a focus for us all but have always been of interest to mental health service users [19]. We now know that there is no such thing as a free Internet search or a free app. Even the inventor of the World Wide Web, Sir Tim Berners-Lee, spoke out about this concern on the web’s 28th birthday in 2017. He pointed out that companies allow us to use free content in exchange for our personal data [20]. Ethical codes for dealing with personal data have been developed into laws such as the European Union General Data Protection Regulation (GDPR), which is based on the idea that individuals own their data and must consent to any uses. In the USA, data that reach covered entities fall under the Health Insurance Portability and Accountability Act (HIPAA), but data that an individual maintains outside of a health system do not. Breaches of the GDPR rules can result in large fines. Specific regulations are likely necessary to protect health data coming from personal devices and, specifically, to create standards around data storage and transmission to ensure that people’s mental health data are not used for purposes that they do not intend.
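This coverage gap can be made concrete with a toy sketch. The function below is a deliberate oversimplification for illustration only (the category names and the binary rule are our own, not a statement of the law):

```python
# Illustrative only: a crude sketch of why the same health record can be
# protected or unprotected depending on who holds it. Not legal advice.
COVERED_ENTITIES = {"health plan", "healthcare provider", "healthcare clearinghouse"}

def hipaa_applies(data_holder: str) -> bool:
    """Return True if the holder is (in this simplification) a HIPAA
    covered entity; data held elsewhere falls outside the framework."""
    return data_holder in COVERED_ENTITIES

# The same mood diary, two levels of protection:
print(hipaa_applies("healthcare provider"))   # True: clinic-held record is covered
print(hipaa_applies("consumer app vendor"))   # False: outside the health system
```

The point of the sketch is that protection turns on the data holder, not on the sensitivity of the data itself.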

In many cases, we have no access to these data and cannot pick and choose who else has access. Information about these rules is presented to individuals in long and complex Terms and Conditions statements, which must be accepted before the service can be used. Although we are sometimes given the option of deciding whether advertisers have our data, this is a blanket question; we do not have the opportunity to allow some and not others. Service users have always raised privacy as an issue when providing data that will be available to others [21, 22, 23]. The questions from the T4T principles [4••] are as follows: (1) what data leave the device? (2) how are those data stored (e.g., de-identified, encrypted)? and (3) who will have access to those data? It should be clear what data, if any, are being sold, to whom, and what steps are taken to ensure that users cannot be identified from those data.
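One way marketplaces could operationalize these questions is to require a structured, machine-readable answer to each before an app is listed. The sketch below is purely illustrative; the class and field names are hypothetical, not part of any published T4T schema:

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyDisclosure:
    """Hypothetical point-of-download disclosure answering the three
    T4T privacy questions; field names are our own invention."""
    data_leaving_device: list            # (1) what data leave the device?
    storage: str                         # (2) how are they stored?
    access: list                         # (3) who will have access?
    sold_to: list = field(default_factory=list)  # any data sales, stated explicitly

    def is_complete(self) -> bool:
        # A marketplace could refuse to list an app until every question is answered.
        return bool(self.storage) and bool(self.access) and self.data_leaving_device is not None

disclosure = PrivacyDisclosure(
    data_leaving_device=["mood ratings", "usage logs"],
    storage="encrypted in transit and at rest, de-identified before analysis",
    access=["developer analytics team"],
)
print(disclosure.is_complete())  # True
```

A structured disclosure like this would also make automated auditing possible, rather than burying the answers in Terms and Conditions text.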

Monitoring and assessing risks and benefits

The recommendation of health products is often based on a determination of the anticipated benefit relative to the potential risk. Traditionally, benefits and risks have been determined based on clinical trials and clinical evidence. Empirical studies are used to determine that an intervention works and to help identify potential concerns that might contraindicate its use. Once deployed, post-market surveillance is used to monitor the safety of a product. Digital tools, however, pose new challenges and offer new affordances. As mentioned earlier, digital tools evolve over time. Unlike drug formulations, which remain consistent, apps undergo iterative changes to address bugs and make feature updates. In fact, app updates might be one way to maintain the quality and benefits of a product [24]. Benefits and harms may then change over time, which will require continuous data collection; digital tools can offer this monitoring.

In any assessment, the benefits must be balanced against the risks, so health benefits should be clear before the user downloads a digital tool. This is not always the case, and some advertising is certainly misleading. Any digital tool should undergo safety testing, and if it purports to provide a benefit, it should be clear what that benefit is and who it has benefitted. For a balanced understanding, we also need information on whether the technology has been used in its intended population with user (clinician and/or patient) satisfaction and engagement, whether it has been used in its intended setting (i.e., commercial marketplace, adjunct to in-person specialty treatment, primary care), and what types of support have been required to implement it in research. This last issue is vital, as paying participants for use, providing back-up services, or providing a coach may be acceptable in research practice but unavailable in the wider world. Most interventions will go through at least one randomized controlled trial and typically report an effect size. Effect sizes are probably not understood by many, so designers should provide clarity so that users and clinicians can understand the number of people who benefit and by how much, as well as the number of people who do not benefit. No digital tool will help everyone, and if individuals do not receive benefit, they should at least understand that others had the same experience. They will then not attribute the problem to some individual failing, which may add to the severity of their mental health problems. The three T4T questions for apps from Wykes and Schueller [4••] related to risks and benefits are (1) what is the impact on the health condition? (2) what percentage of users received either no benefit or deteriorated? and (3) are there specific benefits that outweigh any costs?
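For illustration, a reported standardized effect size (Cohen's d) can be translated into quantities that are easier to grasp, using two standard conversions: McGraw and Wong's common-language effect size and Kraemer and Kupfer's number-needed-to-treat approximation. A minimal sketch:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def probability_of_benefit(d: float) -> float:
    """Common-language effect size (McGraw & Wong): the chance that a
    randomly chosen treated person improves more than a randomly chosen
    control."""
    return normal_cdf(d / sqrt(2.0))

def number_needed_to_treat(d: float) -> float:
    """Kraemer & Kupfer (2006) approximation: NNT = 1 / (2*AUC - 1)."""
    return 1.0 / (2.0 * probability_of_benefit(d) - 1.0)

# A "medium" effect (d = 0.5) means roughly 64% of treated users do better
# than a typical control; about 4 people must use the tool for one extra
# person to benefit, and over a third should expect no relative benefit.
d = 0.5
print(round(probability_of_benefit(d), 2))   # 0.64
print(round(number_needed_to_treat(d), 1))   # 3.6
```

Plain statements like "about two in three users did better than the average person without the app" are one way designers could meet the transparency standard argued for above.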

Including diverse panels of end users in development

The intended end users should be present at the beginning of the development and design process. Often this step is missed entirely until the tool has been developed, with designers relying at best on current evidence on intervention gaps and clinician views. It is vital, however, that ethical and responsible development practices include the target audience if the digital tool is to provide the most benefit. Following this principle would also be likely to increase engagement, a particular problem in digital mental health, since poor engagement means that fewer people actually receive a benefit. For instance, some tools meant to support those with depression are tested on groups with low-level symptoms (e.g., distressed undergraduates). The range and depth of problems will not be the same, and so this design process risks potential harm (see [25] for a feasibility and usability model study). T4T principles suggest three simple questions: (1) how were target users involved in the initial design? (2) how were target users involved in usability evaluations? and (3) has usability been independently evaluated?


Ethical research processes

Research ethics boards oversee research carried out on digital health. Despite differences across countries, these boards generally do not have deep expertise in overseeing the ethical issues raised by this type of research. Given the newness of the field and the unique ethical considerations that exist, appropriate representation of digital health researchers on these boards will be essential to supporting and directing ethically sound research advancements.

As an example, one common ethical concern in digital mental health research is the appropriate response to data collected outside of the context of study visits. In a standard clinical trial of cognitive-behavioral therapy or an antidepressant medication, most data is collected in study visits that occur during workday hours when study staff are adequately prepared to respond to indications of participant risk. For example, if a participant reports suicidal ideation at a study visit, the study team can and should do a thorough, in-person risk assessment to determine whether voluntary or involuntary hospitalization is necessary.

In digital mental health research, study assessments are often administered remotely via a smartphone or an online portal and participants can often choose when they want to complete these assessments. If a participant completes such an assessment at 11 pm on a Saturday evening and indicates suicidal ideation, should the researcher be expected to initiate a follow-up assessment that evening? If so, what is the appropriate course of action if the researcher cannot reach the participant via phone? Neither of these questions would come up in a clinical trial in which assessments were only conducted during study visits; however, just because we are not collecting assessments between study visits does not mean that participants were not experiencing suicidal ideation at these times. Assessing more frequently and remotely using technology has the benefit of offering a complete and more nuanced picture of the participant’s struggles with mental illness and can also introduce questions about risk management and participant safety.

All too often, these questions are dealt with by simply limiting assessments to in-person or phone visits or excluding items involving issues such as suicidality. While this skirts the ethical concerns around follow-up assessment during non-workday hours, it could be considered unethical insofar as it misses an opportunity to understand how the technology may affect participants and how it will have an impact on future patients. It also misses an opportunity to assess and intervene on that participant’s suicidal ideation, whether that intervention is an automated message with a helpline number or a follow-up call from a study clinician. Another option for dealing with these questions has been to require an on-call clinician to respond to any assessment alerts (e.g., reported suicidal ideation (SI)) within some time period, say 30 min. While this may seem appropriate, it can also make the technology less feasible to use in real-world contexts and, for the same reasons, it can make conducting digital mental health research prohibitively expensive. Again, the limits this can place on relevant scientific discovery may also raise ethical questions.
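One way to make such trade-offs transparent is for a study protocol to encode its escalation rules explicitly. The rules and thresholds below are invented for illustration only and are not a recommended clinical standard:

```python
from datetime import datetime, timedelta

# Invented protocol for illustration: severe alerts always page the on-call
# clinician; moderate alerts page during staffed weekday hours but fall back
# to an automated helpline message (plus next-day follow-up) otherwise.
STAFFED_HOURS = range(9, 17)              # assumed 9am-5pm staffing window
RESPONSE_WINDOW = timedelta(minutes=30)   # the 30-min window discussed above

def triage_alert(severity: str, reported_at: datetime) -> dict:
    staffed = reported_at.weekday() < 5 and reported_at.hour in STAFFED_HOURS
    if severity == "severe" or staffed:
        return {"action": "page_on_call_clinician",
                "respond_by": reported_at + RESPONSE_WINDOW}
    return {"action": "send_automated_helpline_message",
            "follow_up": "study clinician call on the next working day"}

# The Saturday 11pm scenario from the text, at moderate severity:
alert = triage_alert("moderate", datetime(2019, 6, 8, 23, 0))  # a Saturday
print(alert["action"])  # send_automated_helpline_message
```

Writing the rules down in this form forces a study team (and its ethics board) to decide in advance what happens in every cell of the severity-by-time grid, rather than discovering gaps during an emergency.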

Risk monitoring is just one example of ways in which digital mental health research may differ from standard clinical trials. Other issues related to data security (e.g., the fact that in digital health research emails are often used for correspondence instead of mail or phone calls or that third parties, like the company who developed the app, will likely have access to personal health information (PHI) like participant phone numbers); end-user license agreements (which are often required for use of products developed by companies, but may need to be waived for a research study); and communicating the degree of clinician monitoring (e.g., participants may assume that because a study is being conducted at a clinic where they are receiving care that the data will be transmitted to and monitored by their clinician) also pose challenges specific to digital health research. Establishing guidelines for Ethical Review Boards such that these challenges are managed ethically in ways that protect research participants and allow for scientific innovation will be important as research in this domain expands.

Informed consent, terms of service, and end-user license agreements

In clinical research and clinical practice, potential consumers or participants complete a consent process to become informed participants in the decision to undertake the research project or treatment. For software, consumers often agree to end-user license agreements or terms of service that set out their rights for using the software and the rules they agree to for its use. It is worth noting that the goals of these processes are quite different, with informed consent requiring that individuals be able to understand the information to make an informed decision. Terms of service and end-user license agreements are legal documents that serve to protect the developer and their products and ensure the consumer uses the products in expected ways.

Although informed consent documents are often reviewed and approved by regulatory bodies such as NHS Ethics or Institutional Review Boards, similar processes do not exist for the terms of service and end-user license agreements of digital mental health products. As legal documents, the primary question is whether they would stand up in legal processes. Such documents could be improved by being shorter and simpler and by including checks on relevant understanding. A recent review found that many digital mental health products lack such policies and that those that exist are often unacceptable (O’Loughlin, Neary, Adkins, & Schueller, 2019). As such, better attempts to inform potential consumers about the limits, risks, and potential benefits should be made within these products.


Recommendations

1. Developers need to design systems that are safe, trustworthy, and aligned with the values and preferences of those who are influenced by their actions. They need to be aware of their hidden assumptions when they build systems on available data. The resulting digital tool may have unintended consequences, particularly the potential to amplify gaps in healthcare. Developers therefore need to abide by the Heston criterion that whatever is built is “best for people,” and we would add that it should be best for health systems in the long run too. Service user/consumer involvement should be present from the beginning, and as such, developers need to be more collaborative in their efforts to engage their target audiences.

2. Researchers ought to consider not just their university or health service ethics committee rules but also the standards of the field more broadly. Establishing broad guidelines for ethics committees around issues such as data security standards, risk assessment and response, conducting informed consent, and involving digital health researchers on institutional review boards will be essential to performing ethical, cutting-edge research. All research ethics boards will need help to develop the expertise to evaluate and advise on such projects, and ultimately that help will fall to researchers, who must educate these boards. The Connected and Open Research Ethics (CORE) Initiative at the University of California, San Diego, is one example of a consolidated effort related to digital health research, but we need more efforts to facilitate moving the field forward.

3. Clinicians must have clear guidance from professional and regulatory bodies on which technologies are safe and effective, as well as when they are appropriate to use. Increasingly, opportunities will have to be made available for clinicians to receive training and education on the uses of technology in mental health care, as highlighted in a number of recent reports [1, 26, 27]. Effective treatments may be standalone web-based or app-based technologies, and clinicians obviously need to be aware of their effectiveness and of the groups of individuals for whom they are appropriate. Other digital technologies may be blended with traditional care and, if so, clinicians need to be trained to integrate technology effectively into their work.

4. Regulators and governments must recognize that technology in health is disruptive. It will take time to transform the way we work, and the hype used to attract investors or an audience will need some regulation. The Federal Trade Commission has stepped in to remove misleading advertising, such as that for Lumosity [28], which was marketed as helping people improve their brain power. Clinician-researchers in digital mental health, especially those not motivated by a specific commercial product, should be involved in developing regulatory guidelines, approving products, and setting forth models of appropriate integration of these products into clinical care.

5. Individuals must be made aware of where and how they can access technologies that may help them manage their mental health. Additionally, there must be plain-language explanations of possible benefits, how these have been tested, and what risks to privacy or health may be involved in using each technology. Individuals are ultimately responsible for the use of such tools, but such use needs to be predicated on developers, clinicians, and systems ensuring that the proper supports exist for successful use.

6. Digital platforms such as the Apple App Store and Google Play have so far taken a libertarian approach to selling apps, but they too will probably come under pressure to adopt ethical principles for selling anything described as a “health or mental health app.” We have described our T4T principles in an earlier paper [4••], and we think platforms should adopt them at the point of download. Platforms also need to consider their ethics for developing and monitoring digital technologies. Social media platforms have now begun to address their responsibility for content on their platforms, and one important consideration is its mental health impact.



The very fact that this is a fast-developing area means we do not know what the problems of the future will be. But we can specify in advance what we expect digital technology to do in healthcare so that design options are transparent. A number of issues need to be considered by everyone in the digital technology field and, in particular, by those in the fast-growing field of digital mental health care and services.
1. Social and cultural issues may be subtle and affect how technology is used in different social strata and different countries. We must be sure that the development and use of digital technology do not accentuate existing digital divides [1, 19, 29].

2. Data manipulation, which includes the use of personally identifiable data but most often concerns algorithms applied to Big Data, can affect the service, the intervention, or the interactions between them. For instance, a non-transparent application of an algorithm might have unintended (or intended) effects on the provision of services by providing more face-to-face time to those likely to recover quickly rather than to harder cases that will absorb more clinic time. This would have the intended consequence of allowing a service to meet targets for efficiency and may even suggest that it was more effective, but neither would be true.

3. Complexity arises because there are multiple producers and users who may never be in contact and, on top of this, there are new uses for old technology that may never have been envisaged by the original producer. This distributes responsibility through a network with no clear understanding of what each node is responsible for. It is like Homer Simpson’s slogan when he ran for Sanitation Commissioner: “Can’t someone else do it?” The assumption that it is someone else’s job has the potential to allow important problems to fall through the net.
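To make the algorithmic scenario above concrete, consider a toy scheduler that greedily maximizes expected recoveries per clinic hour. No one explicitly programs it to exclude harder cases, but that is the result (all numbers invented):

```python
# Toy illustration: every number here is invented.
# (name, probability of recovery, clinician hours needed)
patients = [
    ("A", 0.9, 2),    # likely to recover quickly
    ("B", 0.8, 3),
    ("C", 0.4, 8),    # harder case: needs much more clinic time
    ("D", 0.3, 10),
]
clinic_hours = 10

# Greedy "efficiency" ranking: expected recoveries per hour of clinician time.
ranked = sorted(patients, key=lambda p: p[1] / p[2], reverse=True)

scheduled, remaining = [], clinic_hours
for name, p_recover, hours in ranked:
    if hours <= remaining:
        scheduled.append(name)
        remaining -= hours

# The service looks efficient, yet the hardest cases are never seen.
print(scheduled)  # ['A', 'B']
```

Unless the ranking rule is transparent, the resulting service statistics would look like improved efficiency and effectiveness while those most in need receive nothing.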


In sum, the interdisciplinary nature of the field of digital mental health introduces challenges of colliding ethical traditions and responsibilities. We have identified and discussed some of these issues and emphasized the need for stakeholders to work together to address them. We do not pretend that this will be simple, but we believe that much more can be done now to ensure that ethical guidelines are followed.



Acknowledgments

TW acknowledges the support of the National Institute for Health Research (NIHR) Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King’s College London and support from her NIHR Senior Investigator Award. The views expressed are those of the author and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care.

Compliance with Ethical Standards

Conflict of Interest

Til Wykes declares that she has no conflict of interest.

Jessica Lipshitz reports research funding from Actualize Therapy LLC and personal fees from Pear Therapeutics, Inc.

Stephen M. Schueller is funded by One Mind for overseeing one of their projects, PsyberGuide.

Human and Animal Rights and Informed Consent

This article does not contain any studies with human or animal subjects performed by any of the authors.


References and Recommended Reading

Papers of particular interest, published recently, have been highlighted as: • Of importance •• Of major importance

  1. 1.
    Academy of Medical Sciences. Our data-driven future in healthcare: people and partnerships at the heart of health related technologies. 2018.
  2. 2.
    Keyes CLM. The mental health continuum: from languishing to flourishing in life. J Health Soc Behav. 2002;43(2):207–22.CrossRefPubMedGoogle Scholar
  3. 3.
    American Psychological Association. Ethical principles of psychologists and code of conduct. https://www.apaorg/ethics/code/. 2017.
  4. 4.
    •• Wykes T, Schueller SM. Why reviewing apps is not enough: Transparency for Trust (T4T): the Principles of Responsible Health App Marketplaces. J Med Internet Res. 2019; In Press. Introduces the Transparency for Trust (T4T) Principles suggested to help ethical deployment of health apps within app marketplaces.Google Scholar
  5. 5.
    Pai A. Survey: 58 percent of smartphone users have downloaded a fitness or health app. 2015.
  6. 6.
    Research2Guidance. 325,000 mobile health apps available in 2017 - android now the leading mHealth platform. Available from: 2017.
  7. 7.
    Holmes EA, Ghaderi A, Harmer CJ, Ramchandani PG, Cuijpers P, Morrison AP, et al. The Lancet Psychiatry Commission on psychological treatments research in tomorrow’s science. Lancet Psychiatry. 2018;5(3):237–86.CrossRefPubMedGoogle Scholar
  8. 8.
    • Bhuyan SS, Kim H, Isehunwa OO, Kumar N, Bhatt J, Wyant DK, et al. Privacy and security issues in mobile health: current research and future directions. Health Policy Technol. 2017;6(2):188–91. A solid review of privacy and security issues in mobile health and consideration of issues relevant to different stakeholder groups. CrossRefGoogle Scholar
  9. 9.
    Castell SRL, Ashford H. Future data-driven technologies and the implications for use of patient data: dialogue with public, patients and healthcare professionals. 2018.
  10. 10.
    • Hollis C, Sampson S, Simons L, Davies EB, Churchill R, Betton V, et al. Identifying research priorities for digital technology in mental health care: results of the James Lind Alliance priority setting partnership. Lancet Psychiatry. 2018;5(10):845–54. A stakeholder-driven process to identify priorities in digital mental health research highlighting concerns such as safety and efficacy as a primary concern. CrossRefPubMedGoogle Scholar
  11. 11.
    Federal Trade Commission. Interactive Mobile Health apps tool: developing a mobile health app? 2016.
  12. 12.
    • Simblett S, Greer B, Matcham F, Curtis H, Polhemus A, Ferrao J, et al. Barriers to and facilitators of engagement with remote measurement technology for managing health: systematic review and content analysis of findingsJ Med Internet Res. 2018;20(7). A systematic review that uncovered major barriers including perceived utility and value and demonstrate that many studies were short with inconsistent reporting of usage. Google Scholar
  13. Campbell ANC. Internet-delivered treatment for substance abuse: a multisite randomized controlled trial (erratum for vol 171, pg 683, 2014). Am J Psychiatry. 2014;171(12):1338.
  14. Oulasvirta A, Tamminen S, Roto V, Kuorelahti J. Interaction in 4-second bursts: the fragmented nature of attentional resources in mobile HCI. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’05). ACM; 2005.
  15. Larsen ME, Nicholas J, Christensen H. Quantifying app store dynamics: longitudinal tracking of mental health apps. JMIR Mhealth Uhealth. 2016;4(3):e96.
  16. PsyberGuide. Looking for a mental health app? 2018.
  17. ORCHA. Unlocking the power of digital health for the population. 2018.
  18. • Torous JB, Chan SR, Gipson SYMT, Kim JW, Nguyen T, Luo J, et al. A hierarchical framework for evaluation and informed decision making regarding smartphone apps for clinical care. Psychiatr Serv. 2018;69(5):498–500. Presents the app evaluation framework supported by the American Psychiatric Association.
  19. Robotham D, Mayhew M, Rose D, Wykes T. Electronic personal health records for people with severe mental illness; a feasibility study. BMC Psychiatry. 2015;15:192.
  20. World Wide Web Foundation. Three challenges for the web, according to its inventor. 2017.
  21. Simblett S, Matcham F, Siddi S, Bulgari V, di San Pietro CB, Lopez JH, et al. Barriers to and facilitators of engagement with mHealth technology for remote measurement and management of depression: qualitative analysis. JMIR Mhealth Uhealth. 2019;7(1):e11325.
  22. Ennis L, Rose D, Callard F, Denis M, Wykes T. Rapid progress or lengthy process? Electronic personal health records in mental health. BMC Psychiatry. 2011;11.
  23. Ennis L, Robotham D, Denis M, Pandit N, Newton D, Rose D, et al. Collaborative development of an electronic personal health record for people with severe and enduring mental health problems. BMC Psychiatry. 2014;14:305.
  24. Wisniewski H, Liu G, Henson P, Vaidyam A, Hajratalli NK, Onnela JP, et al. Understanding the quality, effectiveness and attributes of top-rated smartphone health apps. Evid Based Ment Health. 2019;22(1):4–9.
  25. Drake G, Csipke E, Wykes T. Assessing your mood online: acceptability and use of Moodscope. Psychol Med. 2013;43(7):1455–64.
  26. The Royal Society. Machine learning: the power and promise of computers that learn by example. royalsociety.org/machine-learning; 2017.
  27. NHS Health Education England. Topol review: exploring how to prepare the healthcare workforce, through education and training, to deliver the digital future. 2019.
  28. Federal Trade Commission. Lumosity to pay $2 million to settle FTC deceptive advertising charges for its “Brain Training” program. 2016.
  29. Ennis L, Rose D, Denis M, Pandit N, Wykes T. Can’t surf, won’t surf: the digital divide in mental health. J Ment Health. 2012;21(4):395–403.

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Til Wykes (1) — corresponding author
  • Jessica Lipshitz (2, 3)
  • Stephen M. Schueller (4)

  1. Institute of Psychiatry, Psychology and Neuroscience, King’s College London and South London and Maudsley NHS Foundation Trust, London, UK
  2. Department of Psychiatry, Brigham and Women’s Hospital, Boston, USA
  3. Department of Psychiatry, Harvard Medical School, Boston, USA
  4. Department of Psychological Science, University of California Irvine, Irvine, USA