
Potential and pitfalls of conversational agents in health care

  • Comment

From Nature Reviews Disease Primers


Conversational agents (CAs) are computer programs designed to engage in human-like conversations with users. They are increasingly used in digital health applications, for example for medical history taking. CAs have the potential to facilitate health-care processes when they are carefully designed, account for quality aspects and are integrated into health-care processes.
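As a concrete illustration (not drawn from this Comment or any specific product), a deliberately simple, rule-based history-taking agent might follow a fixed question script and hand over to a clinician when safety-relevant answers appear. The question texts, field names and red-flag rule in the Python sketch below are hypothetical examples.

```python
# Illustrative sketch only: a minimal scripted conversational agent for
# structured medical history taking. All questions, field names and the
# red-flag escalation rule are hypothetical, for illustration.

HISTORY_QUESTIONS = [
    ("chief_complaint", "What brings you in today?"),
    ("symptom_duration", "How long have you had these symptoms?"),
    ("medications", "Are you currently taking any medications?"),
    ("allergies", "Do you have any known allergies?"),
]

# Hypothetical phrases that should trigger escalation to a human clinician.
RED_FLAGS = {"chest pain", "shortness of breath"}


def take_history() -> dict:
    """Run the fixed question script and collect free-text answers."""
    record = {}
    for field, question in HISTORY_QUESTIONS:
        answer = input(f"Agent: {question}\nPatient: ").strip()
        record[field] = answer
        # Safety check: stop the automated dialogue and refer to a clinician
        # as soon as a red-flag phrase is mentioned.
        if any(flag in answer.lower() for flag in RED_FLAGS):
            record["escalated"] = True
            print("Agent: Thank you. A clinician will review your answers right away.")
            break
    return record


if __name__ == "__main__":
    summary = take_history()
    print("Collected history:", summary)
```

Even this toy example surfaces the design questions the Comment raises: which answers the agent may handle on its own, when it must escalate, and how the collected record is passed into the clinical workflow.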


Fig. 1: Factors in safe health-care conversational agents.


Author information

Correspondence to Kerstin Denecke.

Ethics declarations

Competing interests

The author declares no competing interests.


About this article


Cite this article

Denecke, K. Potential and pitfalls of conversational agents in health care. Nat Rev Dis Primers 9, 66 (2023). https://doi.org/10.1038/s41572-023-00482-x
