Abstract
Over the past decade, artificial intelligence (AI) has begun to transform Canadian organizations, driven by the promise of improved efficiency, better decision-making, and enhanced client experience. While AI holds great opportunities, there are also near-term impacts on the determinants of health and population health equity that are already emerging. If adoption is unregulated, there is a substantial risk that health inequities could be exacerbated through intended or unintended biases embedded in AI systems. New economic opportunities could be disproportionately leveraged by already privileged workers and owners of AI systems, reinforcing prevailing power dynamics. AI could also detrimentally affect population well-being by replacing human interactions rather than fostering social connectedness. Furthermore, AI-powered health misinformation could undermine effective public health communication. To respond to these challenges, public health must assess and report on the health equity impacts of AI, inform implementation to reduce health inequities, and facilitate intersectoral partnerships to foster development of policies and regulatory frameworks to mitigate risks. This commentary highlights AI’s near-term risks for population health to inform a public health response.
Résumé
Over the past decade, artificial intelligence (AI) has begun to transform Canadian organizations by promising greater efficiency, better decision-making processes, and an enriched client experience. While it holds immense possibilities, AI will have near-term effects, already being felt, on the determinants of health and on population health equity. If its adoption is not regulated, health inequities may well continue to be exacerbated by biases, intentional or not, embedded in AI systems. New economic opportunities could be disproportionately exploited by already privileged workers and by the owners of AI systems, thereby reinforcing existing power dynamics. AI could also harm population well-being by replacing human interactions instead of fostering social connectedness. Moreover, AI-driven health misinformation could reduce the effectiveness of public health messaging. To meet these challenges, public health will need to assess and communicate the effects of AI on health equity, moderate its implementation to reduce health inequities, and facilitate intersectoral partnerships to inform the development of policies and regulatory frameworks that mitigate risks. This commentary highlights the near-term risks of AI for population health in order to inform the public health response.
Data availability
Not applicable
Code availability
Not applicable
Ethics declarations
Ethics approval
Not applicable
Consent to participate
Not applicable
Consent for publication
Not applicable
Conflict of interest
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Kamyabi, A., Iyamu, I., Saini, M. et al. Advocating for population health: The role of public health practitioners in the age of artificial intelligence. Can J Public Health (2024). https://doi.org/10.17269/s41997-024-00881-x