Beauchamp, T. L., & Childress, J. F. (2019). Principles of biomedical ethics. Oxford University Press.
Benjamens, S., Dhunnoo, P., & Meskó, B. (2020). The state of artificial intelligence-based FDA-approved medical devices and algorithms: An online database. NPJ Digital Medicine, 3(1), 1–8. https://doi.org/10.1038/s41746-020-00324-0
Bickler, P. E., Feiner, J. R., & Severinghaus, J. W. (2005). Effects of skin pigmentation on pulse oximeter accuracy at low saturation. Anesthesiology, 102(4), 715–719.
Biddle, J. (2016). Inductive risk, epistemic risk, and overdiagnosis of disease. Perspectives on Science, 24(2), 192–205. https://doi.org/10.1162/POSC_a_00200
Biddle, J. (2020). Epistemic risks in cancer screening: Implications for ethics and policy. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 79, 101200. https://doi.org/10.1016/j.shpsc.2019.101200
Biddle, J. B., & Kukla, R. (2017). The geography of epistemic risk. In K. C. Elliott & T. Richards (Eds.), Exploring inductive risk: Case studies of values in science (pp. 215–237). Oxford University Press.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society. https://doi.org/10.1177/2053951715622512
Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1721–1730).
Creel, K. A. (2020). Transparency in complex computational systems. Philosophy of Science, 87(4), 568–589.
Dotan, R. (2020). Theory choice, non-epistemic values, and machine learning. Synthese. https://doi.org/10.1007/s11229-020-02773-2
Durán, J. M. (2021). Dissecting scientific explanation in AI (sXAI): A case for medicine and healthcare. Artificial Intelligence, 297, 103498. https://doi.org/10.1016/j.artint.2021.103498
Durán, J. M., & Formanek, N. (2018). Grounds for trust: Essential epistemic opacity and computational reliabilism. Minds and Machines, 28(4), 645–666. https://doi.org/10.1007/s11023-018-9481-6
Durán, J. M., & Jongsma, K. R. (2021). Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics, 47(5), 329–335. https://doi.org/10.1136/medethics-2020-106820
Engel, P. J. H. (2008). Tacit knowledge and visual expertise in medical diagnostic reasoning: Implications for medical education. Medical Teacher, 30(7), e184–e188.
Esteva, A., Kuprel, B., Novoa, R., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542, 115–118. https://doi.org/10.1038/nature21056
Friedman, B., & Hendry, D. G. (2019). Value sensitive design: Shaping technology with moral imagination. MIT Press.
Garcia de Jesús, E. (2021). People with rare blood clots after a COVID-19 jab share an uncommon immune response. Science News. Retrieved from https://www.sciencenews.org/article/covid-vaccine-blood-clot-immune-astrazeneca-johnson-johnson
Gaube, S., Suresh, H., Raue, M., Merritt, A., Berkowitz, S. J., Lermer, E., Coughlin, J. F., Guttag, J. V., Colak, E., & Ghassemi, M. (2021). Do as AI say: Susceptibility in deployment of clinical decision-aids. NPJ Digital Medicine, 4(31), 1–8. https://doi.org/10.1038/s41746-021-00385-9
Genin, K., & Grote, T. (2021). Randomized controlled trials in medical AI: A methodological critique. Philosophy of Medicine. https://doi.org/10.5195/philmed.2021.27
Ghassemi, M., Oakden-Rayner, L., & Beam, A. L. (2021). The false hope of current approaches to explainable artificial intelligence in health care. The Lancet Digital Health, 3(11), e745–e750. https://doi.org/10.1016/S2589-7500(21)00208-9
Grote, T., & Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46(3), 205–211. https://doi.org/10.1136/medethics-2019-105586
Heaven, W. D. (2020). Google’s medical AI was super accurate in a lab. Real life was a different story. MIT Technology Review. Retrieved October 22, 2021, from https://www.technologyreview.com/2020/04/27/1000658/google-medical-ai-accurate-lab-real-life-clinic-covid-diabetes-retina-disease/
Heaven, W. D. (2021). Hundreds of AI tools have been built to catch covid. None of them helped. MIT Technology Review. Retrieved October 6, 2021, from https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/
Holzinger, A., Biemann, C., Pattichis, C., & Kell, D. (2017). What do we need to build explainable AI systems for the medical domain? https://arxiv.org/abs/1712.09923
Johnson, G. M. (2020). Algorithmic bias: On the implicit biases of social technology. Synthese. https://doi.org/10.1007/s11229-020-02696-y
Khetpal, V., & Shah, N. (2021). How a largely untested AI algorithm crept into hundreds of hospitals. Fast Company. Retrieved June 17, 2021, from https://www.fastcompany.com/90641343/epic-deterioration-index-algorithm-pandemic-concerns
Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3), 31–57.
London, A. J. (2019). Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1), 15–21. https://doi.org/10.1002/hast.973
Nissenbaum, H. (2011). A contextual approach to privacy online. Daedalus, 140(4), 32–48. https://doi.org/10.1162/DAED_a_00113
Nyrup, R., & Robinson, D. (2022). Explanatory pragmatism: A context-sensitive framework for explainable medical AI. Ethics and Information Technology. https://doi.org/10.1007/s10676-022-09632-3
Polanyi, M. (1958). Personal knowledge. University of Chicago Press.
Price, W. N., II. (2019). Medical AI and contextual bias. Harvard Journal of Law and Technology, 33, 66.
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
Sand, M., Durán, J. M., & Jongsma, K. R. (2022). Responsibility beyond design: Physicians’ requirements for ethical medical AI. Bioethics, 36(2), 162–169. https://doi.org/10.1111/bioe.12887
Sarwar, S., Dent, A., Faust, K., Richer, M., Djuric, U., Van Ommeren, R., & Diamandis, P. (2019). Physician perspectives on integration of artificial intelligence into diagnostic pathology. NPJ Digital Medicine, 2, 28. https://doi.org/10.1038/s41746-019-0106-0
Singh, K., Valley, T. S., Tang, S., Li, B. Y., Kamran, F., Sjoding, M. W., Wiens, J., Otles, E., Donnelly, J. P., Wei, M. Y., McBride, J. P., Cao, J., Penoza, C., Ayanian, J. Z., & Nallamothu, B. K. (2020). Evaluating a widely implemented proprietary deterioration index model among hospitalized covid-19 patients. Annals of the American Thoracic Society. https://doi.org/10.1513/AnnalsATS.202006-698OC
Sjoding, M. W., Dickson, R. P., Iwashyna, T. J., Gay, S. E., & Valley, T. S. (2020). Racial bias in pulse oximetry measurement. New England Journal of Medicine, 383(25), 2477–2478.
Staff. (2021). How FDA regulates artificial intelligence in medical products. Pew Charitable Trusts.
Sullivan, E. (2019). Understanding from machine learning models. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axz035
Umbrello, S., & van de Poel, I. (2021). Mapping value sensitive design onto AI for social good principles. AI and Ethics. https://doi.org/10.1007/s43681-021-00038-3
Zednik, C. (2021). Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy and Technology, 34, 265–288. https://doi.org/10.1007/s13347-019-00382-7
Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Transparency in algorithmic and human decision-making: Is there a double standard? Philosophy and Technology, 32, 661–683. https://doi.org/10.1007/s13347-018-0330-6