
Surrogates and Artificial Intelligence: Why AI Trumps Family

Original Research/Scholarship. Published in Science and Engineering Ethics.

Abstract

The increasing accuracy of algorithms to predict values and preferences raises the possibility that artificial intelligence technology will be able to serve as a surrogate decision-maker for incapacitated patients. Following Camillo Lamanna and Lauren Byrne, we call this technology the autonomy algorithm (AA). Such an algorithm would mine medical research, health records, and social media data to predict patient treatment preferences. The possibility of developing the AA raises the ethical question of whether the AA or a relative ought to serve as surrogate decision-maker in cases where the patient has not issued a medical power of attorney. We argue that in such cases, and against the standard practice of vesting familial surrogates with decision-making authority, the AA should have sole decision-making authority. This is because the AA will likely be better at predicting what treatment option the patient would have chosen. It would also be better at avoiding bias and, therefore, choosing in a more patient-centered manner. Furthermore, we argue that these considerations override any moral weight of the patient’s special relationship with their relatives.


Notes

  1. Although we believe that the AA should have sole decision-making authority, we also believe that this authority is contingent on regulatory oversight. Moreover, we believe that the actual process of implementing the practice of AA deferral should proceed gradually. We discuss these issues later in our article.

  2. Other types of incapacitated patients include the intoxicated, the mentally ill, seniors with dementia, and patients with little to no consciousness.

  3. For non-capacitated persons who never previously had decision-making capacity, the AA would be unable to apply the substituted judgment standard. The same, of course, is true of human surrogates.

  4. Many states designate a priority order of surrogate decision-makers (typically with spouses given top priority, parents second, adult children third, etc.) in the absence of an advance directive, and direct that such surrogates make decisions based on these principles. However, these principles apply even when the patient has established an advance directive, since advance directives cannot specify a course of action for every possible scenario. As Allen Buchanan and Dan Brock state, “[s]ince instructional advance directives can neither cover every contingency nor be fully self-explanatory, someone must be identified as having principal responsibility for interpreting and applying the instructional advance directive as choices arise” (1989, p. 135).

  5. The conventional view is that the SJP ensures that the patient’s autonomy is respected. Others argue that the principle is responding to the patient’s authenticity or dignity (Brudney 2009). It makes no difference for our argument what the moral basis of the SJP is. All that matters for our purposes is that the principle itself is correct.

  6. See Allen Buchanan and Dan Brock (1989, pp. 31–34).

  7. We say primary because the BIS is the other, secondary, ground for appointing surrogates. Again, our focus is only on the SJP.

  8. Of course, the rationale for appointing intimates is also partly practical: intimates are usually more readily available to responsibly make choices for the patient.

  9. See “Algorithm Appointed Board Director,” BBC News, May 16, 2014, www.bbc.com/news/technology-27426942; and Sophie Brown, “Could Computers Take Over the Boardroom?,” CNN Business, October 1, 2014, www.cnn.com/2014/09/30/business/computers-ceo-boardroom-robot-boss/index.html.

  10. https://www.wired.co.uk/article/ibm-watson-medical-doctor (accessed 5/9/2019).

  11. We thank an anonymous reviewer for pointing out this study.

  12. Relatedly, another study showed that mining digital footprint data “can effectively predict consumers’ decision-making styles” (Chen et al. 2019).

  13. We thank three anonymous reviewers for pressing us to address this issue.

  14. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

  15. Relatedly, the algorithm should be as transparent as possible. For example, it would be reasonable for family members to inquire about how the AA made its recommendation. Families who disagree with the AA’s recommendation would need to provide evidence for their view. Moreover, our proposal to use the AA over families is consistent with employing the AA in this fashion only after an extensive period of public commentary and involvement. Transparency is also important for building public trust in the AA. While deep learning systems are notorious for being ‘black boxes,’ work is being done on making their processes explainable (Montavon et al. 2018). We thank an anonymous reviewer for raising these issues.
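The transparency requirement in note 15 can be illustrated with a minimal sketch. This is purely our illustration, not the authors' or Lamanna and Byrne's system: the model form (a hand-weighted linear score), the feature names, and the weights are all fabricated for exposition. The point is only that an interpretable model can return, alongside its recommendation, a per-feature breakdown that a family could interrogate.

```python
# Hypothetical sketch of an interpretable preference predictor.
# A linear model's score decomposes exactly into per-feature
# contributions, so the "why" of a recommendation is inspectable.

def predict_with_explanation(features, weights, bias=0.0):
    """Return (score, contributions) for a linear preference model.

    features: dict mapping feature name -> value (e.g., 0/1
              indicators mined from records or social media).
    weights:  dict mapping feature name -> weight (assumed to have
              been learned elsewhere; fabricated here).
    """
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Illustrative (entirely fabricated) patient features and weights:
patient = {"expressed_dnr_views": 1,
           "values_independence": 1,
           "religious_objection_to_withdrawal": 0}
w = {"expressed_dnr_views": 2.0,
     "values_independence": 1.5,
     "religious_objection_to_withdrawal": -3.0}

score, why = predict_with_explanation(patient, w, bias=-1.0)
# A positive score would stand for predicting that the patient
# declines aggressive life-sustaining treatment; `why` lists which
# factors drove the score, which is what a family could contest.
```

A deep learning AA would not decompose this cleanly, which is why post hoc interpretation methods of the kind surveyed by Montavon et al. (2018) matter for the proposal.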

References

  • Azucar, D., Marengo, D., & Settanni, M. (2018). Predicting the Big 5 personality traits from digital footprints on social media: A meta-analysis. Personality and Individual Differences, 124, 150–159.

  • Back, M. D., Stopfer, J. M., Vazire, S., Gaddis, S., Schmukle, S. C., Egloff, B., et al. (2010). Facebook profiles reflect actual personality, not self-idealization. Psychological Science, 21(3), 372–374.

  • Barrio-Cantalejo, I. M., Molina-Ruiz, A., Simón-Lorda, P., Cámara-Medina, C., Toral Lopez, I., del Mar Rodriguez del Aguila, M., et al. (2009). Advance directives and proxies’ predictions about patients’ treatment preferences. Nursing Ethics, 16(1), 93–109.

  • Beauchamp, T. L., & Childress, J. F. (2001). Principles of biomedical ethics. New York: Oxford University Press.

  • Berg, J. (2012). Surrogate decision making in the internet age. The American Journal of Bioethics, 12(10), 28–33.

  • Brudney, D. (2009). Beyond autonomy and best interests. Hastings Center Report, 39(2), 31–37.

  • Buchanan, A. E., & Brock, D. W. (1989). Deciding for others: The ethics of surrogate decision making. Cambridge: Cambridge University Press.

  • Chen, Y. J., Chen, Y. M., Hsu, Y. J., & Wu, J. H. (2019). Predicting consumers’ decision-making styles by analyzing digital footprints on Facebook. International Journal of Information Technology and Decision Making, 18(2), 601–627.

  • Chen, J., Hsieh, G., Mahmud, J. U., & Nichols, J. (2014, February). Understanding individuals’ personal values from social media word use. In Proceedings of the 17th ACM conference on Computer supported cooperative work and social computing (pp. 405–414).

  • Courtland, R. (2018). Bias detectives: The researchers striving to make algorithms fair. Nature, 558(7710), 357.

  • Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118.

  • Golbeck, J., Robles, C., & Turner, K. (2011). Predicting personality with social media. In CHI’11 extended abstracts on human factors in computing systems (pp. 253–262).

  • Gou, L., Zhou, M. X., & Yang, H. (2014, April). KnowMe and ShareMe: understanding automatically discovered personality traits from social media and user sharing preferences. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 955–964).

  • Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., et al. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA, 316(22), 2402–2410.

  • Houts, R. M., Smucker, W. D., Jacobson, J. A., Ditto, P. H., & Danks, J. H. (2002). Predicting elderly outpatients’ life-sustaining treatment preferences over time: The majority rules. Medical Decision Making, 22(1), 39–52.

  • Jent, J. F., Eaton, C. K., Merrick, M. T., Englebert, N. E., Dandes, S. K., Chapman, A. V., et al. (2011). The decision to access patient information from a social media site: What would you do? Journal of Adolescent Health, 49(4), 414–420.

  • Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15), 5802–5805.

  • Lamanna, C., & Byrne, L. (2018). Should artificial intelligence augment medical decision making? The case for an autonomy algorithm. AMA Journal of Ethics, 20(9), 902–910.

  • Lattie, E. G., Asvat, Y., Shivpuri, S., Gerhart, J., O’Mahony, S., Duberstein, P., et al. (2016). Associations between personality and end-of-life care preferences among men with prostate cancer: A clustering approach. Journal of Pain and Symptom Management, 51(1), 52–59.

  • Marengo, D., & Settanni, M. (2019). Mining facebook data for personality prediction: An overview. In H. Baumeister & C. Montag (Eds.), Digital phenotyping and mobile sensing (pp. 109–124). New York: Springer.

  • Marks, M. A., & Arkes, H. R. (2008). Patient and surrogate disagreement in end-of-life decisions: Can surrogates accurately predict patients’ preferences? Medical Decision Making, 28(4), 524–531.

  • Montavon, G., Samek, W., & Müller, K. R. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73, 1–15.

  • Pope, T. M. (2009). Surrogate selection: An increasingly viable, but limited, solution to intractable futility disputes. Saint Louis University Journal of Health Law and Policy, 3(2), 183–252.

  • Rid, A., & Wendler, D. (2014). Use of a patient preference predictor to help make medical decisions for incapacitated patients. Journal of Medicine and Philosophy, 39(2), 104–129.

  • Shalowitz, D. I., Garrett-Mayer, E., & Wendler, D. (2006). The accuracy of surrogate decision makers: A systematic review. Archives of Internal Medicine, 166(5), 493–497.

  • Shalowitz, D. I., Garrett-Mayer, E., & Wendler, D. (2007). How should treatment decisions be made for incapacitated patients, and why? PLOS Medicine, 4(3), e35. https://doi.org/10.1371/journal.pmed.0040035.

  • Siddiqui, S., & Chuan, V. T. (2018). In the patient’s best interest: Appraising social network site information for surrogate decision making. Journal of Medical Ethics, 44(12), 851–856.

  • Smucker, W. D., Houts, R. M., Danks, J. H., Ditto, P. H., Fagerlin, A., & Coppola, K. M. (2000). Modal preferences predict elderly patients’ life-sustaining treatment choices as well as patients’ chosen surrogates do. Medical Decision Making, 20(3), 271–280.

  • Turkle, S. (1997). Life on the screen: Identity in the age of the internet. New York: Touchstone.

  • Vazire, S., & Gosling, S. D. (2004). e-Perceptions: Personality impressions based on personal websites. Journal of Personality and Social Psychology, 87(1), 123.

  • Warshaw, J., Matthews, T., Whittaker, S., Kau, C., Bengualid, M., & Smith, B. (2015). In CHI’15: Proceedings of the 33rd annual ACM conference on human factors in computing systems (pp. 797–806).

  • Wendler, D., Wesley, B., Pavlick, M., & Rid, A. (2016). A new method for making treatment decisions for incapacitated patients: What do patients think about the use of a patient preference predictor? Journal of Medical Ethics, 42(4), 235–241.

  • Yarkoni, T. (2010). Personality in 100,000 words: A large-scale analysis of personality and word use among bloggers. Journal of Research in Personality, 44(3), 363–373.

  • Youyou, W., Kosinski, M., & Stillwell, D. (2015). Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences, 112(4), 1036–1040.

  • Yu, K. H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2, 719–731.

Author information

Correspondence to Ryan Hubbard.


About this article


Cite this article

Hubbard, R., Greenblum, J. Surrogates and Artificial Intelligence: Why AI Trumps Family. Sci Eng Ethics 26, 3217–3227 (2020). https://doi.org/10.1007/s11948-020-00266-6
