
Military Medical Enhancement and Autonomous AI Systems: Requirements, Implications, Concerns

Ethics of Medical Innovation, Experimentation, and Enhancement in Military and Humanitarian Contexts

Part of the book series: Military and Humanitarian Health Ethics ((MHHE))


Abstract

Inspired by recent developments in autonomous artificial intelligence (AI) systems in military and medical applications, I envision the use of one such system, an AI-empowered exoskeleton smart-suit called the Praetor Suit, to examine the important ethical issues stemming from its use. The Praetor Suit would be able to monitor the service member’s physiological and psychological state, report that state to medical experts supervising its operation through teleoperation, and autonomously administer medical treatments based upon its evaluations. In doing so, it would effectively enhance the user’s operational capacity in the military mission field. The important ethical issues stemming from the suit’s Monitor, Evaluation, and Administration AI modules primarily concern data privacy, human and AI autonomy, transparent automation processes and automation biases, and AI explainability and trust. Lastly, in light of the medical automation worry, a positive portrayal of military human-AI partnership is given through the framework of human-AI symbiosis, to ask whether the introduction of the Praetor Suit inadvertently changes the military role of the future medic from a purely non-combatant one to a hybrid or combatant one.


Notes

  1.

    The Russian military is rushing to develop its own third-generation Ratnik 3 suit, which aims to integrate life support, enhanced communication, protection, and movement-enhancement systems into a single smart warrior suit (Ritsick 2018).

  2.

    The Praetor Suit is the armored suit worn by the Doom Slayer character in the popular game Doom (2016). The suit is given to the player at the very beginning and is worn for the entirety of the game. The Praetor Suit covers the Doom Marine’s whole body, including his arms. The suit is described as being made from nearly impenetrable material and may be responsible for what appear to be the Doom Marine’s superhuman abilities (Praetor Suit 2018).

  3.

    The possibility of medical error cannot be excluded from the medical profession, as errors in diagnosis and treatment are, unfortunately, a fact of medical life. Both humans and AI can err in medical work, even though the reasons why they err may differ, and differ drastically. For instance, even though an AI may never tire or become emotionally upset and may operate constantly and optimally without rest, it has, at least for now, no capacity to improvise or adapt to unforeseen situations when the success of a medical operation requires it.

  4.

    I am inspired by Klein’s patient-machine-medic relation (Klein 2015), with the noted difference that I purposefully avoid the term “machine” when describing human-AI relations, as this term, engrained in popular Western culture, carries discomforting connotations that tend to negatively affect attitudes towards the formation of human-AI relations (Coeckelbergh 2014).

  5.

    AI bias is recognized by leading institutions and experts (AI and bias – IBM Research – US 2018; Campolo et al. 2017) as one of the two biggest issues (the other being the “black box”, or explainable AI, problem) hindering the further development and implementation of autonomous AI systems in the foreseeable future.

  6.

    “In other words, commanders do have the legal right to require service members to undergo certain medical procedures such as vaccinations and periodic medical examinations for fitness of duty.” (McManus et al. 2005, 1124)

  7.

    Such pre-mission consent could be obtained transparently and fully if, for instance, the suit’s operation were field tested either in virtual space (through VR simulation) or in real space. Such tests would not only psychologically accustom the user to the suit’s use but would also fulfill the ethical necessity of informing the user about the suit’s beneficial operational capacities and the possible harmful consequences of its use.

  8.

    Although there are different types of system failures that could produce such harmful consequences, it is paramount that they do not result from incompetent design or system administration leading to vulnerabilities or failures. Unfortunately, there will always exist rare and unpredictable high-impact events, “Black Swans” (Taleb and Chandler 2007), which can never be fully excluded even with the best possible system design.

  9.

    This is especially important if the suit has the capacity to administer or provide the user with stimulants or other enhancement drugs, such as “stimulants to attenuate the effects of sleep loss (often referred to as ‘go pills’) and hypnotics to improve the ability to sleep (‘no-go pills’)” (Liivoja 2017, 5). It is paramount that such (combat) enhancement is not done autonomously, without the user’s knowledge. In this regard one might remember HAL 9000 from 2001: A Space Odyssey, where HAL’s secret decision in favor of the mission’s success results in an ethical disaster and the loss of human life.

  10.

    Such a scenario also raises the question: are AIs then included in the (medical) chain of command?

  11.

    This could be achieved by the suit taking over or restricting movement autonomy and removing the service member from harm’s way (or preventing them from inflicting harm on themselves or others), or, if necessary, by administering simple sedatives.

  12.

    In gaming worlds, the “DPS” abbreviation is a colloquial term used to designate a wide variety of player classes, playing styles, or character builds aimed at a single specific goal: to deal as much damage as possible to the enemy. As such, players who build their characters to become damage dealers usually forego all other character traits, for instance strong defense, in order to ensure their character’s maximum offensive power.

  13.

    It is often the case that players have to pass through the same mission more than once, as combat usually occurs in stages, with different enemy behavior or environment rules for each separate stage. For this reason, experienced medic players are highly sought after, especially by inexperienced, or first-time, players venturing into the same mission.

  14.

    In this regard, I agree with Liivoja’s (2017) lucid analysis of why medical personnel engaged in the biomedical enhancement of soldiers would suffer a “loss of special protection that such personnel and units enjoy” (Liivoja 2017, 27). Still, I hold that the prospect of AI-empowered enhancement has the potential to generate more fundamental changes to the role and purpose of the military medic than those exemplified in cases of biomedical enhancement.

References

  • Acemoglu, Daron, and Pascual Restrepo. 2018. Artificial intelligence, automation and work. Cambridge, MA: National Bureau of Economic Research. https://doi.org/10.2139/ssrn.3098384.

  • AI and bias – IBM Research – US. 2018. Research.ibm.com

  • Agrawal, Ajay, Joshua S. Gans, and Avi Goldfarb. 2018. Exploring the impact of artificial intelligence: Prediction versus judgment. Rotman School of Management. https://doi.org/10.2139/ssrn.3177467.

  • Angwin, Julia, and Surya Mattu. 2018. Machine bias. ProPublica.

  • Azevedo, Carlos R.B., Klaus Raizer, and Ricardo Souza. 2017. A vision for human-machine mutual understanding, trust establishment, and collaboration. 2017 IEEE conference on cognitive and computational aspects of situation management, CogSIMA 2017, 9–11. https://doi.org/10.1109/COGSIMA.2017.7929606.

  • Balkin, Jack M. 2017. Free speech in the algorithmic society: Big data, private governance, and new school speech regulation. SSRN Electronic Journal. Elsevier BV. https://doi.org/10.2139/ssrn.3038939.

  • Bennett, C.C., and K. Hauser. 2013. Artificial intelligence framework for simulating clinical decision-making: A Markov decision process approach. Artificial Intelligence in Medicine 57: 9–19. https://doi.org/10.1016/j.artmed.2012.12.003.


  • Brynjolfsson, Erik, and Tom Mitchell. 2017. What can machine learning do? Workforce implications: Profound change is coming, but roles for humans remain. Science 358: 1530–1534. https://doi.org/10.1126/science.aap8062.


  • Casner, Stephen M., Edwin L. Hutchins, and Don Norman. 2016. The challenges of partially automated driving. Communications of the ACM 59: 70–77. https://doi.org/10.1145/2830565.


  • Coeckelbergh, Mark. 2014. The moral standing of machines: Towards a relational and non-Cartesian moral hermeneutics. Philosophy and Technology 27: 61–77. https://doi.org/10.1007/s13347-013-0133-8.


  • Cohn, Jeffrey F., Tomas Simon Kruez, Iain Matthews, Ying Yang, Minh Hoai Nguyen, Margara Tejera Padilla, Feng Zhou, and Fernando De La Torre. 2009. Detecting depression from facial actions and vocal prosody. Proceedings – 2009 3rd international conference on affective computing and intelligent interaction and workshops, ACII 2009. https://doi.org/10.1109/ACII.2009.5349358.

  • Campolo, Alex, Madelyn Sanfilippo, Meredith Whittaker, and Kate Crawford. 2017. AI Now 2017 Report. https://ainowinstitute.org/AI_Now_2017_Report.pdf.

  • Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. 2014. Algorithm aversion: People erroneously avoid algorithms after seeing them err. SSRN Electronic Journal. Elsevier BV. https://doi.org/10.2139/ssrn.2466040.

  • EU Declaration on Cooperation on Artificial Intelligence – JRC Science Hub Communities – European Commission. 2018. JRC Science Hub Communities.


  • Fischer, Alastair J., and Gemma Ghelardi. 2016. The precautionary principle, evidence-based medicine, and decision theory in public health evaluation. Frontiers in Public Health 4: 1–7. https://doi.org/10.3389/fpubh.2016.00107.


  • Gao, Wei, Sam Emaminejad, Hnin Yin Yin Nyein, Samyuktha Challa, Kevin Chen, Austin Peck, Hossain M. Fahad, Hiroki Ota, Hiroshi Shiraki, Daisuke Kiriya, Der-Hsien Lien, George A. Brooks, Ronald W. Davis, and Ali Javey. 2016. Fully integrated wearable sensor arrays for multiplexed in situ perspiration analysis. Nature 529 (7587): 509–514. https://doi.org/10.1038/nature16521.


  • Gunning, David. 2017. Explainable artificial intelligence (xai). Defense Advanced Research Projects Agency (DARPA).


  • Hengstler, Monika, Ellen Enkel, and Selina Duelli. 2016. Applied artificial intelligence and trust – The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change 105: 105–120. https://doi.org/10.1016/j.techfore.2015.12.014.

  • Lawless, W.F., Ranjeev Mittu, Stephen Russell, and Donald Sofge. 2017. Autonomy and artificial intelligence: A threat or savior? Cham: Springer. https://doi.org/10.1007/978-3-319-59719-5.


  • Lee, Chien-Cheng, Cheng-Yuan Shih, Wen-Ping Lai, and Po-Chiang Lin. 2012. An improved boosting algorithm and its application to facial emotion recognition. Journal of Ambient Intelligence and Humanized Computing 3: 11–17. https://doi.org/10.1007/s12652-011-0085-8.


  • Liivoja, Rain. 2017. Biomedical enhancement of warfighters and the legal protection of military medical personnel in armed conflict. Medical Law Review 0: 1–28. https://doi.org/10.1093/medlaw/fwx046.


  • Kalantar-zadeh, Kourosh, Nam Ha, Jian Zhen Ou, and Kyle J. Berean. 2017. Ingestible Sensors. ACS Sensors 2 (4): 468–483. https://doi.org/10.1021/acssensors.7b00045.


  • Klein, Eran. 2015. Models of the Patient-Machine-Clinician Relationship in Closed-Loop Machine Neuromodulation. In Machine Medical Ethics. Intelligent Systems, Control and Automation: Science and Engineering, ed. S. van Rysewyk and M. Pontier, vol. 74. Cham: Springer. https://doi.org/10.1007/978-3-319-08108-3_17.


  • McManus, John, Sumeru G. Mehta, Annette R. McClinton, Robert A. De Lorenzo, and Toney W. Baskin. 2005. Informed consent and ethical issues in military medical research. Academic Emergency Medicine 12: 1120–1126. https://doi.org/10.1197/j.aem.2005.05.037.


  • Neumann, Peter G. 2016. Risks of automation. Communications of the ACM 59: 26–30. Association for Computing Machinery (ACM). https://doi.org/10.1145/2988445.


  • Ng, Andrew. 2017. Andrew Ng: Artificial intelligence is the new electricity. YouTube.


  • Praetor Suit. 2018. Doom Wiki.


  • Rajpurkar, Pranav, Jeremy Irvin, Robyn L. Ball, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, et al. 2018. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Medicine 15 (11): e1002686. https://doi.org/10.1371/journal.pmed.1002686.

  • Reidsma, D., V. Charisi, D.P. Davison, F.M. Wijnen, J. van der Meij, V. Evers, et al. 2016. The EASEL project: Towards educational human-robot symbiotic interaction. In Proceedings of the 5th international conference on living machines, Lecture notes in computer science; Vol. 9793, ed. N.F. Lepora, A. Mura, M. Mangan, P.F.M.J. Verschure, M. Desmulliez, and T.J. Prescott, 297–306. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-42417-0_27.


  • Ritsick, Colin. 2018. Ratnik 3 – Russian Combat Suit | Future infantry exoskeleton combat system. Military Machine.


  • Rosenthal, Stephanie, J. Biswas, and M. Veloso. 2010. An effective personal mobile robot agent through symbiotic human-robot interaction. Proceedings of the 9th international conference on autonomous agents and multiagent systems (AAMAS 2010), 915–922.


  • Ross, Casey, and Ike Swetlitz. 2017. IBM pitched Watson as a revolution in cancer care. It’s nowhere close. STAT.


  • Sim, Jai Kyoung, Sunghyun Yoon, and Young-Ho Cho. 2018. Wearable sweat rate sensors for human thermal comfort monitoring. Scientific Reports, 8. Springer Nature. https://doi.org/10.1038/s41598-018-19239-8.

  • Taleb, Nassim, and David Chandler. 2007. The black swan. Rearsby: W.F. Howes.


  • Tavani, Herman T. 2015. Levels of trust in the context of machine ethics. Philosophy and Technology 28: 75–90. https://doi.org/10.1007/s13347-014-0165-8.


  • Taylor, Earl L. 2017. Making sense of “algorithm aversion”. Research World 2017: 57. Wiley. https://doi.org/10.1002/rwm3.20528.


  • The United States Special Operations Command. 2018. SOFIC 2018 conference program & exhibits guide 2018. In 2018 United States special operations forces industry conference (SOFIC) and exhibition. https://www.sofic.org/-media/sites/sofic/documents/sofic_2018_final-low-res3.ashx.

  • Warrick, Philip A., and Masun Nabhan Homsi. 2018. Ensembling convolutional and long short-term memory networks for electrocardiogram arrhythmia detection. Physiological Measurement. IOP Publishing. https://doi.org/10.1088/1361-6579/aad386.

  • White, Stephen. 2008. Brave new world: Neurowarfare and the limits of international humanitarian law. Cornell International Law Journal 41 (1): 177–210.


  • Wortham, Robert H., Andreas Theodorou, and Joanna J. Bryson. 2016. What does the robot think? Transparency as a fundamental design requirement for intelligent systems. IJCAI-2016 ethics for artificial intelligence workshop.




Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Miletić, T. (2020). Military Medical Enhancement and Autonomous AI Systems: Requirements, Implications, Concerns. In: Messelken, D., Winkler, D. (eds) Ethics of Medical Innovation, Experimentation, and Enhancement in Military and Humanitarian Contexts. Military and Humanitarian Health Ethics. Springer, Cham. https://doi.org/10.1007/978-3-030-36319-2_11
