
Addressing military AI risks in U.S.–China crisis management mechanisms

  • Original Paper
  • Published in: China International Strategy Review

Abstract

Both the U.S. and Chinese militaries have been committed to investing in emerging technologies, in particular artificial intelligence (AI), to increase effectiveness and efficiency in command and control, weapons systems, semiautonomous and autonomous vehicles, intelligence, surveillance, reconnaissance (ISR), and logistics. Strategic communities in both countries have increasingly cautioned about the potential ramifications of military AI for future crisis dynamics. Building on existing scholarship, this article explores how the growing usage of military AI may impact U.S.–China crisis prevention and management against the backdrop of heightened geopolitical tensions in the Western Pacific.

References

  1. The term “autonomy” refers to the ability of a machine to perform a task without human input. This is conceptually different from two other terms: “automatic” (systems that have very simple, mechanical responses to environmental inputs, e.g., trip wires and mines) and “automated” (more complex, rule-based systems, e.g., self-driving cars and modern programmable thermostats). There is no internationally agreed-upon definition for “autonomous weapon.” This study adopts the definition provided in U.S. DODD 3000.09. An autonomous weapon system is a weapon system that, once activated, can select and engage targets without further intervention by a human operator. This is conceptually different from “human-supervised autonomous weapon system” (an autonomous weapon system designed to provide human operators with the ability to intervene and terminate engagements) and “semi-autonomous weapon system” (a weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator). Paul Scharre and Michael Horowitz, “An Introduction to Autonomy in Weapon Systems,” Project on Ethical Autonomy Working Paper, Center for a New American Security, February 2015, https://s3.us-east-1.amazonaws.com/files.cnas.org/documents/Ethical-Autonomy-Working-Paper_021015_v02.pdf?mtime=20160906082257; United States Department of Defense, Autonomy in Weapon Systems, DOD Directive 3000.09 (Washington, DC: Department of Defense, 2012).

  2. Congressional Research Service, Artificial Intelligence and National Security, R45178 (2020).

  3. United States Department of Defense, Summary of the 2018 Department of Defense Artificial Intelligence Strategy (Washington, DC: Department of Defense, 2019), https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.

  4. All current AI systems fall under the Narrow AI category, which refers to algorithms that address specific tasks. The most prevalent approach to Narrow AI is machine learning, which involves statistical algorithms that replicate human cognitive tasks by deriving their own procedures through analysis of large training data sets. It will take much longer to develop General AI, which refers to more complicated systems capable of human-level intelligence across a broad range of tasks. Congressional Research Service, Artificial Intelligence and National Security.
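
A minimal sketch, for illustration only (not drawn from the cited CRS report; the dataset, model, and parameters below are hypothetical placeholders), of what it means for a Narrow AI system to derive its own decision procedure from training data rather than follow hand-coded rules:

```python
# Editor's illustrative sketch: a narrow, single-task classifier whose decision
# rule is fit from labeled examples ("training data") instead of being programmed.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical labeled data standing in for any task-specific dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # the procedure is derived from data, not hand-coded
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The learned rule is specific to this single task; nothing about it transfers to a broad range of tasks, which is the gap separating Narrow AI from General AI.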

  5. China State Council, “Guowuyuan Guanyu Yinfa Xin Yidai Rengong Zhineng Fazhan Guihua de Tongzhi” 国务院关于印发新一代人工智能发展规划的通知 [China’s Next Generation Artificial Intelligence Development Plan (2017)], July 8, 2017, http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm.

  6. China State Council, “Zhonghua Renmin Gongheguo Guomin Jingji He Shehui Fazhan Di Shisi Ge Wu Nian Guihua He 2035 Nian Yuanjing Mubiao Gangyao” 中华人民共和国国民经济和社会发展第十四个五年规划和2035年远景目标纲要 [Outline of the 14th Five-Year Plan (2021–2025) for National Economic and Social Development and Long-Range Objectives Through the Year 2035], March 13, 2021, http://www.gov.cn/xinwen/2021-03/13/content_5592681.htm.

  7. United States Department of Defense, Military and Security Developments Involving the People’s Republic of China 2021 (Washington, DC: Office of the Secretary of Defense, 2021), https://media.defense.gov/2021/Nov/03/2002885874/-1/-1/0/2021-CMPR-FINAL.PDF.

  8. Gregory C. Allen, Understanding China’s AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security (Washington, DC: Center for a New American Security, 2019), https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy.

  9. United States Department of Defense, Summary of the 2018 Department of Defense Artificial Intelligence Strategy.

  10. United States Department of Defense, Unmanned Systems Integrated Roadmap, 2017–2042 (Washington, DC: Office of the Assistant Secretary of Defense for Acquisition, 2018).

  11. United States Department of Defense, Autonomy in Weapon Systems.

  12. United States Department of Defense, “DOD Adopts Ethical Principles for Artificial Intelligence,” February 24, 2020, https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.

  13. China Ministry of Science and Technology, “Xin yidai rengong zhineng yuanze—fazhan fuze ren de rengong zhineng” 新一代人工智能原则—发展负责任的人工智能。 [Governance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artificial Intelligence], June 17, 2019, http://www.most.gov.cn/kjbgz/201906/t20190617_147107.html.

  14. United States Indo-Pacific Command, “U.S. Indo-Pacific Command Representatives Meet with Chinese Counterparts at Military Maritime Consultative Agreement Working Group,” December 17, 2021, https://www.pacom.mil/Media/News/News-Article-View/Article/2877542/us-indo-pacific-command-representatives-meet-with-chinese-counterparts-at-milit/.

  15. China Ministry of Defense, “Regular Press Conference of the Ministry of National Defense on December 30,” January 7, 2022, http://eng.mod.gov.cn/news/2022-01/07/content_4902674.htm.

  16. For details on existing crisis prevention and management mechanisms, see Tuosheng Zhang, “Strengthening Crisis Management, the Most Urgent Task in Current China-US and China-Japan Security Relations,” China International Strategy Review 3, no. 1 (2021): 34–55, https://doi.org/10.1007/s42533-021-00067-x.

  17. Congressional Research Service, U.S.–China Military Contacts: Issues for Congress, RL32496 (2014); Bo Hu, “Systemic Obstacles and Possible Solutions to Crisis Management Between China and the US,” China International Strategy Review 3, no. 2 (2021): 261–277, https://doi.org/10.1007/s42533-021-00088-6.

  18. International Crisis Group, “Risky Competition: Strengthening U.S.–China Crisis Management,” May 20, 2022, https://d2071andvip0wj.cloudfront.net/324-us-china-crisis-management.pdf.

  19. Ying Fu and John Allen, “Together: The U.S. and China Can Reduce the Risks from AI,” Noema, December 17, 2020, https://www.noemamag.com/together-the-u-s-and-china-can-reduce-the-risks-from-ai/.

  20. A recent report by the United States Institute of Peace includes brief exchanges between Chinese and American experts on the potential implications of AI for U.S.–China strategic stability. See United States Institute of Peace, Enhancing US-China Strategic Stability in an Era of Strategic Competition (Washington, DC: United States Institute of Peace, 2021), https://www.usip.org/publications/2021/04/enhancing-us-china-strategic-stability-era-strategic-competition.

  21. Gregory C. Allen, “One Key Challenge for Diplomacy on AI: China’s Military Does Not Want to Talk,” Commentary, Center for Strategic and International Studies, May 20, 2022, https://www.csis.org/analysis/one-key-challenge-diplomacy-ai-chinas-military-does-not-want-talk.

  22. Author’s email correspondence with Gregory Allen, June 1, 2022. That China refused to discuss military AI risk reduction in the 2021 DPCT was confirmed in a separate private conversation the author had with two current DOD officials at a conference in June 2022.

  23. A good example is the U.S. Navy’s littoral combat ship (LCS) program. Jonathan Panter and Jonathan Falcone, “The Unplanned Costs of an Unmanned Fleet,” War on the Rocks, December 28, 2021, https://warontherocks.com/2021/12/the-unplanned-costs-of-an-unmanned-fleet/.

  24. Michael C. Horowitz and Paul Scharre, AI and International Stability: Risks and Confidence-building Measures  (Washington, DC: Center for a New American Security, 2021).

  25. Andrew Imbrie and Elsa B. Kania, “AI Safety, Security, and Stability Among Great Powers: Options, Challenges, and Lessons Learned for Pragmatic Engagement,” Center for Security and Emerging Technology, December 2019, https://doi.org/10.51593/20190051.

  26. Paul Scharre, “Debunking the AI Arms Race Theory,” Texas National Security Review 4, no. 3 (2021): 121–132.

  27. Paul Scharre, “A Million Mistakes a Second,” Foreign Policy, Sept. 12, 2018, https://foreignpolicy.com/2018/09/12/a-million-mistakes-a-second-future-of-war/.

  28. Ben Buchanan, “A National Security Research Agenda for Cybersecurity and Artificial Intelligence,” Center for Security and Emerging Technology, May 2020, https://doi.org/10.51593/2020CA001.

  29. To be sure, the threat is not limited to hackers; basic military deception techniques can also be used to corrupt training data sets early on. See, for example, Andrew Pfau, “The Navy Must Learn to Hide from Algorithms,” Proceedings, U.S. Naval Institute, May 2022, https://www.usni.org/magazines/proceedings/2022/may/navy-must-learn-hide-algorithms.

  30. Congressional Research Service, Artificial Intelligence and National Security.

  31. United States Department of the Navy, Unmanned Campaign Framework (Washington, DC: Department of the Navy, 2021).

  32. Haotian Qi, “‘Smart’ Warfare and China-U.S. Stability: Strengths, Myths, and Risks,” China International Strategy Review 3, no. 2 (2021): 278–299, https://doi.org/10.1007/s42533-021-00094-8.

  33. Alexander George, ed., Avoiding War: Problems of Crisis Management  (Boulder, CO: Westview Press Inc, 1991).

  34. Horowitz and Scharre, AI and International Stability.

  35. Zachary Arnold and Helen Toner, “AI Accidents: An Emerging Threat,” Center for Security and Emerging Technology, July 2021, https://doi.org/10.51593/20200072.

  36. Michael C. Horowitz, Lauren Kahn, and Laura Resnick Samotin, “A High-reward, Low-risk Approach to AI Military Innovation,” Foreign Affairs, May/June 2022; Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton & Company, 2018), 143 (Kindle version).

  37. Maria Cramer, “A.I. Drone May Have Acted on Its Own in Attacking Fighters, U.N. Says,” New York Times, June 3, 2021, https://www.nytimes.com/2021/06/03/world/africa/libya-drone.html.

  38. Sinan Tavsan, “Turkish Defense Company Says Drone Unable to Go Rogue in Libya,” Nikkei Asia, June 20, 2021, https://asia.nikkei.com/Business/Aerospace-Defense/Turkish-defense-company-says-drone-unable-to-go-rogue-in-Libya.

  39. Hu, “Systemic Obstacles.”

  40. Congressional Research Service, U.S. Ground Forces Robotics and Autonomous Systems (RAS) and Artificial Intelligence (AI): Considerations for Congress, R45392 (2018).

  41. Congressional Research Service, Navy Large Unmanned Surface and Undersea Vehicles: Background and Issues for Congress, R45757 (2021).

  42. United States Department of the Navy, Unmanned Campaign Framework.

  43. The U.S. Navy’s new operational concept that envisions a more distributed force structure is known as Distributed Maritime Operations (DMO), and a supporting Marine Corps operational concept is known as Expeditionary Advanced Base Operations (EABO).

  44. Office of the Chief of Naval Operations, Report to Congress on the Annual Long-Range Plan for Construction of Naval Vessels for Fiscal Year 2023 (Washington, DC: Office of the Secretary of the Navy, 2022), https://media.defense.gov/2022/Apr/20/2002980535/-1/-1/0/PB23%20SHIPBUILDING%20PLAN%2018%20APR%202022%20FINAL.PDF.

  45. Congressional Research Service, Navy Large Unmanned Surface and Undersea Vehicles.

  46. Erik Lin-Greenberg, “Wargame of Drones: Remotely Piloted Aircraft and Crisis Escalation,” Journal of Conflict Resolution (June 2022) https://doi.org/10.1177/00220027221106960.

  47. Sam LaGrone, “Navy: Large USV Will Require Small Crews for the Next Several Years,” USNI News, August 3, 2021, https://news.usni.org/2021/08/03/navy-large-usv-will-require-small-crews-for-the-next-several-years.

  48. The author thanks Jonathan Panter for discussing and sharing insights on the scenarios explored in this section.

  49. Jonathan Panter and Jonathan Falcone, “Feedback Loops and Fundamental Flaws in Autonomous Warships,” War on the Rocks, June 24, 2022, https://warontherocks.com/2022/06/feedback-loops-and-fundamental-flaws-in-autonomous-warships/.

  50. Emissions control refers to “the selective and controlled use of electromagnetic, acoustic, or other emitters to optimize command and control capabilities while minimizing, for operations security: a. detection by enemy sensors, b. mutual interference among friendly systems, and/or c. enemy interference with the ability to execute a military deception plan.” Office of the Chairman of the Joint Chiefs of Staff, DOD Dictionary of Military and Associated Terms (Washington, DC: The Joint Staff, 2021).

  51. Loyal wingman concepts pair a crewed fighter jet with a team of uncrewed low-cost aircraft. The wingman drones are envisioned to be able to fly autonomously with manned fighter jets, respond to new information on the battlefield, use electronic warfare capabilities to jam enemy signals, and launch their own missiles to carry out airstrikes and destroy targets. Stephen Losey, “How Autonomous Wingmen will Help Fighter Pilots in the Next War,” Defense News, February 15, 2022, https://www.defensenews.com/air/2022/02/13/how-autonomous-wingmen-will-help-fighter-pilots-in-the-next-war/.

  52. Valerie Insinna, “Australia Makes Another Order for Boeing’s Loyal Wingman Drones After a Successful First Flight,” Defense News, March 2, 2021, https://www.defensenews.com/air/2021/03/02/australia-makes-another-order-for-boeing-made-loyal-wingman-drones-after-a-successful-first-flight/.

  53. Tate Nurkin, “The Importance of Advancing Loyal Wingman Technology,” Defense News, December 21, 2020, https://www.defensenews.com/opinion/commentary/2020/12/21/the-importance-of-advancing-loyal-wingman-technology/.

  54. Kristin Huang, “Before the USS Decatur: Five Close China-US Military Encounters,” South China Morning Post, October 2, 2018, https://www.scmp.com/news/china/military/article/2166673/uss-decatur-five-close-china-us-military-encounters.

  55. “Chinese Fighter Jet had ‘Unsafe’ Interaction with U.S. Military Plane in June,” Politico, July 14, 2022, https://www.politico.com/news/2022/07/14/chinese-jet-us-military-interaction-00045832.

  56. Bradley Perrett, “Loyal Wingmen Could Be the Last Aircraft Standing in a Future Conflict,” The Strategist, Australian Strategic Policy Institute, November 22, 2021, https://www.aspistrategist.org.au/loyal-wingmen-could-be-the-last-aircraft-standing-in-a-future-conflict/.

  57. Mark J. Valencia, “US-China Underwater Drone Incident: Legal Grey Areas,” The Diplomat, January 11, 2017, https://thediplomat.com/2017/01/us-china-underwater-drone-incident-legal-grey-areas/.

  58. Elsa B. Kania, “‘AI Weapons’ in China’s Military Innovation,” Global China: Assessing China’s Growing Role in the World, Brookings Institution, April 2020.

  59. In the 2016 Bowditch incident, the two countries took conflicting positions on the legal status of the UUV and correspondingly, its rights and obligations when operating in waters China claims to be under its jurisdiction. Washington insisted that the UUV was “a sovereign immune vessel of the United States,” whereas China insisted the UUV did not meet the criteria for “warship” or “government vessel” as defined by UNCLOS and therefore was not entitled to sovereign immunity. Although this incident ended uneventfully with China returning the UUV to the United States, the two sides did not utilize the incident as an opportunity to initiate discussions to address their fundamental disagreement. For the U.S. position, see: United States Department of Defense, “Statement by Pentagon Press Secretary Peter Cook on Incident in the South China Sea,” December 16, 2016, https://www.defense.gov/News/Releases/Release/Article/1032611/statement-by-pentagon-press-secretary-peter-cook-on-incident-in-south-china-sea/.

  60. James Kraska and Raul (Pete) Pedrozo, “China’s Capture of U.S. Underwater Drone Violates Law of the Sea,” Lawfare, December 16, 2016, https://www.lawfareblog.com/chinas-capture-us-underwater-drone-violates-law-sea. On China’s position, see, for example, Yan Yan 闫岩, “Guanyu mei wu ren qianhang qi huomian quan wenti pingxi” 关于美无人潜航器豁免权问题评析 [Analysis of Sovereign Immunity for U.S. UUV], Collaborative Innovation Center of South China Sea Studies, December 20, 2016, https://nanhai.nju.edu.cn/a4/5a/c5800a173146/page.htm.

  61. For more general studies on these legal issues, see: Robert Veal, Michael Tsimplis, and Andrew Serdy, “The Legal Status and Operation of Unmanned Maritime Vehicles,” Ocean Development & International Law 50, no. 1 (2019): 23–48.

  62. Michael N. Schmitt and David S. Goddard, “International Law and the Military Use of Unmanned Systems,” International Review of the Red Cross 98, no. 2 (2016): 567–592.

  63. Natalie Klein, “Maritime Autonomous Vehicles within the International Law Framework to Enhance Maritime Security,” International Law Studies 95 (2019): 245–271.

  64. Ryan Fedasiuk, “Chinese Perspectives on AI and Future Military Capabilities,” Center for Security and Emerging Technology, August 2020, https://doi.org/10.51593/20200022.

  65. Congressional Research Service, Lethal Autonomous Weapon Systems: Issues for Congress, R44466 (2016).

  66. Imbrie and Kania, “AI Safety, Security, and Stability.”

  67. The author thanks Tom Stefanick for pointing this out.

  68. Alex Stephenson and Ryan Fedasiuk, “How AI would – and wouldn’t – Factor into a U.S.-Chinese War,” War on the Rocks, May 3, 2022, https://warontherocks.com/2022/05/how-ai-would-and-wouldnt-factor-into-a-u-s-chinese-war/.

  69. Defense Science Board, Counter Autonomy: Executive Summary (Washington, DC: Office of the Under Secretary of Defense for Research and Engineering, 2020).

  70. Adversarial machine learning seeks to fool machine learning models into making a mistake by providing training data intentionally designed for this purpose. Counter autonomy capabilities refer to methods that reduce the effectiveness of an autonomous system or cause it to fail in its intended mission. This could include traditional kinetic destruction of the system as well as efforts to confuse sensors, poison data, attack via cyber methods, etc.
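
A minimal sketch, for illustration only (not drawn from the cited sources; the synthetic dataset and the 30 percent flip rate are arbitrary assumptions), of the data-poisoning variant of adversarial machine learning described above, implemented here as simple label flipping:

```python
# Editor's illustrative sketch: poisoning a training set by flipping labels and
# comparing the resulting model with one trained on clean data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

rng = np.random.default_rng(1)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]  # attacker silently flips 30% of the labels
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print(f"Clean-data model accuracy:    {clean_model.score(X_test, y_test):.2f}")
print(f"Poisoned-data model accuracy: {poisoned_model.score(X_test, y_test):.2f}")
```

Counter-autonomy methods extend far beyond this toy case, to sensor spoofing, cyber intrusion, and kinetic attack, but even simple label flipping shows how corrupted training data can quietly degrade a deployed model.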

  71. Fu and Allen, “Together.”

  72. United Nations Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons, Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Chairperson’s summary, CCW/GGE.1/2020/WP.7, April 19, 2021.

  73. Jovana Davidovic, “What’s Wrong with Wanting a ‘Human in the Loop’?” War on the Rocks, June 23, 2022, https://warontherocks.com/2022/06/whats-wrong-with-wanting-a-human-in-the-loop/.

Acknowledgements

The author would like to thank Ryan Hass, Chris Meserole, Jonathan Panter, Melanie Sisson, Tom Stefanick, and conference participants at the summer workshop hosted by the Cyber and Innovation Policy Institute (CIPI) of the U.S. Naval War College for feedback and suggestions on earlier drafts of this article. The author also thanks the editors of China International Strategy Review for their assistance in the production process.

Funding

The author has not disclosed any funding.

Author information

Corresponding author

Correspondence to Shuxian Luo.

Ethics declarations

Conflict of interest

The author declares that she has no conflict of interest.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Luo, S. Addressing military AI risks in U.S.–China crisis management mechanisms. China Int Strategy Rev. 4, 233–247 (2022). https://doi.org/10.1007/s42533-022-00110-5
