
Addressing military AI risks in U.S.–China crisis management mechanisms

  • Original Paper
  • Published in China International Strategy Review


Both the U.S. and Chinese militaries have been committed to investing in emerging technologies, in particular artificial intelligence (AI), to increase effectiveness and efficiency in command and control, weapons systems, semiautonomous and autonomous vehicles, intelligence, surveillance, reconnaissance (ISR), and logistics. Strategic communities in both countries have increasingly cautioned about the potential ramifications of military AI for future crisis dynamics. Building on existing scholarship, this article explores how the growing usage of military AI may impact U.S.–China crisis prevention and management against the backdrop of heightened geopolitical tensions in the Western Pacific.



  1. The term “autonomy” refers to the ability of a machine to perform a task without human input. This is conceptually different from two other terms: “automatic” (systems that have very simple, mechanical responses to environmental inputs, e.g., trip wires and mines) and “automated” (more complex, rule-based systems, e.g., self-driving cars and modern programmable thermostats). There is no internationally agreed-upon definition for “autonomous weapon.” This study adopts the definition provided in U.S. DODD 3000.09: an autonomous weapon system is a weapon system that, once activated, can select and engage targets without further intervention by a human operator. This is conceptually different from a “human-supervised autonomous weapon system” (an autonomous weapon system designed to provide human operators with the ability to intervene and terminate engagements) and a “semi-autonomous weapon system” (a weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator). Paul Scharre and Michael Horowitz, “An Introduction to Autonomy in Weapon Systems,” Project on Ethical Autonomy Working Paper, Center for a New American Security, February 2015; United States Department of Defense, Autonomy in Weapon Systems, DOD Directive 3000.09 (Washington, DC: Department of Defense, 2012).

  2. Congressional Research Service, Artificial Intelligence and National Security, R45178 (2020).

  3. United States Department of Defense, Summary of the 2018 Department of Defense Artificial Intelligence Strategy (Washington, DC: Department of Defense, 2019).

  4. All current AI systems fall under the Narrow AI category, which refers to algorithms that address specific tasks. The most prevalent approach to Narrow AI is machine learning, which involves statistical algorithms that replicate human cognitive tasks by deriving their own procedures from analysis of large training data sets. It will take much longer to develop General AI, which refers to more complicated systems capable of human-level intelligence across a broad range of tasks. Congressional Research Service, Artificial Intelligence and National Security.

  5. China State Council, “Guowuyuan Guanyu Yinfa Xin Yidai Rengong Zhineng Fazhan Guihua de Tongzhi” 国务院关于印发新一代人工智能发展规划的通知 [China’s Next Generation Artificial Intelligence Development Plan (2017)], July 8, 2017.

  6. China State Council, “Zhonghua Renmin Gongheguo Guomin Jingji He Shehui Fazhan Di Shisi Ge Wu Nian Guihua He 2035 Nian Yuanjing Mubiao Gangyao” 中华人民共和国国民经济和社会发展第十四个五年规划和2035年远景目标纲要 [Outline of the 14th Five-Year Plan (2021–2025) for National Economic and Social Development and Long-Range Objectives Through 2035], March 13, 2021.

  7. United States Department of Defense, Military and Security Developments Involving the People’s Republic of China 2021 (Washington, DC: Office of the Secretary of Defense, 2021).

  8. Gregory C. Allen, Understanding China’s AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security (Washington, DC: Center for a New American Security, 2019).

  9. United States Department of Defense, Summary of the 2018 Department of Defense Artificial Intelligence Strategy.

  10. United States Department of Defense, Unmanned Systems Integrated Roadmap, 2017–2042 (Washington, DC: Office of the Assistant Secretary of Defense for Acquisition, 2018).

  11. United States Department of Defense, Autonomy in Weapon Systems.

  12. United States Department of Defense, “DOD Adopts Ethical Principles for Artificial Intelligence,” February 24, 2020.

  13. China Ministry of Science and Technology, “Xin yidai rengong zhineng yuanze—fazhan fuze ren de rengong zhineng” 新一代人工智能原则—发展负责任的人工智能 [Governance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artificial Intelligence], June 17, 2019.

  14. United States Indo-Pacific Command, “U.S. Indo-Pacific Command Representatives Meet with Chinese Counterparts at Military Maritime Consultative Agreement Working Group,” December 17, 2021.

  15. China Ministry of Defense, “Regular Press Conference of the Ministry of National Defense on December 30,” January 7, 2022.

  16. For details on existing crisis prevention and management mechanisms, see Tuosheng Zhang, “Strengthening Crisis Management, the Most Urgent Task in Current China-US and China-Japan Security Relations,” China International Strategy Review 3, no. 1 (2021): 34–55.

  17. Congressional Research Service, U.S.–China Military Contacts: Issues for Congress, RL32496 (2014); Bo Hu, “Systemic Obstacles and Possible Solutions to Crisis Management Between China and the US,” China International Strategy Review 3, no. 2 (2021): 261–277.

  18. International Crisis Group, “Risky Competition: Strengthening U.S.–China Crisis Management,” May 20, 2022.

  19. Ying Fu and John Allen, “Together: The U.S. and China Can Reduce the Risks from AI,” Noema, December 17, 2020.

  20. A recent report by the United States Institute of Peace includes brief exchanges between Chinese and American experts on the potential implications of AI for U.S.–China strategic stability. See United States Institute of Peace, Enhancing US-China Strategic Stability in an Era of Strategic Competition (Washington, DC: United States Institute of Peace, 2021).

  21. Gregory C. Allen, “One Key Challenge for Diplomacy on AI: China’s Military Does Not Want to Talk,” Commentary, Center for Strategic and International Studies, May 20, 2022.

  22. Author’s email correspondence with Gregory Allen, June 1, 2022. That China refused to discuss military AI risk reduction in the 2021 DPCT was confirmed in a separate private conversation the author had with two current DOD officials at a conference in June 2022.

  23. A good example is the U.S. Navy’s littoral combat ship (LCS) program. Jonathan Panter and Jonathan Falcone, “The Unplanned Costs of an Unmanned Fleet,” War on the Rocks, December 28, 2021.

  24. Michael C. Horowitz and Paul Scharre, AI and International Stability: Risks and Confidence-building Measures  (Washington, DC: Center for a New American Security, 2021).


  25. Andrew Imbrie and Elsa B. Kania, “AI Safety, Security, and Stability Among Great Powers: Options, Challenges, and Lessons Learned for Pragmatic Engagement,” Center for Security and Emerging Technology, December 2019.

  26. Paul Scharre, “Debunking the AI Arms Race Theory,” Texas National Security Review 4, no. 3 (2021): 121–132.

  27. Paul Scharre, “A Million Mistakes a Second,” Foreign Policy, September 12, 2018.

  28. Ben Buchanan, “A National Security Research Agenda for Cybersecurity and Artificial Intelligence,” Center for Security and Emerging Technology, May 2020.

  29. Not just hackers: basic military deception techniques can also be used to corrupt training datasets at an early stage. See, for example, Andrew Pfau, “The Navy Must Learn to Hide from Algorithms,” Proceedings, U.S. Naval Institute, May 2022.

  30. Congressional Research Service, Artificial Intelligence and National Security.

  31. United States Department of the Navy, Unmanned Campaign Framework (Washington, DC: Department of the Navy, 2021).

  32. Haotian Qi, “‘Smart’ Warfare and China-U.S. Stability: Strengths, Myths, and Risks,” China International Strategy Review 3, no. 2 (2021): 278–299.

  33. Alexander George, ed., Avoiding War: Problems of Crisis Management  (Boulder, CO: Westview Press Inc, 1991).

  34. Horowitz and Scharre, AI and International Stability.

  35. Zachary Arnold and Helen Toner, “AI Accidents: An Emerging Threat,” Center for Security and Emerging Technology, July 2021.

  36. Michael C. Horowitz, Lauren Kahn, and Laura Resnick Samotin, “A High-reward, Low-risk Approach to AI Military Innovation,” Foreign Affairs, May/June 2022; Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton & Company, 2018), 143 (Kindle version).

  37. Maria Cramer, “A.I. Drone May Have Acted on Its Own in Attacking Fighters, U.N. Says,” New York Times, June 3, 2021.

  38. Sinan Tavsan, “Turkish Defense Company Says Drone Unable to Go Rogue in Libya,” Nikkei Asia, June 20, 2021.

  39. Hu, “Systemic Obstacles.”

  40. Congressional Research Service, U.S. Ground Forces Robotics and Autonomous Systems (RAS) and Artificial Intelligence (AI): Considerations for Congress, R45392 (2018).

  41. Congressional Research Service, Navy Large Unmanned Surface and Undersea Vehicles: Background and Issues for Congress, R45757 (2021).

  42. United States Department of the Navy, Unmanned Campaign Framework.

  43. The U.S. Navy’s new operational concept that envisions a more distributed force structure is known as Distributed Maritime Operations (DMO), and a supporting Marine Corps operational concept is known as Expeditionary Advanced Base Operations (EABO).

  44. Office of the Chief of Naval Operations, Report to Congress on the Annual Long-Range Plan for Construction of Naval Vessels for Fiscal Year 2023 (Washington, DC: Office of the Secretary of the Navy, 2022).

  45. Congressional Research Service, Navy Large Unmanned Surface and Undersea Vehicles: Background and Issues for Congress.

  46. Erik Lin-Greenberg, “Wargame of Drones: Remotely Piloted Aircraft and Crisis Escalation,” Journal of Conflict Resolution (June 2022).

  47. Sam LaGrone, “Navy: Large USV Will Require Small Crews for the Next Several Years,” USNI News, August 3, 2021.

  48. The author thanks Jonathan Panter for discussing and sharing insights on the scenarios explored in this section.

  49. Jonathan Panter and Jonathan Falcone, “Feedback Loops and Fundamental Flaws in Autonomous Warships,” War on the Rocks, June 24, 2022.

  50. Emissions control refers to “the selective and controlled use of electromagnetic, acoustic, or other emitters to optimize command and control capabilities while minimizing, for operations security: a. detection by enemy sensors; b. mutual interference among friendly systems; and/or c. enemy interference with the ability to execute a military deception plan.” Office of the Chairman of the Joint Chiefs of Staff, DOD Dictionary of Military and Associated Terms (Washington, DC: The Joint Staff, 2021).

  51. Loyal wingman concepts pair a crewed fighter jet with a team of uncrewed, low-cost aircraft. The wingman drones are envisioned to be able to fly autonomously alongside manned fighter jets, respond to new information on the battlefield, use electronic warfare capabilities to jam enemy signals, and launch their own missiles to carry out airstrikes and destroy targets. Stephen Losey, “How Autonomous Wingmen will Help Fighter Pilots in the Next War,” Defense News, February 15, 2022.

  52. Valerie Insinna, “Australia Makes Another Order for Boeing’s Loyal Wingman Drones After a Successful First Flight,” Defense News, March 2, 2021.

  53. Tate Nurkin, “The Importance of Advancing Loyal Wingman Technology,” Defense News, December 21, 2020.

  54. Kristin Huang, “Before the USS Decatur: Five Close China-US Military Encounters,” South China Morning Post, October 2, 2018.

  55. “Chinese Fighter Jet had ‘Unsafe’ Interaction with U.S. Military Plane in June,” Politico, July 14, 2022.

  56. Bradley Perrett, “Loyal Wingmen could be the Last Aircraft Standing in a Future Conflict,” The Strategist, Australian Strategic Policy Institute, November 22, 2021.

  57. Mark J. Valencia, “US-China Underwater Drone Incident: Legal Grey Areas,” The Diplomat, January 11, 2017.

  58. Elsa B. Kania, “‘AI Weapons’ in China’s Military Innovation,” Global China: Assessing China’s Growing Role in the World, Brookings Institution, April 2020.

  59. In the 2016 Bowditch incident, the two countries took conflicting positions on the legal status of the UUV and, correspondingly, its rights and obligations when operating in waters China claims to be under its jurisdiction. Washington insisted that the UUV was “a sovereign immune vessel of the United States,” whereas China insisted the UUV did not meet the criteria for “warship” or “government vessel” as defined by UNCLOS and therefore was not entitled to sovereign immunity. Although this incident ended uneventfully with China returning the UUV to the United States, the two sides did not use the incident as an opportunity to initiate discussions addressing their fundamental disagreement. For the U.S. position, see United States Department of Defense, “Statement by Pentagon Press Secretary Peter Cook on Incident in the South China Sea,” December 16, 2016.

  60. James Kraska and Raul (Pete) Pedrozo, “China’s Capture of U.S. Underwater Drone Violates Law of the Sea,” Lawfare, December 16, 2016. On China’s position, see, for example, Yan Yan 闫岩, “Guanyu mei wu ren qianhang qi huomian quan wenti pingxi” 关于美无人潜航器豁免权问题评析 [Analysis of Sovereign Immunity for U.S. UUV], Collaborative Innovation Center of South China Sea Studies, December 20, 2016.

  61. For more general studies on these legal issues, see Robert Veal, Michael Tsimplis, and Andrew Serdy, “The Legal Status and Operation of Unmanned Maritime Vehicles,” Ocean Development & International Law 50, no. 1 (2019): 23–48.

  62. Michael N. Schmitt and David S. Goddard, “International Law and the Military Use of Unmanned Systems,” International Review of the Red Cross 98, no. 2 (2016): 567–592.

  63. Natalie Klein, “Maritime Autonomous Vehicles within the International Law Framework to Enhance Maritime Security,” International Law Studies 95 (2019): 245–271.

  64. Ryan Fedasiuk, “Chinese Perspectives on AI and Future Military Capabilities,” Center for Security and Emerging Technology, August 2020.

  65. Congressional Research Service, Lethal Autonomous Weapon Systems: Issues for Congress, R44466 (2016).

  66. Imbrie and Kania, “AI Safety, Security, and Stability.”

  67. The author thanks Tom Stefanick for pointing this out.

  68. Alex Stephenson and Ryan Fedasiuk, “How AI Would – and Wouldn’t – Factor into a U.S.-Chinese War,” War on the Rocks, May 3, 2022.

  69. Defense Science Board, Counter Autonomy: Executive Summary (Washington, DC: Office of the Under Secretary of Defense for Research and Engineering, 2020).


  70. Adversarial machine learning seeks to fool machine learning models into making a mistake by providing training data intentionally designed for this purpose. Counter autonomy capabilities refer to methods that reduce the effectiveness of an autonomous system or cause it to fail in its intended mission. This could include traditional kinetic destruction of the system as well as efforts to confuse sensors, poison data, attack via cyber methods, etc.
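To make the data-poisoning idea in this note concrete, the toy Python sketch below shows how a handful of mislabeled training points can shift a learned class centroid enough to flip a prediction. It is purely illustrative: the "ship"/"decoy" classifier, the one-dimensional feature values, and the injected points are all invented for this example, not drawn from any real military system.

```python
# Purely illustrative sketch of training-data poisoning (invented example).

def train_centroids(samples):
    """Learn a per-class mean (centroid) from (feature, label) pairs."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def classify(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda c: abs(centroids[c] - x))

# Clean training data: "ship" signatures cluster near 1.0, "decoy" near 5.0.
clean = [(0.9, "ship"), (1.1, "ship"), (4.9, "decoy"), (5.1, "decoy")]

# Poisoning: the adversary injects mislabeled "decoy" points near the ship
# cluster, dragging the learned "decoy" centroid toward real ship signatures.
poisoned = clean + [(1.8, "decoy")] * 8

print(classify(train_centroids(clean), 1.8))     # -> ship
print(classify(train_centroids(poisoned), 1.8))  # -> decoy
```

The same signature (1.8) is correctly recognized by the cleanly trained model but misclassified after poisoning, which is the mechanism that adversarial machine learning and counter-autonomy techniques exploit at far greater scale and subtlety in real systems.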

  71. Fu and Allen, “Together.”

  72. United Nations Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons, Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Chairperson’s Summary, CCW/GGE.1/2020/WP.7, April 19, 2021.

  73. Jovana Davidovic, “What’s Wrong with Wanting a ‘Human in the Loop’?” War on the Rocks, June 23, 2022.



Acknowledgements

The author would like to thank Ryan Hass, Chris Meserole, Jonathan Panter, Melanie Sisson, Tom Stefanick, and conference participants at the summer workshop hosted by the Cyber and Innovation Policy Institute (CIPI) of the U.S. Naval War College for feedback and suggestions on earlier drafts of this article. The author also thanks the editors of China International Strategy Review for their assistance in the production process.


Funding

The author has not disclosed any funding.

Author information



Corresponding author

Correspondence to Shuxian Luo.

Ethics declarations

Conflict of interest

The author declares that she has no conflict of interest.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Luo, S. Addressing military AI risks in U.S.–China crisis management mechanisms. China Int Strategy Rev. 4, 233–247 (2022).
