Inverse Modeling of Combat Behavior with Virtual-Constructive Simulation Training

  • Conference paper
  • In:
Theory, Methodology, Tools and Applications for Modeling and Simulation of Complex Systems (AsiaSim 2016, SCS AutumnSim 2016)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 644)


Abstract

Modeling combat behavior is an important yet complicated task, because combat behavior emerges from rationality as well as irrationality. For instance, when a soldier faces a dilemma between accomplishing his mission and saving his own life, it is difficult to capture his ongoing deliberation with a simple model. This paper presents (1) how to reconstruct a realistic combat environment with a virtual-constructive simulation, and (2) how to model such combat behavior with inverse reinforcement learning. Virtual-constructive simulation is a well-known simulation application for soldier training. Previous work on virtual-constructive simulation focused on a small number of entities and mission phases, so the behavioral dilemmas frequently seen in the field were difficult to observe. This work presents a large-scale, complete brigade-level operation that provides such a synthetic environment to human players. Our second contribution is observing combat behavior through the virtual-constructive simulations and modeling that behavior with inverse reinforcement learning. Descriptive statistics of the observed behavior are certainly available, but inverse reinforcement learning additionally provides calibrated weights on the valuation of hypothetical rewards arising from conflicting goals. Our study is the first attempt to merge large-scale virtual-constructive simulation and inverse reinforcement learning at such a massive scale.
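The paper itself does not publish its estimation code; as a minimal illustrative sketch of the idea in the abstract, the snippet below recovers linear reward weights over two conflicting goals (mission progress vs. survival) from observed choices, using a Boltzmann-choice maximum-likelihood fit in the spirit of maximum-entropy inverse reinforcement learning. The feature values, the single-decision setting, and the function names are all hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical per-action features: phi(a) = [mission_progress, survival].
PHI = np.array([
    [1.0, 0.2],   # action 0: advance  -> high mission progress, low survival
    [0.1, 1.0],   # action 1: retreat  -> low mission progress, high survival
])

def softmax_policy(w):
    """Boltzmann action distribution under the linear reward r(a) = w . phi(a)."""
    z = PHI @ w
    z -= z.max()               # numerical stability
    p = np.exp(z)
    return p / p.sum()

def fit_weights(actions, lr=0.5, steps=2000):
    """Maximum-likelihood estimate of reward weights from observed actions."""
    w = np.zeros(PHI.shape[1])
    emp = PHI[actions].mean(axis=0)   # empirical feature expectation
    for _ in range(steps):
        p = softmax_policy(w)
        grad = emp - p @ PHI          # gradient of the average log-likelihood
        w += lr * grad
    return w

# Observed choices: the trainee advanced in 7 of 10 dilemma situations.
observed = np.array([0] * 7 + [1] * 3)
w = fit_weights(observed)
```

The fitted weight vector plays the role of the "calibrated weights" mentioned in the abstract: its components quantify how strongly the observed behavior values mission progress relative to survival.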



Acknowledgment

This research was supported by the Korean ICT R&D program of MSIP/IITP (R7117-16-0219, Development of Predictive Analysis Technology on Socio-Economics using Self-Evolving Agent-Based Simulation embedded with Incremental Machine Learning).

Author information

Correspondence to Il-Chul Moon.


Copyright information

© 2016 Springer Science+Business Media Singapore

About this paper

Cite this paper

Kim, D., Kim, D.H., Moon, I.-C. (2016). Inverse Modeling of Combat Behavior with Virtual-Constructive Simulation Training. In: Zhang, L., Song, X., Wu, Y. (eds.) Theory, Methodology, Tools and Applications for Modeling and Simulation of Complex Systems. AsiaSim/SCS AutumnSim 2016. Communications in Computer and Information Science, vol. 644. Springer, Singapore. https://doi.org/10.1007/978-981-10-2666-9_60

  • DOI: https://doi.org/10.1007/978-981-10-2666-9_60

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-2665-2

  • Online ISBN: 978-981-10-2666-9

  • eBook Packages: Computer Science (R0)
