Abstract
Modeling combat behavior is an important yet complicated task because combat behavior emerges from rationality as well as irrationality. For instance, when a soldier confronts a dilemma between accomplishing his mission and saving his life, it is difficult to capture his ongoing deliberation with a simple model. This paper presents (1) how to reconstruct a realistic combat environment with a virtual-constructive simulation, and (2) how to model such combat behavior with inverse reinforcement learning. Virtual-constructive simulation is a well-known simulation application for soldier training. Previous works on virtual-constructive simulation focused on a small number of entities and mission phases, so the behavioral dilemmas frequently seen in the field were difficult to observe. This work presents a large-scale, complete brigade-level operation that provides such a synthetic environment to human players. We then observe combat behavior through the virtual-constructive simulations and model that behavior with inverse reinforcement learning. Descriptive statistics of the observed behavior are readily available, but inverse reinforcement learning additionally provides calibrated weights on the hypothetical rewards arising from conflicting goals. Our study is the first attempt to merge large-scale virtual-constructive simulation and inverse reinforcement learning at such a massive scale.
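The weight calibration described above can be illustrated at toy scale with a maximum-likelihood sketch: assume the observed soldier chooses among candidate actions via a softmax policy over a linear reward w1·mission_progress + w2·survival, and recover the weights from demonstrations. The feature set, the synthetic data, and every name below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each decision point offers 3 candidate actions; each action carries two
# hypothetical reward features: (mission_progress, survival_odds).
# True (unknown) preference weights the demonstrator trades off:
w_true = np.array([1.5, 0.8])

def sample_demos(n=2000):
    """Synthetic 'observed behavior': the demonstrator picks actions via a
    softmax (Boltzmann) policy over the linear reward w·phi."""
    feats = rng.normal(size=(n, 3, 2))   # n decisions, 3 actions, 2 features
    logits = feats @ w_true
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    choices = np.array([rng.choice(3, p=p) for p in probs])
    return feats, choices

def fit_weights(feats, choices, lr=0.5, iters=300):
    """Maximum-likelihood weight recovery for the softmax choice model:
    the gradient is (observed features) - (expected features under w)."""
    w = np.zeros(2)
    for _ in range(iters):
        logits = feats @ w
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        observed = feats[np.arange(len(choices)), choices].mean(axis=0)
        expected = (probs[..., None] * feats).sum(axis=1).mean(axis=0)
        w += lr * (observed - expected)
    return w

feats, choices = sample_demos()
w_hat = fit_weights(feats, choices)
print("recovered weights:", w_hat)   # close to w_true, up to sampling noise
```

This one-step choice model omits the sequential (Markov decision process) structure that full inverse reinforcement learning handles, but the gradient has the same feature-matching form used in maximum-entropy IRL.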
Acknowledgment
This research was supported by the Korean ICT R&D program of MSIP/IITP (R7117-16-0219, Development of Predictive Analysis Technology on Socio-Economics using Self-Evolving Agent-Based Simulation embedded with Incremental Machine Learning).
Copyright information
© 2016 Springer Science+Business Media Singapore
Cite this paper
Kim, D., Kim, DH., Moon, IC. (2016). Inverse Modeling of Combat Behavior with Virtual-Constructive Simulation Training. In: Zhang, L., Song, X., Wu, Y. (eds) Theory, Methodology, Tools and Applications for Modeling and Simulation of Complex Systems. AsiaSim SCS AutumnSim 2016 2016. Communications in Computer and Information Science, vol 644. Springer, Singapore. https://doi.org/10.1007/978-981-10-2666-9_60
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-2665-2
Online ISBN: 978-981-10-2666-9