Volume 19, Issue 3, pp. 370–401
Open Access: this content is freely available online to anyone, anywhere, at any time.
Date: 29 Sep 2012

Inducing models of behavior from expert task performance in virtual environments

Abstract

We developed an end-to-end process for inducing models of behavior from expert task performance through an in-depth case study. A subject matter expert (SME) performed navigational and adversarial tasks in a virtual tank combat simulation, using the dTank and Unreal platforms. Using eye tracking and Cognitive Task Analysis, we identified the key goals pursued by the SME and the attributes the SME relied upon, including an egocentric spatial representation and on-the-fly re-representation of terrain in qualitative terms such as “safe” and “risky”. We demonstrated methods for automatically extracting these qualitative higher-order features from combinations of surface features present in the simulation, producing a terrain map visually similar to the SME-annotated map. Applying decision-tree and instance-based machine learning methods to the transformed task data supported prediction of SME task selection with greater than 95% accuracy, and of SME action selection at a frequency of 10 Hz with greater than 63% accuracy, with real-time constraints placing limits on algorithm selection. A complete processing model is presented for a path-driving task, with the induced generative model deviating from the SME-chosen path by less than 2 meters on average. The derived attributes also enabled environment portability: path-driving models induced from dTank performance and deployed in Unreal demonstrated accuracy equivalent to models induced and deployed entirely within Unreal.
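To illustrate the instance-based approach described above, the sketch below predicts an expert action from logged (state, action) pairs via a nearest-neighbor lookup. The feature names, values, and action labels are invented for this example and are not taken from the paper; the actual feature set was derived from the SME's qualitative terrain attributes.

```python
import math

def nearest_neighbor_predict(examples, state):
    """Return the action of the stored example whose feature vector
    is closest (Euclidean distance) to the queried state."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    features, action = min(examples, key=lambda ex: dist(ex[0], state))
    return action

# Hypothetical logged SME examples:
# each is ((distance_to_enemy, terrain_risk, speed), action)
examples = [
    ((120.0, 0.1, 5.0), "advance"),
    ((40.0, 0.8, 2.0), "retreat"),
    ((60.0, 0.5, 0.0), "hold"),
]

# Query state closest to the "retreat" example
print(nearest_neighbor_predict(examples, (45.0, 0.7, 1.5)))  # → retreat
```

A flat scan like this is adequate for small example sets; the real-time (10 Hz) constraint mentioned in the abstract is one reason algorithm choice matters, since lookup cost grows with the number of stored instances unless an index such as a k-d tree is used.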