Virtual Reality, Volume 8, Issue 3, pp 185–193

Intelligent virtual agents keeping watch in the battlefield

Original Article

Abstract

One of the first areas where virtual reality found practical application was military training. Two fairly obvious reasons have driven the military to explore and employ this kind of technique in their training: to reduce exposure to hazards and to increase stealth. Many aspects of combat operations are very hazardous, and they become even more dangerous if the combatant seeks to improve his performance. Some smart weapons are autonomous, while others are remotely controlled after they are launched. This allows the shooter or weapon controller to launch the weapon and immediately seek cover, thus decreasing his exposure to return fire. Before launching a weapon, the person who controls it must acquire and perceive as much information as possible, not only about the environment, but also about the people who inhabit it. Intelligent virtual agents (IVAs) are used in a wide variety of simulation environments, especially to simulate realistic situations, such as a high-fidelity virtual environment (VE) for military training that allows thousands of agents to interact in battlefield scenarios. In this paper, we propose a perceptual model that seeks to introduce more coherence between IVA perception and human perception, increasing the psychological “coherence” between real life and the VE experience. Agents lacking such a perceptual model can react in unrealistic ways, hearing or seeing things that are too far away or hidden behind other objects. The perceptual model we propose introduces human limitations into the agent’s perception with the aim of reflecting how humans actually perceive.
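
The focus and nimbus keywords below point to Benford and Fahlén’s spatial model of interaction, in which awareness between two agents is governed by the observer’s focus (the region it attends to) and the observed party’s nimbus (the region over which it projects its presence). As an illustration of how such a test could bound what an agent perceives, here is a minimal Python sketch; the class names, parameters, and the circle-based occlusion test are assumptions for exposition, not the authors’ implementation.

```python
import math

# Minimal focus/nimbus awareness sketch (illustrative assumptions only).
class Agent:
    def __init__(self, position, focus_radius, nimbus_radius):
        self.position = position            # (x, y) world coordinates
        self.focus_radius = focus_radius    # how far the agent attends
        self.nimbus_radius = nimbus_radius  # how far the agent projects presence

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def is_occluded(observer, target, obstacles):
    """Hypothetical line-of-sight test: True if any circular obstacle
    ((cx, cy), radius) blocks the segment from observer to target."""
    ox, oy = observer
    tx, ty = target
    dx, dy = tx - ox, ty - oy
    seg_len_sq = dx * dx + dy * dy or 1e-9
    for (cx, cy), r in obstacles:
        # Closest point on the segment to the obstacle centre.
        t = max(0.0, min(1.0, ((cx - ox) * dx + (cy - oy) * dy) / seg_len_sq))
        px, py = ox + t * dx, oy + t * dy
        if math.hypot(cx - px, cy - py) < r:
            return True
    return False

def awareness(observer, observed, obstacles):
    """0.0 when the observed agent lies outside the observer's focus,
    outside its own nimbus, or behind an obstacle; otherwise awareness
    fades linearly with distance, mimicking human perceptual limits."""
    d = distance(observer.position, observed.position)
    if d > observer.focus_radius or d > observed.nimbus_radius:
        return 0.0
    if is_occluded(observer.position, observed.position, obstacles):
        return 0.0
    return 1.0 - d / max(observer.focus_radius, 1e-9)

# Example: a nearby agent is perceived until a wall blocks the view.
a = Agent((0.0, 0.0), focus_radius=50.0, nimbus_radius=40.0)
b = Agent((30.0, 0.0), focus_radius=50.0, nimbus_radius=40.0)
wall = [((15.0, 0.0), 2.0)]
print(awareness(a, b, obstacles=[]))    # 0.4: within focus and nimbus
print(awareness(a, b, obstacles=wall))  # 0.0: hidden behind the wall
```

A cutoff of this kind is what keeps an agent from hearing or seeing things that are too far away or hidden behind other objects; the same structure extends to hearing by substituting an auditory nimbus for the visual one.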

Keywords

Intelligent virtual agents (IVAs) · Perception · Awareness · Focus · Nimbus · Human factors

Copyright information

© Springer-Verlag London Limited 2005

Authors and Affiliations

  1. Facultad de Informática, Universidad Politécnica de Madrid, Madrid, Spain
