Intelligent Camera Control Using Behavior Trees

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7060)


Automatic camera systems produce very basic animations for virtual worlds. Users often view environments through one of two camera types: a camera that they control manually, or a very basic automatic camera that follows their character while minimizing occlusions. Real cinematography features much more variety, producing more robust stories. Cameras shoot establishing shots, close-ups, tracking shots, and bird's eye views to enrich a narrative. Camera techniques such as zoom, focus, and depth of field contribute to framing a particular shot. We present an intelligent camera system that automatically positions, pans, tilts, zooms, and tracks events occurring in real time while obeying traditional standards of cinematography. We design behavior trees that describe how a single intelligent camera might behave, driven by low-level narrative elements assigned by "smart events". Camera actions are formed by hierarchically arranging behavior sub-trees encapsulating nodes that control specific camera semantics. This approach is modular and readily reusable for quickly creating complex camera styles and transitions, rather than focusing only on visibility. Additionally, our user interface allows a director to provide further camera instructions, such as prioritizing one event over another, drawing a path for the camera to follow, and adjusting camera settings on the fly. We demonstrate our method by placing multiple intelligent cameras in a complicated world with several events and storylines, and illustrate how to produce a well-shot "documentary" of the events constructed in real time.
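The hierarchical composition the abstract describes can be illustrated with a minimal behavior-tree sketch. This is not the paper's implementation; the node names, the `Camera` state, and the leaf behaviors (`close_up`, `establish`) are hypothetical, chosen only to show how a priority selector over sub-trees can encode a camera style reacting to an event.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Selector:
    """Tries children in priority order until one does not fail."""
    def __init__(self, *children):
        self.children = children
    def tick(self, camera, event):
        for child in self.children:
            status = child.tick(camera, event)
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

class Sequence:
    """Runs children in order; fails or pauses as soon as one does."""
    def __init__(self, *children):
        self.children = children
    def tick(self, camera, event):
        for child in self.children:
            status = child.tick(camera, event)
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Condition:
    """Leaf: checks a predicate on the current event."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, camera, event):
        return Status.SUCCESS if self.predicate(event) else Status.FAILURE

class Action:
    """Leaf: mutates camera state (stand-in for pan/tilt/zoom semantics)."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, camera, event):
        self.fn(camera, event)
        return Status.SUCCESS

class Camera:
    # Hypothetical minimal camera state for the sketch
    def __init__(self):
        self.shot = None
        self.zoom = 1.0

def close_up(cam, ev):
    cam.shot = "close-up"
    cam.zoom = 3.0

def establish(cam, ev):
    cam.shot = "establishing"
    cam.zoom = 1.0

# A camera "style" as a tree: prefer a close-up on dialogue events,
# otherwise fall back to an establishing shot.
camera_style = Selector(
    Sequence(Condition(lambda ev: ev["type"] == "dialogue"),
             Action(close_up)),
    Action(establish),
)

cam = Camera()
camera_style.tick(cam, {"type": "dialogue"})  # sets cam.shot to "close-up"
```

Because styles are just sub-trees, swapping the fallback `Action` or adding another `Sequence` branch changes the camera's behavior without touching the other branches, which is the reuse property the abstract emphasizes.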


Keywords: intelligent cameras, behavior trees, camera control, cinematography, smart events




References

  1. Amerson, D., Kime, S., Young, R.M.: Real-time cinematic camera control for interactive narratives. In: Proceedings of the 2005 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology, ACE 2005, p. 369. ACM, New York (2005)
  2. Arijon, D.: Grammar of the Film Language. Communication Arts Books, Hastings House Publishers (1976)
  3. Bares, W., McDermott, S., Boudreaux, C., Thainimit, S.: Virtual 3D camera composition from frame constraints. In: Proceedings of the Eighth ACM International Conference on Multimedia, MULTIMEDIA 2000, pp. 177–186. ACM, New York (2000)
  4. Bares, W.H., Grégoire, J.P., Lester, J.C.: Realtime constraint-based cinematography for complex interactive 3D worlds. In: Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence, AAAI 1998/IAAI 1998, pp. 1101–1106. American Association for Artificial Intelligence, Menlo Park (1998)
  5. Hecker, C., McHugh, L., Dyckhoff, M.: Three approaches to Halo-style behavior tree AI. In: Game Developers Conference (2007)
  6. Christianson, D.B., Anderson, S.E., He, L.-w., Salesin, D.H., Weld, D.S., Cohen, M.F.: Declarative camera control for automatic cinematography. In: Proceedings of the Thirteenth National Conference on Artificial Intelligence, AAAI 1996, vol. 1, pp. 148–155. AAAI Press (1996)
  7. Christie, M., Machap, R., Normand, J.-M., Olivier, P., Pickering, J.H.: Virtual Camera Planning: A Survey. In: Butz, A., Fisher, B., Krüger, A., Olivier, P. (eds.) SG 2005. LNCS, vol. 3638, pp. 40–52. Springer, Heidelberg (2005)
  8. Christie, M., Olivier, P.: Camera control in computer graphics: models, techniques and applications. In: ACM SIGGRAPH ASIA 2009 Courses, SIGGRAPH ASIA 2009, pp. 3:1–3:197. ACM, New York (2009)
  9. Drucker, S.M., Galyean, T.A., Zeltzer, D.: Cinema: a system for procedural camera movements. In: Proceedings of the 1992 Symposium on Interactive 3D Graphics, I3D 1992, pp. 67–70. ACM, New York (1992)
  10. Drucker, S.M., Zeltzer, D.: CamDroid: a system for implementing intelligent camera control. In: Proceedings of the 1995 Symposium on Interactive 3D Graphics, I3D 1995, pp. 139–144. ACM, New York (1995)
  11. Elson, D.K., Riedl, M.O.: A lightweight intelligent virtual cinematography system for machinima production. In: AIIDE, pp. 8–13 (2007)
  12. Friedman, D., Feldman, Y.A.: Automated cinematic reasoning about camera behavior. Expert Systems with Applications 30, 694–704 (2006)
  13. Friedman, D.A., Feldman, Y.A.: Knowledge-based cinematography and its applications. In: ECAI, pp. 256–262 (2004)
  14. Halper, N., Helbing, R., Strothotte, T.: A camera engine for computer games: Managing the trade-off between constraint satisfaction and frame coherence. Computer Graphics Forum 20(3), 174–183 (2001)
  15. He, L.-w., Cohen, M.F., Salesin, D.H.: The virtual cinematographer: a paradigm for automatic real-time camera control and directing. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1996, pp. 217–224. ACM, New York (1996)
  16. Isla, D.: Handling complexity in the Halo 2 AI. In: Game Developers Conference (2005)
  17. Isla, D.: Halo 3 – building a better battle. In: Game Developers Conference (2008)
  18. Jhala, A.: Cinematic Discourse Generation. Ph.D. thesis, North Carolina State University (2009)
  19. Li, T.-Y., Cheng, C.-C.: Real-Time Camera Planning for Navigation in Virtual Environments. In: Butz, A., Fisher, B., Krüger, A., Olivier, P., Christie, M. (eds.) SG 2008. LNCS, vol. 5166, pp. 118–129. Springer, Heidelberg (2008)
  20. Lino, C., Christie, M., Lamarche, F., Schofield, G., Olivier, P.: A real-time cinematography system for interactive 3D environments. In: Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA 2010, pp. 139–148. Eurographics Association, Aire-la-Ville (2010)
  21. Lukas, C.: Directing for Film and Television. Anchor Press/Doubleday (1985)
  22. Oskam, T., Sumner, R.W., Thuerey, N., Gross, M.: Visibility transition planning for dynamic camera control. In: Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA 2009, pp. 55–65. ACM, New York (2009)
  23. Pereira, F., Gelatti, G., Raupp Musse, S.: Intelligent virtual environment and camera control in behavioural simulation. In: Proceedings of the XV Brazilian Symposium on Computer Graphics and Image Processing, pp. 365–372 (2002)
  24. Stocker, C., Sun, L., Huang, P., Qin, W., Allbeck, J.M., Badler, N.I.: Smart Events and Primed Agents. In: Safonova, A. (ed.) IVA 2010. LNCS, vol. 6356, pp. 15–27. Springer, Heidelberg (2010)
  25. Young, R.M.: Story and discourse: A bipartite model of narrative generation in virtual worlds. Interaction Studies (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  1. Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
