
Journal of Real-Time Image Processing, Volume 5, Issue 2, pp 91–107

IMMIView: a multi-user solution for design review in real-time

  • Ricardo Jota
  • Bruno R. de Araújo
  • Luís C. Bruno
  • João M. Pereira
  • Joaquim A. Jorge
Special Issue

Abstract

IMMIView is an interactive system that relies on multiple modalities and multi-user interaction to support collaborative design review. It was designed to offer natural interaction in visualization setups such as large-scale displays, head-mounted displays or Tablet PCs. To support architectural design, our system provides content creation and manipulation, 3D scene navigation and annotations. Users can interact with the system using laser pointers, speech commands, body gestures and mobile devices. In this paper, we describe how we designed the system to meet architectural user requirements. In particular, it takes advantage of multiple modalities to provide natural interaction for design review. We also propose a new graphical user interface adapted to architectural tasks such as navigation and annotation. The interface relies on a novel stroke-based interaction technique supported by simple laser pointers as input devices for large-scale displays. Furthermore, input modalities such as speech and body tracking allow IMMIView to support multiple users and let each user choose the modality best suited to their preference and to the task at hand. We present a multi-modal fusion system developed to support multi-modal commands in a collaborative, co-located environment, i.e. with two or more users interacting at the same time on the same system. The fusion system listens to inputs from all IMMIView modules in order to model user actions and issue commands; the modalities are fused by a simple rule-based sub-module developed for IMMIView and presented in this paper. Finally, we report a user evaluation of IMMIView. The results show that users feel comfortable with the system and suggest that they prefer the multi-modal approach over more conventional interactions, such as mouse and menus, for the architectural tasks presented.
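
The rule-based fusion described above is not published as code in the paper; the following is a minimal, hypothetical sketch (in Python, with invented names such as FusionEngine and Rule) of how events from different modalities, each tagged with a user and a timestamp, could be combined into a single command when they co-occur within a short time window.

```python
# Hypothetical sketch (not the authors' code) of rule-based multi-modal fusion:
# events from each modality carry a user id, a timestamp and a payload, and a rule
# fires when all the events it needs from one user fall inside a short time window.
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class Event:
    user: str           # which co-located user produced the event
    modality: str       # e.g. "speech", "laser", "gesture"
    name: str           # e.g. "create", "point", "circle-stroke"
    time: float         # seconds since start of session
    data: dict = field(default_factory=dict)

@dataclass
class Rule:
    needed: List[Tuple[str, str]]          # (modality, name) pairs that must co-occur
    action: Callable[[List[Event]], dict]  # builds the fused command
    window: float = 1.5                    # max spread, in seconds, between the events

class FusionEngine:
    def __init__(self, rules: List[Rule]):
        self.rules = rules
        self.buffer: List[Event] = []

    def push(self, event: Event) -> Optional[dict]:
        """Add one event; return a fused command if some rule is now satisfied."""
        self.buffer.append(event)
        # Forget events too old to combine with the one that just arrived.
        self.buffer = [e for e in self.buffer if event.time - e.time <= 5.0]
        for rule in self.rules:
            matched = []
            for modality, name in rule.needed:
                hits = [e for e in self.buffer
                        if e.user == event.user and e.modality == modality and e.name == name]
                if not hits:
                    break
                matched.append(max(hits, key=lambda e: e.time))  # most recent candidate
            else:
                if max(e.time for e in matched) - min(e.time for e in matched) <= rule.window:
                    for e in matched:          # consume the events so they are not reused
                        self.buffer.remove(e)
                    return rule.action(matched)
        return None

# Example rule: an object is created where the laser points while the user says "create ...".
def create_here(events: List[Event]) -> dict:
    speech = next(e for e in events if e.modality == "speech")
    laser = next(e for e in events if e.modality == "laser")
    return {"command": "create", "what": speech.data.get("object"),
            "where": laser.data.get("position"), "user": speech.user}

engine = FusionEngine([Rule(needed=[("speech", "create"), ("laser", "point")],
                            action=create_here)])
```

For example, a spoken "create column" recognized at t = 10.2 s and a laser "point" event at t = 10.6 s from the same user would satisfy the rule above and produce one fused "create" command carrying both the object name and the pointed location.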

Keywords

Mixed reality · Design review · Human–computer interaction · Real-time collaborative interaction · Virtual reality

Acknowledgments

Ricardo Jota was supported by the Portuguese Foundation for Science and Technology, grant reference SFRH/BD/17574/2004. Bruno Araújo was supported by the Portuguese Foundation for Science and Technology, grant reference SFRH/BD/31020/2006. We would like to thank José Pedro Dias for his work on IMMIView.

Copyright information

© Springer-Verlag 2009

Authors and Affiliations

  • Ricardo Jota (1), corresponding author
  • Bruno R. de Araújo (1)
  • Luís C. Bruno (1)
  • João M. Pereira (1)
  • Joaquim A. Jorge (1)

  1. VIMMI Group, Department of Computer Science and Engineering, INESC-ID, IST/Technical University of Lisbon, Lisboa, Portugal
