The Visual Computer, Volume 25, Issue 1, pp 25–37

Mobile phone-based mixed reality: the Snap2Play game

  • Tat-Jun Chin
  • Yilun You
  • Celine Coutrix
  • Joo-Hwee Lim
  • Jean-Pierre Chevallet
  • Laurence Nigay
Original Article

Abstract

The ubiquity of camera phones provides a convenient platform for developing immersive mixed-reality games. In this paper we introduce such a game, loosely based on the popular card game “Memory”, in which players match pairs of identical cards among a set of overturned cards by revealing only two cards at a time. In our game, players are asked to match a “digital card”, which corresponds to a scene in a virtual world, to a “physical card”, which is an image of a scene in the real world. The objective is to convey a mixed-reality sensation. Cards are matched by a scene identification engine consisting of multiple classifiers trained on previously collected images. We present the overall game design, along with implementation details and results. We also describe how we constructed our scene identification engine and evaluate its performance. Finally, we present an analysis of player surveys to gauge potential market acceptance.
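The scene identification step described above (matching a player's photo of a "physical card" against scenes learned from previously collected images) can be illustrated with a minimal sketch. This is not the paper's actual engine, which uses multiple trained classifiers over real image descriptors such as SIFT or SURF; the toy 2-D feature vectors, scene names, and voting threshold below are purely illustrative assumptions.

```python
# Minimal sketch of scene identification by nearest-feature voting:
# each known scene is represented by feature vectors extracted from
# previously collected images, and a query photo is assigned to the
# scene that attracts the most feature matches.
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify_scene(query_features, scene_database, threshold=1.0):
    """Vote each query feature for its nearest database scene.

    scene_database maps scene name -> list of feature vectors;
    votes closer than `threshold` count, and the scene with the
    most votes wins.
    """
    votes = {name: 0 for name in scene_database}
    for qf in query_features:
        best_name, best_dist = None, float("inf")
        for name, feats in scene_database.items():
            for f in feats:
                d = euclidean(qf, f)
                if d < best_dist:
                    best_name, best_dist = name, d
        if best_name is not None and best_dist <= threshold:
            votes[best_name] += 1
    return max(votes, key=votes.get)

# Toy database: two hypothetical scenes with 2-D "descriptors".
db = {
    "fountain": [(0.0, 0.0), (0.1, 0.2)],
    "clock_tower": [(5.0, 5.0), (5.2, 4.9)],
}
query = [(0.05, 0.1), (5.1, 5.0), (0.0, 0.15)]
print(identify_scene(query, db))  # prints "fountain"
```

In a real deployment the features would be high-dimensional local descriptors and the per-scene models would be the trained classifiers the paper describes, but the vote-and-threshold structure conveys the basic matching idea.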

Keywords

Mixed reality · Scene identification · Memory game · Mobile phone



Copyright information

© Springer-Verlag 2008

Authors and Affiliations

  • Tat-Jun Chin (1)
  • Yilun You (1)
  • Celine Coutrix (2)
  • Joo-Hwee Lim (1)
  • Jean-Pierre Chevallet (1)
  • Laurence Nigay (2)

  1. Image Perception, Access and Language Lab, Institute for Infocomm Research, Singapore, Singapore
  2. Laboratoire d’Informatique de Grenoble, Université Joseph Fourier, Grenoble cedex 9, France
