Multimedia Tools and Applications, Volume 67, Issue 2, pp 473–495

Augmented interaction with physical books in an Ambient Intelligence learning environment

  • George Margetis
  • Xenophon Zabulis
  • Panagiotis Koutlemanis
  • Margherita Antona
  • Constantine Stephanidis

Abstract

This paper presents an augmented reality environment for improved student learning, based on unobtrusive monitoring of the natural reading and writing process. This environment, named SESIL, recognises book pages and specific elements of interest within a page, and perceives interaction with actual books and pens/pencils, without requiring any special interaction device. As a result, unobtrusive, context-aware student assistance can be provided. In this way, the learning process can be enhanced during reading through the retrieval and presentation of related material and, during writing, through assistance in accomplishing writing tasks whenever appropriate. The SESIL environment is evaluated in terms of robustness, accuracy and usability.
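SESIL's page recognition rests on matching local image features against a database of known pages; the cited work on scale-invariant keypoints [28] indicates the flavour of approach. As an illustrative sketch only (the function names and toy 2-D "descriptors" below are invented for the example, not taken from SESIL), identifying a page by nearest-neighbour descriptor matching with Lowe's ratio test can be outlined as:

```python
import math

def match_descriptors(query, page_descriptors, ratio=0.75):
    """Count query descriptors that match a stored page's descriptors.

    Uses Lowe's ratio test: a match is accepted only when the nearest
    neighbour is sufficiently closer than the second-nearest, which
    rejects ambiguous correspondences.
    """
    matches = 0
    for q in query:
        best, second = float("inf"), float("inf")
        for p in page_descriptors:
            d = math.dist(q, p)  # Euclidean distance in descriptor space
            if d < best:
                best, second = d, best
            elif d < second:
                second = d
        if best < ratio * second:
            matches += 1
    return matches

def recognise_page(query, pages):
    """Return the index of the stored page with the most accepted matches."""
    scores = [match_descriptors(query, p) for p in pages]
    return max(range(len(pages)), key=scores.__getitem__)
```

In a real system the descriptors would be high-dimensional keypoint descriptors and the linear scan would be replaced by an approximate nearest-neighbour index, but the ratio-test decision rule is the same.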

Keywords

Augmented reality · Augmented books · Page recognition · Gesture recognition · Pose estimation · Handwriting · Ambient intelligence


Acknowledgments

This work is supported by the Foundation for Research and Technology Hellas – Institute of Computer Science (FORTH – ICS) internal RTD Programme 'Ambient Intelligence and Smart Environments' [15]. The authors would like to thank Mrs. Stavroula Ntoa for her contribution to the usability evaluation of SESIL.

References

  1. Aarts E, Encarnacao JL (2008) True visions: the emergence of Ambient Intelligence. Springer. ISBN 978-3-540-28972-2
  2. Abrami PC, Bernard RM, Wade CA, Schmid RF, Borokhovski E, Tamim R, Surkes M, Lowerison G, Zhang D, Nicolaidou I, Newman S, Wozney L, Peretiatkowicz A (2006) A review of e-learning in Canada: a rough sketch of the evidence, gaps and promising directions. Canadian Journal of Learning and Technology 32(3). http://www.cjlt.ca/index.php/cjlt/article/view/27/25. Accessed 16 February 2011
  3. Anoto (2002) Development guide for service enabled by Anoto functionality
  4. Antona M, Margetis G, Ntoa S, Leonidis A, Korozi M, Paparoulis G, Stephanidis C (2010) Ambient Intelligence in the classroom: an augmented school desk. In: Karwowski W, Salvendy G (eds) Proceedings of the 2010 AHFE International Conference (3rd International Conference on Applied Human Factors and Ergonomics), Miami, Florida, USA, 17-20 July
  5. Artifex Software, Inc. MuPDF: a lightweight PDF and XPS viewer. http://www.mupdf.com/. Accessed 16 February 2011
  6. Ayache N, Lustman F (1987) Fast and reliable passive trinocular stereovision. ICCV, pp 422–427
  7. Baillard C, Schmid C, Zisserman A, Fitzgibbon A (1999) Automatic line matching and 3D reconstruction of buildings from multiple views. ISPRS Conference on Automatic Extraction of GIS Objects from Digital Imagery
  8. Billinghurst M, Kato H, Poupyrev I (2001) The MagicBook: moving seamlessly between reality and virtuality. IEEE Computer Graphics, pp 6–8
  9. Bookstein FL (1991) Morphometric tools for landmark data. Cambridge University Press
  10. Cook DJ, Augusto JC, Jakkula VR (2009) Ambient intelligence: technologies, applications, and opportunities. Pervasive Mobile Comput 5(4):277–298
  11. Cook DJ, Das SK (2007) How smart are our environments? An updated look at the state of the art. J Pervasive Mobile Comput 3(2):53–73
  12. Duda RO, Hart PE (1972) Use of the Hough transformation to detect lines and curves in pictures. Comm ACM 15:11–15
  13. Fischler MA, Bolles RC (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Comm ACM 24:381–395
  14. Forsberg AS, LaViola JJ Jr, Zeleznik RC (1998) ErgoDesk: a framework for two- and three-dimensional interaction at the ActiveDesk. Second International Immersive Projection Technology Workshop, pp 11–12
  15. FORTH-ICS Ambient Intelligence Programme. http://www.ics.forth.gr/ami. Accessed 16 February 2011
  16. Gelb A (1974) Applied optimal estimation. MIT Press
  17. Grasset R, Duenser A, Seichter H, Billinghurst M (2007) The mixed reality book: a new multimedia reading experience. CHI '07 extended abstracts on human factors in computing systems, pp 1953–1958
  18. Hartley RI, Zisserman A (2004) Multiple view geometry in computer vision. Cambridge University Press
  19. Hile H, Kim J, Borriello G (2004) Microbiology tray and pipette tracking as a proactive tangible user interface. Pervasive Computing, pp 323–339
  20. IEEE LOM (2002) Draft standard for learning object metadata. IEEE Learning Technology Standards Committee, IEEE 1484.12.1
  21. Ishii H, Ullmer B (1997) Tangible bits: towards seamless interfaces between people, bits and atoms. CHI, pp 234–241
  22. ISO FDIS 9241-210:2009 Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems (formerly known as 13407). International Organization for Standardization (ISO), Switzerland
  23. Kobayashi M, Koike H (1998) EnhancedDesk: integrating paper documents and digital documents. Asia Pacific Computer Human Interaction (APCHI '98). IEEE CS, pp 57–62
  24. Law E, Roto V, Hassenzahl M, Vermeeren A, Kort J (2009) Understanding, scoping and defining user experience: a survey approach. Proceedings of the Human Factors in Computing Systems conference, CHI '09, April 4-9, 2009, Boston, MA, USA
  25. Leonidis A, Margetis G, Antona M, Stephanidis C (2011) ClassMATE: enabling ambient intelligence in the classroom. World Academy of Science, Engineering and Technology 66:594–598. http://www.waset.org/journals/waset/v66/v66-96.pdf. Accessed 16 February 2011
  26. Liao C, Guimbretière F, Hinckley K, Hollan J (2008) PapierCraft: a gesture-based command system for interactive paper. ACM Transactions on Computer-Human Interaction 14(4), Article 18, 27 pages
  27. Lowe DG (1987) 3D object recognition from single 2D images. Artif Intell 3:355–397
  28. Lowe DG (2004) Distinctive image features from scale-invariant keypoints. Int J Comput Vis 60(2):91–110
  29. Luff P, Heath C, Norrie M, Signer B, Herdman P (2004) Only touching the surface: creating affinities between digital content and paper. In: Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work (CSCW '04). ACM, New York, NY, USA, pp 523–532
  30. Maierhofer Software. NHunspell: C#/.NET free spell checker, hyphenator, thesaurus. Hunspell spelling corrector and hyphenation for C# and Visual Basic. http://nhunspell.sourceforge.net/. Accessed 16 February 2011
  31. Mangen A, Velay J-L (2009) Digitizing literacy: reflections on the haptics of writing. In: Zadeh MH (ed) Advances in Haptics, pp 385–401
  32. Martinec D, Pajdla T (2003) Line reconstruction from many perspective images by factorization. CVPR, pp 497–502
  33. Michel D, Argyros AA, Grammenos D, Zabulis X, Sarmis T (2009) Building a multi-touch display based on computer vision techniques. IAPR Conference on Machine Vision Applications, May 20-22, Hiyoshi Campus, Keio University, Japan
  34. Microsoft. Ink Analysis overview (Windows). http://msdn.microsoft.com/en-us/library/ms704040(v=VS.85).aspx. Accessed 16 February 2011
  35. Moons T, Frere D, Vandekerckhove J, Van Gool L (1998) Automatic modeling and 3D reconstruction of urban house roofs from high resolution aerial imagery. ECCV, pp 410–425
  36. Nickel K, Stiefelhagen R (2007) Visual recognition of pointing gestures for human-robot interaction. Image Vision Comput 25:1875–1884
  37. Nielsen J, Mack RL (1994) Usability inspection methods. John Wiley & Sons, New York, pp 25–61
  38. OpenOffice Wiki, Dictionaries. http://wiki.services.openoffice.org/wiki/Dictionaries. Accessed 16 February 2011
  39. Quan L, Kanade T (1997) Affine structure from line correspondences with uncalibrated affine cameras. PAMI 19:834–845
  40. Quesenbery W (2003) The five dimensions of usability. In: Albers MJ, Mazur B (eds) Content and Complexity. Routledge, pp 75–94
  41. Schmalstieg D, Fuhrmann A, Hesina G, Szalavári Z, Encarnação LM, Gervautz M, Purgathofer W (2002) The Studierstube augmented reality project. Presence: Teleoperators and Virtual Environments 11:33–54
  42. Shi Y, Xie W, Xu G, Shi R, Chen E, Mao Y, Liu F (2003) The smart classroom: merging technologies for seamless tele-education. IEEE Pervasive Computing (April–June), pp 47–55
  43. Simon A, Dressler A, Kruger H, Scholz S, Wind J (2005) Interaction and co-located collaboration in large projection-based virtual environments. IFIP Conference on Human-Computer Interaction, pp 364–376
  44. Smith R, Chang S (1996) VisualSEEk: a fully automated content-based image query system. ACM Multimedia, pp 87–89
  45. Torr PHS, Zisserman A (1997) Robust parameterization and computation of the trifocal tensor. Image Vision Comput 15:591–605
  46. Wellner P (1993) Interacting with paper on the DigitalDesk. Commun ACM 36(7):87–96
  47. Woo D, Park D, Han S (2009) Extraction of 3D line segment using disparity map. Digital Image Processing, pp 127–131
  48. Zabulis X, Koutlemanis P, Baltzakis H, Grammenos D (2011) Multiview 3D pose estimation of a wand for human-computer interaction. International Symposium on Visual Computing, September 26-28, Las Vegas, Nevada, USA
  49. Zabulis X, Sarmis T, Tzevanidis K, Koutlemanis P, Grammenos D, Argyros AA (2010) A platform for monitoring aspects of human presence in real-time. International Symposium on Visual Computing, Las Vegas, Nevada, USA, November 29 - December 1

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • George Margetis (1)
  • Xenophon Zabulis (1)
  • Panagiotis Koutlemanis (1)
  • Margherita Antona (1)
  • Constantine Stephanidis (1, 2)

  1. Foundation for Research and Technology – Hellas (FORTH), Institute of Computer Science, Heraklion, Greece
  2. Department of Computer Science, University of Crete, Heraklion, Greece
