Abstract
Learning activities need not be confined to traditional physical classrooms; they can also take place in virtual environments. The authors therefore propose a novel augmented reality system for organizing a class that supports real-time collaboration and active interaction between educators and learners. A pre-processing phase is integrated into a visual search engine, the heart of the system, to recognize printed materials with low computational cost and high accuracy. The authors also develop a simple yet efficient visual saliency estimation technique based on regional contrast to quickly filter out low-informative regions in printed materials. This technique not only reduces the unnecessary computational cost of keypoint descriptors but also increases the robustness and accuracy of visual object recognition. Experimental results show that the whole visual object recognition process can be sped up 19 times and that accuracy can increase by up to 22%. Furthermore, this pre-processing stage is independent of the choice of features and matching model in a general recognition pipeline, so it can be used to boost the performance of existing systems to real-time operation.
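The full chapter is not included here, so the following is only a minimal sketch of the general idea of regional-contrast saliency filtering, not the authors' actual method: score each image region by how strongly its mean intensity contrasts with the global mean, then keep only regions above a threshold for keypoint extraction. The function names, the block-based partitioning, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def regional_contrast_saliency(gray, block=16):
    # Partition the image into block x block regions and score each region
    # by the absolute contrast of its mean intensity against the global
    # mean, then normalize scores to [0, 1].
    h, w = gray.shape
    h2, w2 = h - h % block, w - w % block
    regions = gray[:h2, :w2].astype(float).reshape(
        h2 // block, block, w2 // block, block)
    means = regions.mean(axis=(1, 3))
    sal = np.abs(means - gray.astype(float).mean())
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else np.zeros_like(sal)

def informative_mask(gray, block=16, thresh=0.2):
    # Regions scoring below the saliency threshold are treated as
    # low-informative and can be skipped when computing keypoint
    # descriptors, saving matching time downstream.
    return regional_contrast_saliency(gray, block) >= thresh
```

In a pipeline like the one the abstract describes, such a mask would restrict a feature detector (e.g. SIFT, SURF, or BRISK, as cited by the paper) to the retained regions, which is why the pre-processing stage is independent of the particular feature and matching model chosen.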
© 2014 Springer International Publishing Switzerland
Cite this paper
Le, TN., Le, YT., Tran, MT. (2014). Applying Saliency-Based Region of Interest Detection in Developing a Collaborative Active Learning System with Augmented Reality. In: Shumaker, R., Lackey, S. (eds) Virtual, Augmented and Mixed Reality. Applications of Virtual and Augmented Reality. VAMR 2014. Lecture Notes in Computer Science, vol 8526. Springer, Cham. https://doi.org/10.1007/978-3-319-07464-1_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-07463-4
Online ISBN: 978-3-319-07464-1