Mo Músaem Fíorúil: A Web-Based Search and Information Service for Museum Visitors

  • Michael Blighe
  • Sorin Sav
  • Hyowon Lee
  • Noel E. O’Connor
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5112)

Abstract

We describe the prototype of an interactive, web-based museum artifact search and information service. Mo Músaem Fíorúil clusters and indexes images of museum artifacts taken by visitors to the museum, where the images are captured using a passive capture device such as Microsoft’s SenseCam [1]. The system also matches clustered artifacts to images of the same artifact from the museum’s official photo collection, and allows the user to view images of the same artifact taken by other visitors. This matching process potentially allows the system to provide more detailed information about a particular artifact based on the user’s inferred preferences, thereby greatly enhancing the overall museum experience. In this work, we introduce the system and describe, in broad terms, its overall functionality and use. Using different image sets of artificial museum objects, we also describe experiments and results carried out on the artifact matching component of the system.
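The keyword list and references ([17], [20]) suggest the artifact matching is built on local image features in the style of Lowe's SIFT, where descriptors from a visitor photo are matched to descriptors from catalog images using a nearest-neighbour ratio test. As an illustrative sketch only (not the authors' implementation), and using synthetic NumPy descriptors in place of real SIFT output, the ratio test can be written as:

```python
import numpy as np

def match_descriptors(query, candidates, ratio=0.8):
    """Match local feature descriptors with Lowe's ratio test.

    query: (n, d) array of descriptors from the visitor photo.
    candidates: (m, d) array of descriptors from a catalog image.
    Returns (query_index, candidate_index) pairs whose nearest
    neighbour is clearly closer than the second nearest.
    """
    matches = []
    for i, q in enumerate(query):
        # Euclidean distance from this descriptor to every candidate.
        dists = np.linalg.norm(candidates - q, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        # Accept only unambiguous matches (ratio test, Lowe 2004).
        if nearest < ratio * second:
            matches.append((i, int(order[0])))
    return matches

# Toy example: query descriptor 0 is a near-duplicate of candidate 2.
rng = np.random.default_rng(0)
cand = rng.normal(size=(5, 8))
quer = np.vstack([cand[2] + 0.01, rng.normal(size=(1, 8))])
print(match_descriptors(quer, cand))  # the planted pair (0, 2) is recovered
```

A real system would extract the descriptors with a detector such as SIFT and typically accelerate the nearest-neighbour search with an approximate index (cf. the best-bin-first search of [20]), then verify candidate matches geometrically, e.g. with a Hough-transform pose vote [21].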

Keywords

Mobile Phone · Model Image · Hough Transform · Background Clutter · Photo Collection
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Gemmell, J., Williams, L., Wood, K., Lueder, R., Bell, G.: Passive capture and ensuing issues for a personal lifetime store (October 2004)
  2. Proctor, N., Tellis, C.: The state of the art in museum handhelds in 2003. In: Museums and the Web (2003)
  3. Aroyo, L., Wang, Y., Brussee, R., Gorgels, P., Rutledge, L., Stash, N.: Personalized museum experience - the Rijksmuseum use case. In: Museums and the Web (April 2007)
  4. Fockler, P., Zeidler, T., Brombach, B., Bruns, E., Bimber, O.: PhoneGuide - museum guidance supported by on-device object recognition on mobile phones. Technical Report 54.74, Bauhaus-University Weimar, Weimar, Germany (2005)
  5. Bay, H., Fasel, B., Van Gool, L.: Interactive museum guide - fast and robust recognition of museum objects. In: First International Workshop on Mobile Vision (2006)
  6. Oppermann, R., Specht, M.: Adaptive mobile museum guide for information and learning on demand. In: 8th International Conference on Human-Computer Interaction, vol. 2, pp. 642–646 (1999)
  7. Thrun, S., Beetz, M., Bennewitz, M., Burgard, W., Cremers, A., Dellaert, F., Fox, D., Hahnel, D., Rosenberg, C., Roy, N., Schulte, J., Schulz, D.: Probabilistic algorithms and the interactive museum tour-guide robot Minerva. International Journal of Robotics Research 19(11), 972–999 (2000)
  8. Semper, R., Spasojevic, M.: The Electronic Guidebook - using portable devices and a wireless web-based network to extend the museum experience. In: Museums and the Web (April 2002)
  9. Hsi, S.: The Electronic Guidebook - a study of user experiences using mobile web content in a museum setting. In: International Workshop on Wireless and Mobile Technologies in Education (August 2002)
  10. Healey, J., Picard, R.: StartleCam: a cybernetic wearable camera (October 1998)
  11. Gemmell, J., Aris, A., Lueder, R.: Telling stories with MyLifeBits. In: ICME (July 2005)
  12. Blighe, M., Le Borgne, H., O’Connor, N.: Exploiting context information to aid landmark detection in SenseCam images. In: 2nd International Workshop on Exploiting Context Histories in Smart Environments - Infrastructures and Design (ECHISE) (September 2006)
  13. Bush, V.: As we may think. The Atlantic Monthly (July 1945)
  14. Joki, A., Burke, J., Estrin, D.: Campaignr - a framework for participatory data collection on mobile phones. Technical Report 770, Centre for Embedded Network Sensing, University of California, Los Angeles (October 2007)
  15. Schmid, C., Mohr, R.: Local grayvalue invariants for image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(5), 530–535 (1997)
  16. Baumberg, A.: Reliable feature matching across widely separated views. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 774–781 (2000)
  17. Lowe, D.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004)
  18. Schmid, C., Mohr, R., Bauckhage, C.: Evaluation of interest point detectors. International Journal of Computer Vision 37(2), 151–172 (2000)
  19. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(10), 1615–1630 (2005)
  20. Beis, J., Lowe, D.: Shape indexing using approximate nearest-neighbour search in high-dimensional spaces. In: Conference on Computer Vision and Pattern Recognition, pp. 1000–1006 (1997)
  21. Ballard, D.: Generalizing the Hough transform to detect arbitrary patterns. Pattern Recognition 13(2), 111–122 (1981)

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Michael Blighe (1)
  • Sorin Sav (1)
  • Hyowon Lee (1)
  • Noel E. O’Connor (1)

  1. Centre for Digital Video Processing, Adaptive Information Cluster, Dublin City University, Ireland
