Omero 2.0

  • Matteo Palieri
  • Cataldo Guaragnella
  • Giovanni Attolico
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10850)


The OMERO 2.0 system (Organized Multimodal Exploration of Relevant Virtual Objects) is an innovative system that enables visually impaired users to explore and edit 3D virtual models. It involves three interaction modalities: visual, haptic and auditory. Virtual models are properly designed to convey the information of interest in a polymorphous and redundant way: the user can therefore choose the sensorial modalities best suited to his/her characteristics, accounting for specific limitations and/or impairments. Virtual models are specially organized to help visually impaired people build an integrated mental scheme of complex realities (cultural heritage objects and sites, large buildings, abstract concepts in fields such as geometry or chemistry, etc.): a challenging task when using a serial sense such as touch. Different semantic layers of the scene (scenarios) convey logically different views of the scene at hand and can be selected separately or in combination depending on the user’s needs: this prevents users from being overwhelmed by too many simultaneous details. The software tools used in this new version of OMERO increase the generality of the system and support a larger number of haptic devices. Moreover, the completely new Interactive Haptic Editor of OMERO offers an innovative haptic interface: the haptic properties of the virtual models can be edited even without using the GUI. This redundant combination of vision and touch improves efficiency for sighted people and enables visually impaired users (who cannot use a GUI) to autonomously modify the rendering of virtual scenes. This results in their active involvement even in the design phase, improving their ability to match the rendering with their specific and individual needs.
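The layered-scene idea described above (semantic layers, or "scenarios", that can be activated separately or in combination so the user is never overwhelmed) can be sketched in a few lines. This is a minimal illustrative sketch, not OMERO's actual API: the `Scenario` and `Scene` names and their methods are hypothetical and introduced here only to make the selection mechanism concrete.

```python
# Hypothetical sketch of scenario-based scene organization: each semantic
# layer ("scenario") groups related objects, and the user activates only
# the layers of interest, alone or combined. Names are illustrative, not
# taken from the OMERO system.
from dataclasses import dataclass, field


@dataclass
class Scenario:
    """One semantic layer of the scene (e.g. structure, furniture, labels)."""
    name: str
    objects: list


@dataclass
class Scene:
    scenarios: dict = field(default_factory=dict)
    active: set = field(default_factory=set)

    def add(self, scenario: Scenario) -> None:
        self.scenarios[scenario.name] = scenario

    def select(self, *names: str) -> None:
        """Activate only the named scenarios; all others are hidden."""
        self.active = {n for n in names if n in self.scenarios}

    def visible_objects(self) -> list:
        """Objects to render across all currently active scenarios."""
        return [obj
                for name in sorted(self.active)
                for obj in self.scenarios[name].objects]


scene = Scene()
scene.add(Scenario("structure", ["walls", "floor"]))
scene.add(Scenario("furniture", ["table", "chair"]))

scene.select("structure")               # explore one layer at a time...
print(scene.visible_objects())          # ['floor', 'walls'] order by layer

scene.select("structure", "furniture")  # ...or combine layers on demand
print(scene.visible_objects())
```

The point of the design is that rendering (visual, haptic or auditory) always queries `visible_objects()`, so switching the active layer set immediately restricts what every modality presents to the user.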


Keywords: Virtual reality · Haptic user interface · Human computer interaction · Visual impairment · Scene cognition and understanding



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Matteo Palieri (1, 2)
  • Cataldo Guaragnella (1, 2)
  • Giovanni Attolico (1, 2)
  1. Institute of Intelligent Systems for Automation, National Research Council of Italy, Bari, Italy
  2. Department of Electrical and Information Engineering, Polytechnic University of Bari, Bari, Italy
