Experiences with a Barista Robot, FusionBot

  • Dilip Kumar Limbu
  • Yeow Kee Tan
  • Chern Yuen Wong
  • Ridong Jiang
  • Hengxin Wu
  • Liyuan Li
  • Eng Hoe Kah
  • Xinguo Yu
  • Dong Li
  • Haizhou Li
Part of the Communications in Computer and Information Science book series (CCIS, volume 44)

Abstract

In this paper, we describe an implemented service robot called FusionBot. The goal of this research is to explore and demonstrate the utility of an interactive service robot in a smart home environment, thereby improving the quality of human life. The robot has four main features: 1) speech recognition, 2) object recognition, 3) object grabbing and fetching, and 4) communication with a smart coffee machine. Its software architecture employs a multimodal dialogue system that integrates several components, including a spoken dialogue system, vision understanding, navigation, and a smart device gateway. In experiments conducted during the TechFest 2008 event, FusionBot successfully demonstrated that it could autonomously serve coffee to visitors on request. Preliminary survey results indicate that the robot has the potential not only to aid general robotics research but also to contribute towards the long-term goal of intelligent service robotics in smart home environments.
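As a rough illustration of the kind of event-driven integration the abstract describes, a dialogue manager could route events between the speech, vision, navigation and coffee-machine-gateway components as in the Python sketch below. This is not FusionBot's actual code; all class, event and action names here are hypothetical.

from dataclasses import dataclass
from queue import Queue


@dataclass
class Event:
    source: str    # component that raised the event, e.g. "speech", "vision", "gateway"
    name: str      # event type, e.g. "coffee_request", "cup_detected", "brew_done"
    payload: dict  # free-form event data, e.g. {"drink": "espresso"}


class DialogueManager:
    # Event-driven hub: components post Events, and the manager maps each
    # event to the robot's next action (navigate, grab, serve, or clarify).
    def __init__(self):
        self.events = Queue()

    def post(self, event: Event) -> None:
        self.events.put(event)

    def next_action(self) -> str:
        event = self.events.get()
        if event.source == "speech" and event.name == "coffee_request":
            return "navigate_to_coffee_machine"
        if event.source == "vision" and event.name == "cup_detected":
            return "grab_and_place_cup"
        if event.source == "gateway" and event.name == "brew_done":
            return "fetch_and_serve_coffee"
        return "ask_for_clarification"


if __name__ == "__main__":
    dm = DialogueManager()
    dm.post(Event("speech", "coffee_request", {"drink": "espresso"}))
    print(dm.next_action())  # -> navigate_to_coffee_machine

In a design like this, the components stay decoupled: the speech recogniser, vision module and smart coffee machine only need to agree on a shared event vocabulary, which matches the integrating role the abstract assigns to the multimodal dialogue system.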

Keywords

Social robots · Human-robot interaction · Human perception and attitudes

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Dilip Kumar Limbu (1)
  • Yeow Kee Tan (1)
  • Chern Yuen Wong (1)
  • Ridong Jiang (1)
  • Hengxin Wu (1)
  • Liyuan Li (1)
  • Eng Hoe Kah (1)
  • Xinguo Yu (1)
  • Dong Li (1)
  • Haizhou Li (1)
  1. Institute for Infocomm Research, Singapore