Recognition of Household Objects by Service Robots Through Interactive and Autonomous Methods
Service robots must be able to recognize and identify objects against complex backgrounds. Since no single recognition method works in every situation, several methods need to be combined. Even so, there are cases in which autonomous recognition methods fail. For those cases we propose several types of interactive recognition methods, each of which takes over when the autonomous methods fail in a different situation. We propose four types of interactive methods so that the robot can determine the current situation and initiate the appropriate interaction with the user. Moreover, we propose a grammar and sentence patterns for the instructions given by the user. We also propose an interactive learning process through which the robot can learn or improve an object model from its failures.
Keywords: Object Recognition · Service Robot · Gabor Feature · Require Object · Class Recognition
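The fallback strategy described above, in which the robot tries its autonomous recognizers first and initiates an interaction with the user only when they fail, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `Detection` result type, the confidence threshold, and the `ask_user` callback are all assumptions introduced for the example.

```python
from typing import Callable, List, Optional, Tuple

# Hypothetical result type: (object label, confidence in [0, 1]).
Detection = Optional[Tuple[str, float]]


def recognize(image,
              autonomous_methods: List[Callable[[object], Detection]],
              ask_user: Callable[[], Detection],
              threshold: float = 0.6) -> Detection:
    """Try each autonomous recognizer in turn; if none returns a
    sufficiently confident result, fall back to asking the user."""
    best: Detection = None
    for method in autonomous_methods:
        result = method(image)
        if result is None:
            continue  # this method failed outright
        label, conf = result
        if conf >= threshold:
            return result  # confident autonomous recognition
        if best is None or conf > best[1]:
            best = result  # remember the best low-confidence guess
    # All autonomous methods failed or were unsure:
    # initiate the appropriate interaction with the user.
    return ask_user()
```

In use, each autonomous method would wrap one recognizer (e.g. a Gabor-feature-based classifier), and `ask_user` would start the type of dialogue appropriate to how the autonomous stage failed, so the user's answer can also feed the interactive learning process.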