iRov: A Robot Platform for Active Vision Research and as an Education Tool

Abstract

This paper introduces an autonomous, camera-equipped robot platform for active vision research and education. Recent progress in electronics and computing power, in control and agent technology, and in computer vision and machine learning has made it feasible to build an autonomous robot platform capable of solving high-level deliberative tasks in natural environments. We combined iPhone 4 technology with Lego NXT to build a mobile robot called iRov. iRov is a desktop-size robot that performs image processing onboard using the A4 chip, the System-on-a-Chip (SoC) in the iPhone 4. With the CPU and GPU working in parallel, we demonstrate real-time image filters and 3D object recognition. On this platform, processing was 10 times faster than with the CPU alone.
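The paper itself does not include source code. As a rough illustration of the kind of per-pixel filter that such a platform offloads to the GPU shader pipeline, the sketch below implements Sobel edge detection, a common real-time filter, in NumPy; the function name, kernel choice, and padding strategy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sobel_edges(image):
    """Apply a 3x3 Sobel edge filter to a grayscale image (2-D float array).

    Each output pixel depends only on its 3x3 neighborhood, which is why
    this class of filter maps naturally onto a GPU fragment shader.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                 # vertical-gradient kernel
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")    # replicate border pixels
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Accumulate the correlation one kernel tap at a time (vectorized
    # over all pixels, mimicking the data-parallel per-pixel GPU model).
    for dy in range(3):
        for dx in range(3):
            patch = padded[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * patch
            gy += ky[dy, dx] * patch
    return np.hypot(gx, gy)                   # gradient magnitude per pixel
```

On the actual device, the same neighborhood computation would run as an OpenGL ES fragment shader, one invocation per pixel, which is the data-parallel structure behind the reported speedup over a CPU-only loop.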

Keywords

Mobile Robot · Education Tool · Active Vision · Robot Platform · Robot Vision

Copyright information

© Springer-Verlag GmbH Berlin Heidelberg 2012

Authors and Affiliations

  1. Graduate School of Engineering, Department of Human and Artificial Intelligence System, University of Fukui, Fukui, Japan
