Unsupervised Learning of Sensory Primitives from Optical Flow Fields

  • Oswald Berthold
  • Verena V. Hafner
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8575)

Abstract

Adaptive behaviour of animats largely depends on the processing of their sensory information. In this paper, we examine the estimation of robot egomotion from visual input by unsupervised online learning. The input is a sparse optical flow field constructed from discrete motion detectors. The global flow field properties depend on the robot motion, the spatial distribution of motion detectors with respect to the robot body and the visual environment. We show how online linear Principal Component Analysis can be applied to this problem to enable a robot to continuously adapt to a changing environment.
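The abstract describes applying online linear PCA to a stacked vector of local flow measurements. As a rough illustration of how such an online update can work, the sketch below uses Sanger's Generalized Hebbian Algorithm, one standard neural rule for online linear PCA. The choice of GHA, the detector count, and the learning rates are assumptions made for illustration; they are not the authors' implementation.

    import numpy as np

    class OnlinePCA:
        """Online linear PCA via Sanger's Generalized Hebbian Algorithm (GHA).

        A minimal sketch: each row of W converges towards one of the
        leading principal components of the input stream.
        """
        def __init__(self, n_inputs, n_components, eta=1e-3):
            rng = np.random.default_rng(0)
            self.W = rng.normal(scale=0.01, size=(n_components, n_inputs))
            self.eta = eta
            self.mean = np.zeros(n_inputs)  # running mean for centring

        def update(self, x):
            # Running-mean centring so the components capture variance
            # around the mean flow, not the mean flow itself.
            self.mean += 0.01 * (x - self.mean)
            x = x - self.mean
            y = self.W @ x  # project onto current component estimates
            # Sanger's rule: Hebbian term minus deflation by earlier rows.
            self.W += self.eta * (np.outer(y, x)
                                  - np.tril(np.outer(y, y)) @ self.W)
            return y  # low-dimensional code related to egomotion

    # Usage: per frame, stack the 2-D responses of k motion detectors
    # into one flow vector and feed it to the learner (k is hypothetical).
    k = 32
    pca = OnlinePCA(n_inputs=2 * k, n_components=3)
    flow = np.zeros(2 * k)  # placeholder flow-field sample
    code = pca.update(flow)

Because the update is incremental, the learned components can track changes in the visual environment or in the detector layout, which matches the continuous-adaptation setting the abstract describes.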

Keywords

adaptive behaviour · source separation · feature learning · neural network · optical flow · primitives · redundancy · representation learning · sensor array · unsupervised · vision

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Oswald Berthold (1)
  • Verena V. Hafner (1)
  1. Cognitive Robotics Group, Dept. of Computer Science, Humboldt-Universität zu Berlin, Berlin, Germany
