
Dynamic Environments Localization via Dimensions Reduction of Deep Learning Features

  • Hui Zhang
  • Xiangwei Wang
  • Xiaoguo Du
  • Ming Liu
  • Qijun Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10528)

Abstract

Localizing a robot autonomously, quickly, and accurately in dynamic environments is a primary problem for reliable robot navigation. Monocular visual localization combined with deep learning has achieved impressive results. However, the features extracted by deep networks are high-dimensional and the matching algorithm is complex; reducing dimensionality while preserving localization accuracy remains a key difficulty. This paper presents a novel approach to robot localization trained in large-scale dynamic environments. We extract features from AlexNet and reduce their dimensionality with incremental PCA (IPCA); we further reduce ambiguities by applying a kernel method, normalization, and morphological processing to the matching matrix. Finally, we detect the best matching sequence online in dynamic environments across seasons. Our localization algorithm locates robots quickly and with high accuracy.
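For a concrete picture of the pipeline described above, the following Python sketch strings the stages together: AlexNet feature extraction, dimensionality reduction with incremental PCA, a normalized and morphologically filtered matching matrix, and a simple online sequence search. This is a minimal sketch, not the authors' implementation; the library choices (torchvision, scikit-learn, SciPy), the chosen layer, the component count, the erosion kernel, and the sequence length are all illustrative assumptions.

    # A minimal sketch, not the authors' implementation: AlexNet features ->
    # IPCA dimensionality reduction -> matching matrix -> online sequence search.
    # Library choices and all parameter values below are illustrative assumptions.
    import numpy as np
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.decomposition import IncrementalPCA
    from scipy.ndimage import grey_erosion

    # 1. Holistic descriptor from AlexNet's convolutional stack.
    alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def extract_feature(pil_image):
        """Flattened conv-layer activation used as the image descriptor."""
        with torch.no_grad():
            x = preprocess(pil_image).unsqueeze(0)
            x = alexnet.features(x)              # (1, 256, 6, 6) feature maps
        return x.flatten().numpy()

    # 2. Incremental PCA compresses the descriptors batch by batch.
    def reduce_dims(features, n_components=500, batch_size=1000):
        # features: (n_images, n_dims); n_components must not exceed batch_size
        ipca = IncrementalPCA(n_components=n_components, batch_size=batch_size)
        return ipca.fit_transform(features)

    # 3. Matching matrix between a reference run and a query run: cosine
    #    similarity, min-max normalization, then a morphological erosion to
    #    suppress isolated ambiguous peaks. (The paper also mentions a kernel
    #    method, omitted here for brevity.)
    def matching_matrix(ref, qry):
        ref = ref / np.linalg.norm(ref, axis=1, keepdims=True)
        qry = qry / np.linalg.norm(qry, axis=1, keepdims=True)
        M = ref @ qry.T
        M = (M - M.min()) / (M.max() - M.min() + 1e-9)
        return grey_erosion(M, size=(3, 3))

    # 4. Online sequence search: score constant-velocity (diagonal) candidate
    #    sequences ending at each query frame and keep the best reference index.
    def best_sequence_match(M, seq_len=10):
        n_ref, n_qry = M.shape
        matches = np.full(n_qry, -1)
        for q in range(seq_len, n_qry):
            scores = [M[r - seq_len:r, q - seq_len:q].diagonal().sum()
                      for r in range(seq_len, n_ref + 1)]
            matches[q] = int(np.argmax(scores)) + seq_len
        return matches

The diagonal-sum score is a SeqSLAM-style stand-in for the paper's sequence matching criterion; the assumed erosion kernel and sequence length would need tuning to the cross-season datasets actually used.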

Notes

Acknowledgment

This research is a collaboration between the RAM-LAB of HKUST and the RAI-LAB of Tongji University. Our work is supported by the National Natural Science Foundation of China (61573260) and the Natural Science Foundation of Shanghai (16JC1401200); by the Shenzhen Science, Technology and Innovation Commission (SZSTI) (JCYJ20160428154842603 and JCYJ20160401100022706); and partially by the HKUST Project (IGN16EG12).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Hui Zhang (1)
  • Xiangwei Wang (1)
  • Xiaoguo Du (1)
  • Ming Liu (2)
  • Qijun Chen (1)
  1. RAI-LAB, Tongji University, Shanghai, China
  2. RAM-LAB, Robotics Institute, HKUST, Hong Kong, China
