
Object Detection on Images in Docking Tasks Using Deep Neural Networks

  • Ivan Fomin (email author)
  • Dmitrii Gromoshinskii
  • Aleksandr Bakhshiev
Conference paper
Part of the Studies in Computational Intelligence book series (SCI, volume 736)

Abstract

In the process of docking of automated vehicles, there is the problem of determining their relative position. This problem can be effectively solved by algorithms that compute the relative position from the television image formed by a camera installed on one vehicle and observing the other vehicle or the docking port. The vehicle's position and orientation are calculated from the positions of visual landmarks, together with knowledge of the 3D configuration of the observed object and the relative positions of those landmarks. The visual landmark detection algorithm is the crucial part of such a solution. This article studies the applicability of an object detection system based on a deep convolutional neural network to the task of visual landmark detection. As an example, detection of visual landmarks in space docking images is discussed. A neural-network-based detection system trained on images of the International Space Station, acquired during the docking of cargo spacecraft, is presented.
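The abstract describes a two-stage pipeline: visual landmarks are first detected in the camera image with a deep convolutional detector (Faster R-CNN), and the relative position and orientation are then computed from the detected landmark positions and the known 3D configuration of the observed object. The sketch below only illustrates this idea and is not the authors' implementation: it assumes a torchvision Faster R-CNN model as a stand-in for the detector actually used, OpenCV's solvePnP for the pose computation, and hypothetical landmark coordinates, class labels, and camera intrinsics.

```python
# Illustrative sketch only (not the paper's implementation): detect visual
# landmarks with a Faster R-CNN detector, then recover the relative pose of
# the observed object from the 2D detections and a known 3D landmark model.
import numpy as np
import cv2
import torch
import torchvision

# Hypothetical 3D coordinates (metres) of the visual landmarks in the
# observed object's coordinate frame; class id -> 3D point.
LANDMARK_MODEL_3D = {
    1: np.array([0.0, 0.0, 0.0]),    # e.g. docking target centre
    2: np.array([1.2, 0.0, 0.1]),    # e.g. antenna base
    3: np.array([0.0, 1.5, 0.2]),    # e.g. solar panel hinge
    4: np.array([-1.2, 0.0, 0.1]),
}

# Hypothetical pinhole intrinsics of the onboard TV camera.
CAMERA_MATRIX = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)  # assume an undistorted image

# Stand-in detector: a pretrained torchvision Faster R-CNN; in practice a
# network fine-tuned on docking imagery with landmark classes would be used.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def detect_landmarks(image_bgr, score_threshold=0.7):
    """Return {class_id: (u, v)} image coordinates of detected landmarks."""
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = detector([tensor])[0]
    landmarks = {}
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score < score_threshold:
            continue
        x1, y1, x2, y2 = box.tolist()
        # Use the bounding-box centre as the landmark's image position.
        landmarks[int(label)] = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    return landmarks

def estimate_relative_pose(landmarks_2d):
    """Solve the PnP problem for landmarks that are both modelled and detected."""
    ids = [i for i in LANDMARK_MODEL_3D if i in landmarks_2d]
    if len(ids) < 4:
        return None  # not enough 2D-3D correspondences for a reliable pose
    object_pts = np.array([LANDMARK_MODEL_3D[i] for i in ids], dtype=np.float64)
    image_pts = np.array([landmarks_2d[i] for i in ids], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts,
                                  CAMERA_MATRIX, DIST_COEFFS)
    return (rvec, tvec) if ok else None

if __name__ == "__main__":
    frame = cv2.imread("docking_frame.png")  # hypothetical input frame
    pose = estimate_relative_pose(detect_landmarks(frame))
    if pose is not None:
        rvec, tvec = pose
        print("rotation (Rodrigues):", rvec.ravel())
        print("translation (m):", tvec.ravel())
```

In this sketch the detector and the pose solver are decoupled, so either stage can be replaced: the detector by any network that outputs labelled landmark positions, and solvePnP by a tracking or filtering scheme that uses the same 2D-3D correspondences.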

Keywords

Object detection · Deep neural networks · Convolutional neural networks · Faster R-CNN · Machine learning · Computer vision


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Ivan Fomin (email author) 1
  • Dmitrii Gromoshinskii 1
  • Aleksandr Bakhshiev 1

  1. The Russian State Scientific Center for Robotics and Technical Cybernetics (RTC), Saint-Petersburg, Russia
