Locator slope calculation via deep representations based on monocular vision

  • Yang Yang
  • Wensheng Zhang
  • Zewen He
  • Dongjie Chen
Original Article


The locator is the key component controlling the track of the contact wire in the overhead catenary system (OCS) of a high-speed railway. Once the locator slope goes out of bounds, it poses a serious hazard to the safety of high-speed trains and a threat to human life and property. In this work, a novel end-to-end locator slope calculation framework is presented for real-time locator slope inspection in high-speed railway systems. The pipeline is composed of two stages: locator contour detection and slope calculation. To precisely detect locator contours in OCS images captured in extreme high-speed environments, a novel detection mechanism combining rough detection and fine fitting is proposed. For fast slope calculation with only one camera, the monocular vision model is modified by two novel assumptions to compute the locator's space coordinates. Rigorous experiments across a number of standard locator slope calculation benchmarks show a large improvement in precision and speed over all previous methods. Finally, the effectiveness of the proposed framework is demonstrated through a real-world application in a high-speed rail OCS inspection system.
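The abstract describes recovering the locator's space coordinates from a single camera and then computing its slope. As a minimal sketch of that idea, assuming a pinhole camera model with known intrinsics and a known working depth for the locator plane (hypothetical stand-ins for the paper's two monocular-vision assumptions, which the abstract does not specify), two detected contour endpoints can be back-projected and the slope angle taken against the horizontal:

```python
import math

def back_project(u, v, fx, fy, cx, cy, Z):
    """Back-project pixel (u, v) to camera coordinates under a pinhole
    model, assuming the depth Z is known (a monocular-vision assumption)."""
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return (X, Y, Z)

def locator_slope_deg(p1, p2):
    """Angle (degrees) between the segment p1->p2 and the horizontal
    plane; here Y is the vertical axis of the camera frame."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    dz = p2[2] - p1[2]
    horizontal = math.hypot(dx, dz)
    return math.degrees(math.atan2(dy, horizontal))

# Hypothetical intrinsics and two detected locator endpoints (pixels).
fx = fy = 1000.0
cx, cy = 640.0, 512.0
a = back_project(600.0, 500.0, fx, fy, cx, cy, Z=2.0)
b = back_project(700.0, 470.0, fx, fy, cx, cy, Z=2.0)
print(round(locator_slope_deg(a, b), 1))
```

The sign convention follows image coordinates (y grows downward), so an endpoint higher in the image yields a negative angle; the actual paper's coordinate conventions and depth-recovery assumptions may differ.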


Keywords: Slope calculation · Locator detection · Convolutional neural networks · Monocular vision



Acknowledgements

The authors would like to thank all the research scholars, M.Tech. students, and technical staff for their comments and suggestions. This research was supported in part by the National Natural Science Foundation of China (Nos. 61602484, 61432008, 61472423, and U1636220).

Compliance with ethical standards

Conflict of interest

The authors declare that there is no conflict of interest regarding this manuscript.



Copyright information

© The Natural Computing Applications Forum 2017

Authors and Affiliations

  1. Institute of Automation, Chinese Academy of Sciences, Beijing, China
