Fast-PLDN: fast power line detection network

Abstract

Obstacle detection, especially real-time power line detection, plays a vital role in the low-altitude flight safety of aircraft. Most previous power line detection methods fail on curved power lines because of their small size and inconspicuous visual features in complex scenes. In this paper, we propose Fast-PLDN, a novel real-time semantic segmentation model for pixel-wise detection of both straight and curved power lines. We build the network from a low-high pass block and an edge attention fusion module, which effectively extract spatial and semantic information and improve detection quality along power line boundaries. Furthermore, because publicly available pixel-wise annotated power line datasets are scarce, we construct a new pixel-wise annotated dataset named the AIR Power Line dataset. Our model runs at 189.6 frames per second (fps) with 71.3% mean intersection over union (mIoU) on the AIR Power Line dataset, outperforming most previous power line detection methods and existing real-time semantic segmentation models.
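The mIoU figure reported above follows the standard definition: per-class intersection over union of predicted and ground-truth masks, averaged over classes (here, background and power line). A minimal sketch of that computation, not taken from the paper's code and with illustrative function names:

```python
import numpy as np

def mean_iou(pred, gt, num_classes=2):
    """Mean intersection over union between a predicted and a
    ground-truth label map, averaged over the classes present."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

For pixel-wise power line segmentation the class imbalance is severe (thin lines against a large background), which is why mIoU rather than raw pixel accuracy is the meaningful benchmark number.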

Acknowledgements

This work is supported by the National Key Research and Development Program of China (2017YFF0107704).

Author information

Corresponding author

Correspondence to Chenghua Xu.

Cite this article

Zhu, K., Xu, C., Wei, Y. et al. Fast-PLDN: fast power line detection network. J Real-Time Image Proc (2021). https://doi.org/10.1007/s11554-021-01154-3

Keywords

  • Power line detection
  • Low-high pass block
  • Edge attention fusion
  • Segmentation