
Deep Hough-Transform Line Priors

  • Conference paper
  • In: Computer Vision – ECCV 2020 (ECCV 2020)
  • Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12367)
  • Included in the conference series: ECCV, European Conference on Computer Vision

Abstract

Classical work on line segment detection is knowledge-based: it relies on carefully designed geometric priors built from image gradients, pixel groupings, or Hough transform variants. Current deep learning methods instead do away with such prior knowledge and replace it by training deep networks on large, manually annotated datasets. Here, we reduce the dependency on labeled data by building on the classic knowledge-based priors while using deep networks to learn features. We add line priors through a trainable Hough transform block inside a deep network: the Hough transform provides prior knowledge about global line parameterizations, while the convolutional layers learn the local gradient-like line features. On the Wireframe (ShanghaiTech) and York Urban datasets we show that adding prior knowledge improves data efficiency, as line priors no longer need to be learned from data.
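
The core idea above, accumulating convolutional feature activations into a global (theta, rho) line-parameter space, can be illustrated with a fixed, non-trainable Hough accumulation over a single feature map. The sketch below is a minimal NumPy illustration under assumed bin counts and a center-origin parameterization rho = x*cos(theta) + y*sin(theta); the names hough_votes and hough_block are hypothetical, and this is not the authors' trainable block (their implementation is linked in the Notes below).

    # A minimal, fixed (non-trainable) Hough accumulation over a feature map.
    # Assumptions (not from the paper): center-origin parameterization
    # rho = x*cos(theta) + y*sin(theta), and illustrative bin counts.
    import numpy as np

    def hough_votes(h, w, n_theta=60, n_rho=60):
        """Precompute a binary voting tensor V[y, x, t, r] so that pixel (y, x)
        votes into bin (theta_t, rho_r)."""
        ys, xs = np.mgrid[0:h, 0:w]
        ys = ys - (h - 1) / 2.0          # move the origin to the image center
        xs = xs - (w - 1) / 2.0
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        rho_max = np.hypot(h, w) / 2.0   # largest possible |rho|
        rhos = xs[..., None] * np.cos(thetas) + ys[..., None] * np.sin(thetas)
        r_idx = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        votes = np.zeros((h, w, n_theta, n_rho), dtype=np.float32)
        yy, xx, tt = np.mgrid[0:h, 0:w, 0:n_theta]
        votes[yy, xx, tt, r_idx] = 1.0
        return votes

    def hough_block(feature_map, votes):
        """Accumulate a (h, w) activation map into an (n_theta, n_rho) Hough map."""
        return np.tensordot(feature_map, votes, axes=([0, 1], [0, 1]))

    # Toy usage: a vertical line of activations concentrates into one bin.
    fm = np.zeros((32, 32), dtype=np.float32)
    fm[:, 16] = 1.0
    acc = hough_block(fm, hough_votes(32, 32))
    print("peak (theta, rho) bin:", np.unravel_index(acc.argmax(), acc.shape))

In the paper, a trainable variant of such a block sits inside the network, so convolutional feature maps vote into line-parameter space and the line prior does not have to be learned from labeled data; the toy usage above simply shows a straight line of activations concentrating into a single (theta, rho) bin.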


Notes

  1. https://github.com/yanconglin/Deep-Hough-Transform-Line-Priors.

  2. https://github.com/svip-lab/PPGNet.


Author information


Corresponding author

Correspondence to Silvia L. Pintea.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 8706 KB)


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Lin, Y., Pintea, S.L., van Gemert, J.C. (2020). Deep Hough-Transform Line Priors. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12367. Springer, Cham. https://doi.org/10.1007/978-3-030-58542-6_20


  • DOI: https://doi.org/10.1007/978-3-030-58542-6_20


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58541-9

  • Online ISBN: 978-3-030-58542-6

  • eBook Packages: Computer Science, Computer Science (R0)
