CurveLane-NAS: Unifying Lane-Sensitive Architecture Search and Adaptive Point Blending

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12360)

Abstract

We address the curve lane detection problem, which poses more realistic challenges than conventional lane detection and better reflects the needs of modern assisted/autonomous driving systems. Current hand-designed lane detection methods are not robust enough to capture curve lanes, especially their remote parts, because they model neither long-range contextual information nor the detailed curve trajectory. In this paper, we propose a novel lane-sensitive architecture search framework named CurveLane-NAS to automatically capture both long-range coherent and accurate short-range curve information. It consists of three search modules: (a) a feature fusion search module that finds a better fusion of local and global context for multi-level hierarchy features; (b) an elastic backbone search module that explores an efficient feature extractor with good semantics and low latency; (c) an adaptive point blending module that searches a multi-level post-processing refinement strategy to combine multi-scale head predictions. Furthermore, we release a more challenging benchmark named CurveLanes to address the most difficult curve lanes. It consists of 150K images with 680K labels (the dataset can be downloaded at http://www.noahlab.com.hk/opensource/vega/#curvelanes). Experiments on the new CurveLanes show that SOTA lane detection methods suffer a substantial performance drop, while our model still reaches an 80+% F1-score. Extensive experiments on traditional lane benchmarks such as CULane also demonstrate the superiority of CurveLane-NAS, e.g., achieving a new SOTA 74.8% F1-score on CULane.
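The F1-scores reported above follow the usual lane-detection convention: a predicted lane counts as a true positive when its IoU with an unmatched ground-truth lane exceeds a threshold (0.5 on CULane-style benchmarks). A minimal sketch of that metric, assuming lanes have already been rasterized to binary masks and using a simple greedy matcher (the benchmark's official tool may match differently):

```python
import numpy as np

def lane_iou(mask_a, mask_b):
    """IoU between two binary lane masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def f1_score(pred_masks, gt_masks, iou_thresh=0.5):
    """F1 over lanes: a prediction is a TP if it matches an
    unused ground-truth lane with IoU >= iou_thresh."""
    matched_gt = set()
    tp = 0
    for p in pred_masks:
        # Greedily pick the best still-unmatched ground-truth lane.
        best_iou, best_j = 0.0, -1
        for j, g in enumerate(gt_masks):
            if j in matched_gt:
                continue
            iou = lane_iou(p, g)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= iou_thresh:
            tp += 1
            matched_gt.add(best_j)
    fp = len(pred_masks) - tp
    fn = len(gt_masks) - tp
    precision = tp / (tp + fp) if pred_masks else 0.0
    recall = tp / (tp + fn) if gt_masks else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

For example, a single correct prediction against two ground-truth lanes yields precision 1.0 and recall 0.5, hence F1 = 2/3, which is why missing the remote parts of long curve lanes (counted as separate misses or low-IoU matches) depresses the score so sharply.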

Keywords

Lane detection · Autonomous driving · Benchmark dataset · Neural architecture search · Curve lane

Supplementary material

Supplementary material 1: 504470_1_En_41_MOESM1_ESM.pdf (18.4 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Huawei Noah’s Ark Lab, Beijing, China
  2. Sun Yat-sen University, Guangzhou, China