
YOLOP: You Only Look Once for Panoptic Driving Perception

  • Research Article
  • Open Access
  • Published: 07 November 2022
  • Volume 19, pages 550–562 (2022)
  • Dong Wu (ORCID: orcid.org/0000-0002-2306-5769),
  • Man-Wen Liao,
  • Wei-Tian Zhang,
  • Xing-Gang Wang (ORCID: orcid.org/0000-0001-6732-7823),
  • Xiang Bai,
  • Wen-Qing Cheng &
  • Wen-Yu Liu

A Correction to this article was published on 27 May 2023


Abstract

A panoptic driving perception system is an essential part of autonomous driving. A high-precision, real-time perception system can assist the vehicle in making reasonable driving decisions. We present a panoptic driving perception network, YOLOP (you only look once for panoptic driving perception), that performs traffic object detection, drivable area segmentation, and lane detection simultaneously. It is composed of one encoder for feature extraction and three decoders that handle the specific tasks. Our model performs extremely well on the challenging BDD100K dataset, achieving state of the art on all three tasks in terms of accuracy and speed. In addition, we verify the effectiveness of our multi-task learning model for joint training via ablation studies. To the best of our knowledge, this is the first work that processes these three visual perception tasks simultaneously in real time on an embedded device (Jetson TX2, 23 FPS) while maintaining excellent accuracy. To facilitate further research, the source code and pre-trained models are released at https://github.com/hustvl/YOLOP.
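The shared-encoder, three-decoder design described above can be sketched structurally as follows. This is a hypothetical, dependency-free stand-in (the function names, toy "feature", and return values are all illustrative), not the authors' released PyTorch implementation; it only shows the control flow in which one backbone pass is shared by all three task heads:

```python
# Structural sketch of a YOLOP-style multi-task network: one shared
# encoder feeds three task-specific decoders, so the expensive backbone
# runs once per image. Hypothetical stand-in code, not the released model.

def encoder(image):
    """Stand-in backbone: turns an image into a shared 'feature'."""
    return {"feature": sum(image)}  # toy feature extraction

def detect_head(feat):
    return {"task": "traffic object detection", "feature": feat["feature"]}

def drivable_head(feat):
    return {"task": "drivable area segmentation", "feature": feat["feature"]}

def lane_head(feat):
    return {"task": "lane detection", "feature": feat["feature"]}

def yolop_forward(image):
    feat = encoder(image)          # backbone runs once...
    return {                       # ...and all three decoders share its output
        "detection": detect_head(feat),
        "drivable": drivable_head(feat),
        "lane": lane_head(feat),
    }

outputs = yolop_forward([1, 2, 3])
print(sorted(outputs))  # → ['detection', 'drivable', 'lane']
```

Sharing the encoder is what makes joint training and real-time inference on embedded hardware plausible: the three heads add relatively little cost on top of a single feature-extraction pass.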


Change history

  • 27 May 2023

    A Correction to this paper has been published: https://doi.org/10.1007/s11633-023-1452-6


Acknowledgements

This work was supported by National Natural Science Foundation of China (Nos. 61876212 and 1733007), Zhejiang Laboratory, China (No. 2019NB0AB02), and Hubei Province College Students Innovation and Entrepreneurship Training Program, China (No. S202010487058).

Author information

Authors and Affiliations

  1. Hubei Key Laboratory of Smart Internet Technology, School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, 430074, China

    Dong Wu & Wen-Qing Cheng

  2. School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, 430074, China

    Man-Wen Liao, Wei-Tian Zhang, Xing-Gang Wang & Wen-Yu Liu

  3. School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China

    Xiang Bai


Corresponding author

Correspondence to Xing-Gang Wang.

Additional information

Dong Wu is a senior undergraduate student in electronic information engineering at School of Electronic Information and Communications, Huazhong University of Science and Technology (HUST), China.

His research interests include computer vision, machine learning and autonomous driving.

Man-Wen Liao is a senior undergraduate student in electronic information engineering at School of Electronic Information and Communications, Huazhong University of Science and Technology, China.

His research interests include computer vision, machine learning, robotics and autonomous driving.

Wei-Tian Zhang is a senior undergraduate student in electronic information engineering at Huazhong University of Science and Technology, China.

Her research interests include computer vision and machine learning.

Xing-Gang Wang received the B. Sc. and Ph. D. degrees in electronics and information engineering from Huazhong University of Science and Technology, China in 2009 and 2014, respectively. He is currently an associate professor with School of Electronic Information and Communications, HUST, China. He serves as an associate editor for Pattern Recognition and Image and Vision Computing, and as an editorial board member of Electronics.

His research interests include computer vision and machine learning.

Xiang Bai received the B. Sc., M. Sc., and Ph. D. degrees in electronics and information engineering from Huazhong University of Science and Technology, China in 2003, 2005, and 2009, respectively. He is currently a professor with School of Artificial Intelligence and Automation, HUST, China.

His research interests include object recognition, shape analysis, and scene text recognition.

Wen-Qing Cheng received the B. Sc. degree in telecommunication engineering and the Ph. D. degree in electronics and information engineering from Huazhong University of Science and Technology, Wuhan, China in 1985 and 2005, respectively. She is currently a professor and associate dean with School of Electronic Information and Communications, HUST, China.

Her research interests include information systems and e-learning applications.

Wen-Yu Liu received the B. Sc. degree in computer science from Tsinghua University, China in 1986, and the M. Sc. and Ph. D. degrees, in electronics and information engineering from Huazhong University of Science and Technology, China in 1991 and 2001, respectively. He is now a professor of School of Electronic Information and Communications, HUST, China. His research interests include computer vision, multimedia, and machine learning.

The original version of this article was revised due to a retrospective Open Access order

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Wu, D., Liao, MW., Zhang, WT. et al. YOLOP: You Only Look Once for Panoptic Driving Perception. Mach. Intell. Res. 19, 550–562 (2022). https://doi.org/10.1007/s11633-022-1339-y

Download citation

  • Received: 25 March 2022

  • Accepted: 10 May 2022

  • Published: 07 November 2022

  • Issue Date: December 2022

  • DOI: https://doi.org/10.1007/s11633-022-1339-y


Keywords

  • Driving perception
  • multitask learning
  • traffic object detection
  • drivable area segmentation
  • lane detection
