Video-Based Container Tracking System Using Deep Learning

  • Chapter in: Cyber Physical, Computer and Automation System

Abstract

Automating the loading and unloading of containers at ports is essential for increasing productivity, revenue, efficiency, and safety in logistics transportation, especially in a maritime country such as Indonesia. To achieve this, container positions need to be identified and tracked accurately so that container transfers can proceed precisely and smoothly. In this study, the YOLO deep learning model is used to detect moving containers. The model is trained on container images, and validation and testing are then carried out on other container images. The training results are stored in several checkpoints, which are compared to select the most accurate model. The resulting YOLO model tracks containers well, with a mean average precision (mAP) of 68.63% and a log-average miss rate (LAMR) of 0.31.
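
The chapter itself does not include code; the following is a minimal sketch of how a trained Darknet-style YOLO detector could be run over container-handling video using OpenCV's DNN module. The config, weights, and video file names are hypothetical placeholders, and the use of OpenCV rather than the original Darknet framework is an assumption for illustration.

```python
# Sketch (not the authors' code): detect containers frame by frame with a
# Darknet-style YOLO model loaded through OpenCV's DNN module.
import cv2
import numpy as np

CFG = "yolov3-container.cfg"          # hypothetical trained network config
WEIGHTS = "yolov3-container.weights"  # hypothetical trained weights
CONF_THRESH, NMS_THRESH = 0.5, 0.4

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
out_names = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture("container_yard.mp4")  # hypothetical test video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # YOLO expects a square, normalized blob; 416x416 is a common Darknet input size
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(out_names)

    boxes, scores = [], []
    for out in outputs:
        for det in out:  # det = [cx, cy, bw, bh, objectness, class scores...]
            score = float(det[5:].max() * det[4])
            if score < CONF_THRESH:
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(score)

    # Non-maximum suppression keeps one box per detected container
    keep = cv2.dnn.NMSBoxes(boxes, scores, CONF_THRESH, NMS_THRESH)
    for i in np.array(keep).flatten():
        x, y, bw, bh = boxes[i]
        cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
    cv2.imshow("container tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

The two reported metrics can be computed from the detector's precision-recall and miss-rate/FPPI curves. The sketch below follows the PASCAL VOC all-point-interpolation definition of average precision and the usual log-average miss rate sampled at nine log-spaced FPPI points; the exact evaluation procedure used in the chapter may differ in detail.

```python
# Sketch of the two evaluation metrics reported above, assuming detections have
# already been matched to ground truth (e.g., at IoU >= 0.5) to produce the curves.
import numpy as np

def average_precision(recall, precision):
    """Area under the precision-recall curve (VOC-style, all-point interpolation)."""
    r = np.concatenate(([0.0], np.asarray(recall, float), [1.0]))
    p = np.concatenate(([0.0], np.asarray(precision, float), [0.0]))
    # make precision monotonically non-increasing from right to left
    for i in range(p.size - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def log_average_miss_rate(miss_rate, fppi):
    """Geometric mean of miss rate sampled at 9 FPPI points in [1e-2, 1]."""
    miss_rate, fppi = np.asarray(miss_rate, float), np.asarray(fppi, float)
    samples = []
    for ref in np.logspace(-2.0, 0.0, num=9):
        below = np.where(fppi <= ref)[0]
        # use the miss rate at the largest FPPI not exceeding the reference point
        samples.append(miss_rate[below[-1]] if below.size else 1.0)
    return float(np.exp(np.mean(np.log(np.maximum(samples, 1e-10)))))
```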

Acknowledgements

The first author is grateful to the Ministry of Education and Culture and to Universitas Pembangunan Nasional “Veteran” Jawa Timur, Faculty of Computer Science, Indonesia, for funding this research publication. E. Joelianto and P. Siregar are supported by the Ministry of Research, Technology and Higher Education under the Higher Education Applied Research Grant 2018-2019, Indonesia.

Author information

Corresponding author

Correspondence to Endra Joelianto.

Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Rahmat, B. et al. (2021). Video-Based Container Tracking System Using Deep Learning. In: Joelianto, E., Turnip, A., Widyotriatmo, A. (eds) Cyber Physical, Computer and Automation System. Advances in Intelligent Systems and Computing, vol 1291. Springer, Singapore. https://doi.org/10.1007/978-981-33-4062-6_8
