
Forschung im Ingenieurwesen, Volume 83, Issue 2, pp 163–171

Fahrzeugdetektion mit stationären Kameras zur automatischen Verkehrsüberwachung

  • Malte Oeljeklaus
  • Niklas Stannartz
  • Manuel Schmidt
  • Frank Hoffmann
  • Torsten Bertram
Originalarbeiten/Originals

Summary

Human driving errors are the main cause of road traffic accidents. Automatic traffic monitoring contributes to achieving the vision of accident-free road traffic. Such an infrastructure improves traffic safety directly, particularly in view of the slow penetration of the vehicle fleet by new assistance systems. At motorway exits and at parking and rest areas, the Multi Funktionale Detektions System detects potential wrong-way drivers as they enter the carriageway in the wrong direction. Stationary measuring platforms are typically used for this purpose, and camera sensors offer particularly favourable conditions for them. This contribution considers vehicle detection in camera images for integration into a wrong-way driver warning system. For this application, achieving the fastest possible processing times is crucial. Common methods of camera-based object detection perform a complete scan of the recorded image for this purpose. Owing to the fixed camera position and the known static traffic elements, however, the search space can be restricted considerably. The approach is based on a reduced search strategy that retains the advantageous properties of conventional object detection methods.

Vehicle detection with stationary cameras for automated traffic monitoring

Abstract

Human errors are the main cause of road traffic accidents. Automatic traffic monitoring offers a way to pursue the vision of accident-free road traffic with assistance that does not depend on the protracted penetration of the vehicle fleet by new driver assistance systems. The Multi Functional Detection System, for example, detects potential wrong-way drivers at motorway exits and rest areas as they enter the wrong carriageway. Stationary measuring platforms are typically used for this purpose, and camera sensors offer particularly favourable conditions. This contribution therefore considers vehicle detection in camera images for integration into a wrong-way driver warning system. For this application, fast processing times are decisive. Conventional methods for camera-based object detection perform a complete scan of the recorded image. Due to the fixed camera position and the known static traffic elements, however, this search space can be restricted significantly. The present work therefore designs a method that realizes a reduced search strategy while retaining the advantageous properties of conventional object detection methods.
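The search-space reduction described in the abstract can be illustrated with a minimal sketch; this is not the authors' implementation, and the road mask, window size, stride, and coverage threshold below are illustrative assumptions. With a fixed camera, the static road region can be precomputed once, and a sliding-window detector then needs to evaluate only the candidate windows that overlap that region.

```python
import numpy as np

def sliding_windows(h, w, win=64, stride=32):
    """Top-left corners of all candidate detection windows in an h x w frame."""
    return [(y, x) for y in range(0, h - win + 1, stride)
                   for x in range(0, w - win + 1, stride)]

def masked_windows(road_mask, win=64, stride=32, min_cov=0.5):
    """Reduced search strategy: keep only windows whose overlap with the
    precomputed static road mask is at least min_cov."""
    h, w = road_mask.shape
    kept = []
    for y, x in sliding_windows(h, w, win, stride):
        coverage = road_mask[y:y + win, x:x + win].mean()
        if coverage >= min_cov:
            kept.append((y, x))
    return kept

# Illustrative 480x640 frame in which the road occupies the lower half.
mask = np.zeros((480, 640), dtype=float)
mask[240:, :] = 1.0

full = sliding_windows(480, 640)
reduced = masked_windows(mask)
print(len(full), len(reduced))  # the reduced scan evaluates far fewer windows
```

In practice the classifier applied inside each window would be a trained detector; the point of the sketch is only that the per-frame cost scales with the number of candidate windows, which the static-scene mask cuts down substantially.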


Copyright information

© Springer-Verlag GmbH Deutschland, ein Teil von Springer Nature 2019

Authors and Affiliations

  1. TU Dortmund, Dortmund, Germany
