WildDash - Creating Hazard-Aware Benchmarks

  • Oliver Zendel
  • Katrin Honauer
  • Markus Murschitz
  • Daniel Steininger
  • Gustavo Fernández Domínguez
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11210)

Abstract

Test datasets should contain many different challenging aspects so that the robustness and real-world applicability of algorithms can be assessed. In this work, we present a new test dataset for semantic and instance segmentation for the automotive domain. We have conducted a thorough risk analysis to identify situations and aspects that can reduce the output performance for these tasks. Based on this analysis we have designed our new dataset. Meta-information is supplied to mark which individual visual hazards are present in each test case. Furthermore, a new benchmark evaluation method is presented that uses the meta-information to calculate the robustness of a given algorithm with respect to the individual hazards. We show how this new approach allows for a more expressive characterization of algorithm robustness by comparing three baseline algorithms.
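
The hazard-aware evaluation idea can be pictured with a small sketch. The snippet below is a minimal illustration, not the paper's actual metric: it assumes each test frame carries a set of hazard tags from the meta-information plus a per-frame IoU score, and it reports, for each hazard, mean performance on frames exhibiting that hazard relative to hazard-free frames. All names here (hazard_robustness, the iou and hazards fields) are hypothetical.

```python
# Hypothetical sketch of hazard-aware scoring. The real WildDash metric is
# defined in the paper; the field names and the ratio-based score below are
# illustrative assumptions only.
from collections import defaultdict
from statistics import mean

def hazard_robustness(frames):
    """frames: list of dicts such as
    {"iou": 0.71, "hazards": {"blur", "rain"}}; an empty set marks a
    hazard-free frame."""
    clean = [f["iou"] for f in frames if not f["hazards"]]
    if not clean:
        raise ValueError("need hazard-free frames as a baseline")
    baseline = mean(clean)

    per_hazard = defaultdict(list)
    for f in frames:
        for h in f["hazards"]:
            per_hazard[h].append(f["iou"])

    # Robustness w.r.t. a hazard: mean score on frames tagged with that
    # hazard, relative to the hazard-free baseline (1.0 = no degradation).
    return {h: mean(scores) / baseline for h, scores in per_hazard.items()}

if __name__ == "__main__":
    frames = [
        {"iou": 0.80, "hazards": set()},
        {"iou": 0.78, "hazards": set()},
        {"iou": 0.55, "hazards": {"blur"}},
        {"iou": 0.40, "hazards": {"rain", "blur"}},
    ]
    print(hazard_robustness(frames))  # e.g. {'blur': 0.60, 'rain': 0.51}
```

Applied to several algorithms, such per-hazard scores show which visual hazards degrade each method most, which is the kind of expressive robustness characterization the benchmark aims at for its three baselines.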

Keywords

Test data · Autonomous driving · Validation · Testing · Safety analysis · Semantic segmentation · Instance segmentation

Notes

Acknowledgement

The research was supported by ECSEL JU under the H2020 project grant agreement No. 737469, AutoDrive ("Advancing fail-aware, fail-safe, and fail-operational electronic components, systems, and architectures for fully automated driving to make future mobility safer, affordable, and end-user acceptable"). Special thanks go to all authors who allowed us to use their video material, and to Hassan Abu Alhaija from HCI for supplying the instance segmentation example algorithms.

Supplementary material

Supplementary material 1: 474211_1_En_25_MOESM1_ESM.pdf (PDF, 3.6 MB)


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Oliver Zendel
  • Katrin Honauer
  • Markus Murschitz
  • Daniel Steininger
  • Gustavo Fernández Domínguez

  1. AIT, Austrian Institute of Technology, Vienna, Austria
