On Performance Analysis of Optical Flow Algorithms

  • Daniel Kondermann
  • Steffen Abraham
  • Gabriel Brostow
  • Wolfgang Förstner
  • Stefan Gehrig
  • Atsushi Imiya
  • Bernd Jähne
  • Felix Klose
  • Marcus Magnor
  • Helmut Mayer
  • Rudolf Mester
  • Tomas Pajdla
  • Ralf Reulke
  • Henning Zimmer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7474)

Abstract

Literally thousands of articles on optical flow algorithms have been published over the past thirty years. Only a small subset of the proposed algorithms has been analyzed with respect to their performance. These evaluations were based on black-box tests, mainly yielding information on the average accuracy on test sequences with ground truth. No theoretically sound justification exists as to why this approach meaningfully and/or exhaustively describes the properties of optical flow algorithms. In practice, design choices are often made based on unmotivated criteria or by trial and error. This article is a position paper questioning current methods in performance analysis. Without presenting empirical results, we discuss more rigorous and theoretically sound approaches that could enable scientists and engineers alike to make well-motivated design choices for a given motion estimation task.
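
To make the black-box accuracy figures mentioned above concrete: ground-truth benchmarks in this line of work typically summarize performance as the average endpoint error and the average angular error between an estimated and a reference flow field. The sketch below is a minimal illustration of these two standard measures, not code from the paper; it assumes dense flow fields stored as NumPy arrays of shape (H, W, 2), and the function names are our own.

```python
import numpy as np

def average_endpoint_error(flow_est, flow_gt):
    """Mean Euclidean distance between estimated and ground-truth flow vectors.

    flow_est, flow_gt: arrays of shape (H, W, 2) holding the (u, v) components.
    """
    diff = flow_est - flow_gt
    return np.mean(np.sqrt(np.sum(diff ** 2, axis=-1)))

def average_angular_error(flow_est, flow_gt):
    """Mean angle (in degrees) between the space-time direction vectors (u, v, 1)."""
    u_e, v_e = flow_est[..., 0], flow_est[..., 1]
    u_g, v_g = flow_gt[..., 0], flow_gt[..., 1]
    num = u_e * u_g + v_e * v_g + 1.0
    den = np.sqrt(u_e ** 2 + v_e ** 2 + 1.0) * np.sqrt(u_g ** 2 + v_g ** 2 + 1.0)
    cos_angle = np.clip(num / den, -1.0, 1.0)  # guard against rounding outside [-1, 1]
    return np.degrees(np.mean(np.arccos(cos_angle)))
```

Reported per test sequence, these two aggregate numbers are precisely the kind of black-box statistic whose explanatory power the paper calls into question.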



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Daniel Kondermann (1)
  • Steffen Abraham (2)
  • Gabriel Brostow (3)
  • Wolfgang Förstner (4)
  • Stefan Gehrig (5)
  • Atsushi Imiya (6)
  • Bernd Jähne (7)
  • Felix Klose (8)
  • Marcus Magnor (8)
  • Helmut Mayer (9)
  • Rudolf Mester (10)
  • Tomas Pajdla (11)
  • Ralf Reulke (12)
  • Henning Zimmer (13)
  1. Heidelberg Collaboratory for Image Processing, Interdisciplinary Center for Scientific Computing, University of Heidelberg, Heidelberg, Germany
  2. Robert Bosch GmbH, Germany
  3. University College London, United Kingdom
  4. Bonn University, Germany
  5. Daimler AG, Germany
  6. Chiba University, Japan
  7. Heidelberg University, Germany
  8. Technical University Braunschweig, Germany
  9. Bundeswehr University Munich, Germany
  10. Linköping University (Sweden) and Goethe University, Frankfurt, Germany
  11. Czech Technical University in Prague, Czech Republic
  12. Humboldt University Berlin, Germany
  13. Saarland University, Germany