A Measure for Accuracy Disparity Maps Evaluation

  • Ivan Cabezas
  • Victor Padilla
  • Maria Trujillo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7042)

Abstract

The quantitative evaluation of disparity maps is based on error measures. Among the existing measures, the percentage of Bad Matched Pixels (BMP) is widely adopted. Nevertheless, the BMP considers neither the magnitude of the errors nor the inherent error of stereo systems arising from the inverse relation between depth and disparity. Consequently, different disparity maps with quite similar BMP percentages may produce 3D reconstructions of largely different quality. In this paper, a ground-truth-based measure of errors in estimated disparity maps is presented. It offers advantages over the BMP, since it takes into account both the magnitude of the errors and the inverse relation between depth and disparity. Experimental validations of the proposed measure are conducted using two state-of-the-art quantitative evaluation methodologies. The results show that the proposed measure is better suited than the BMP for evaluating the depth accuracy of estimated disparity maps.
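As a rough illustration of the issue the abstract raises (a minimal sketch, not the measure proposed in the paper), the snippet below computes the standard BMP score and shows how two disparity maps with identical BMP values can yield very different depth errors, since pinhole-stereo depth Z = fB/d is inversely proportional to disparity. The focal length, baseline, and error threshold used here are arbitrary assumptions.

```python
# Illustrative sketch only: standard BMP vs. the depth error it hides.
import numpy as np

def bmp(d_est, d_gt, delta=1.0):
    """Percentage of Bad Matched Pixels: share of pixels whose absolute
    disparity error exceeds the threshold delta (delta is an assumed value)."""
    return np.mean(np.abs(d_est - d_gt) > delta) * 100.0

def depth(d, f=500.0, baseline=0.1):
    """Pinhole-stereo depth from disparity: Z = f * B / d (assumed f, B)."""
    return f * baseline / d

# Ground-truth disparities: two near pixels (large d) and two far pixels (small d).
d_gt = np.array([50.0, 50.0, 5.0, 5.0])

# Two estimates, each wrong by 2 px on a single pixel -> identical BMP ...
d_near_err = np.array([52.0, 50.0, 5.0, 5.0])   # error on a near pixel
d_far_err  = np.array([50.0, 50.0, 7.0, 5.0])   # error on a far pixel

print(bmp(d_near_err, d_gt), bmp(d_far_err, d_gt))        # 25.0  25.0
# ... but very different depth errors, because Z varies as 1/d.
print(np.abs(depth(d_near_err) - depth(d_gt)).max())       # ~0.04 m
print(np.abs(depth(d_far_err)  - depth(d_gt)).max())       # ~2.86 m
```

The same 2-pixel disparity error is nearly harmless on the near surface but corrupts the far surface's reconstructed depth by metres, which is exactly the disparity-magnitude and depth-accuracy blindness of the BMP that the paper addresses.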

Keywords

Computer vision · Corresponding points · Disparity maps · Quantitative evaluation · Error measures

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Ivan Cabezas¹
  • Victor Padilla¹
  • Maria Trujillo¹
  1. Escuela de Ingeniería de Sistemas y Computación, Universidad del Valle, Ciudadela Universitaria Meléndez, Cali, Colombia