Stereo-Matching in the Context of Vision-Augmented Vehicles
Stereo matching accuracy is determined by comparing results with ground truth. However, this does not specify in which kinds of image regions a stereo matcher is more accurate. By identifying feature points, we identify regions where the applied data cost can represent features well. We propose to use feature matchers for identifying sparse matches of high confidence, and to use those matches for guiding a belief-propagation mechanism.
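The guidance idea can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's actual implementation: we use exact min-sum inference on a single scanline with a Potts smoothness prior instead of full 2-D belief propagation, and the function names (`guide_cost_volume`, `chain_bp`) are hypothetical.

```python
import numpy as np

def guide_cost_volume(cost, seeds, strength=10.0):
    # cost: (H, W, D) data-cost volume; seeds: {(y, x): d} sparse,
    # high-confidence disparities (e.g. obtained from a feature matcher).
    # At each seed pixel, every disparity other than the seeded one gets
    # an additive penalty, pulling inference toward the confident match.
    guided = cost.copy()
    D = cost.shape[2]
    for (y, x), d in seeds.items():
        penalty = np.full(D, strength)
        penalty[d] = 0.0
        guided[y, x] += penalty
    return guided

def chain_bp(cost_row, lam=1.0):
    # Exact min-sum inference on one scanline (a chain) with a Potts
    # smoothness prior: one forward and one backward message pass, then
    # a per-pixel belief. Full BP would pass messages in 2-D instead.
    W, D = cost_row.shape
    fwd = np.zeros((W, D))
    bwd = np.zeros((W, D))
    for x in range(1, W):
        m = fwd[x - 1] + cost_row[x - 1]
        fwd[x] = np.minimum(m, m.min() + lam)   # keep disparity, or switch for a Potts penalty
    for x in range(W - 2, -1, -1):
        m = bwd[x + 1] + cost_row[x + 1]
        bwd[x] = np.minimum(m, m.min() + lam)
    return (cost_row + fwd + bwd).argmin(axis=1)
```

In regions where the data cost is ambiguous (near-identical for all disparities), the smoothness prior alone decides the result; injecting even a single confident seed there re-anchors the belief at that pixel and biases its neighbourhood through message passing.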
Extensive experiments, also including a semi-global stereo matcher, illustrate the achieved performance. We also test on data recently made available for a developing country, which come with particular challenges not seen before. Since KITTI ground truth is sparse, ground truth is actually missing for most of the identified feature points. Using our novel stereo matching method (called FlinBPM), we derive our own ground truth and compare it with results obtained by other matching approaches, including our novel method WlinBPM.
Based on this analysis, we were able to identify circumstances in which a census transform fails to define an appropriate data-cost measure. There is no single all-time winner among the considered stereo matchers, but each of the discussed stereo matching strategies offers specific benefits. This might point towards a need for adaptive solutions in vision-augmented vehicles.
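For reference, a census-transform data cost is the Hamming distance between binary codes of local intensity comparisons. The sketch below makes the failure mode concrete: in homogeneous or noisy regions the comparison bits are near-constant or near-random, so the cost no longer discriminates between disparities. Window size and the wrap-around border handling via `np.roll` are simplifications of our own; real implementations pad or crop at the borders.

```python
import numpy as np

def census_transform(img, win=3):
    # 3x3 census: one bit per neighbour, set if that neighbour is darker
    # than the centre pixel (wrap-around borders, a sketch simplification).
    r = win // 2
    code = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neigh = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            code = (code << np.uint64(1)) | (neigh < img).astype(np.uint64)
    return code

def census_data_cost(left, right, d):
    # Data cost C(x, d) = Hamming(census_left(x), census_right(x - d)).
    cl = census_transform(left)
    cr = np.roll(census_transform(right), d, axis=1)  # align right pixel x - d with left pixel x
    diff = cl ^ cr
    cost = np.zeros(diff.shape, dtype=np.uint8)
    while np.any(diff):                               # popcount of the XOR-ed codes
        cost += (diff & np.uint64(1)).astype(np.uint8)
        diff >>= np.uint64(1)
    return cost
```

In a textureless region every comparison `neigh < img` is a near-tie decided by sensor noise, so the census codes are essentially random and the Hamming cost is flat across disparities, which matches the failure circumstances noted above.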
- 1. CCSAD dataset. CIMAT, Guanajuato, http://camaron.cimat.mx/Personal/jbhayet/ccsad-dataset (2015)
- 4. Franke, U., Joos, A.: Real-time stereo vision for urban traffic scene understanding. In: Proceedings of IEEE Symposium on Intelligent Vehicles, pp. 273–278 (2000)
- 5. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2012)
- 7. Hirschmüller, H.: Accurate and efficient stereo processing by semi-global matching and mutual information. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 807–814 (2005)
- 8. Menze, M., Geiger, A.: Object scene flow for autonomous vehicles. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2015)
- 9. Khan, W., Suaste, V., Caudillo, D., Klette, R.: Belief propagation stereo matching compared to iSGM on binocular or trinocular video data. In: Proceedings of IEEE Symposium on Intelligent Vehicles (2013)
- 10. Khan, W., Klette, R.: Stereo accuracy for collision avoidance for varying collision trajectories. In: Proceedings of IEEE Symposium on Intelligent Vehicles (2013)
- 12. Park, S., Jeong, H.: A fast and parallel belief computation structure for stereo matching. In: Proceedings of IASTED European Conference on Internet and Multimedia Systems and Applications, pp. 284–289 (2007)