
Order Statistics of RANSAC and Their Practical Application


Abstract

For statistical analysis purposes, RANSAC is usually treated as a Bernoulli process: each hypothesis is a Bernoulli trial with the outcome outlier-free/contaminated, and a run is a sequence of such trials. However, this model covers only the special case where all outlier-free hypotheses are equally good, e.g. generated from noise-free data. In this paper, we explore a more general model which obviates the noise-free data assumption: we consider RANSAC a random process returning the best hypothesis, \(\delta _1\), among a number of hypotheses drawn from a finite set (\(\Theta \)). We employ the rank of \(\delta _1\) within \(\Theta \) for the statistical characterisation of the output, present a closed-form expression for its exact probability mass function, and demonstrate that the \(\beta \)-distribution is a good approximation thereof. This characterisation leads to two novel termination criteria, which indicate the number of iterations required to come arbitrarily close to the global minimum in \(\Theta \) with a specified probability. We also establish the conditions under which a RANSAC process is statistically equivalent to a cascade of shorter RANSAC processes. These conditions justify a RANSAC scheme with dedicated stages to handle the outliers and the noise separately. We demonstrate the validity of the developed theory via Monte-Carlo simulations and real-data experiments on a number of common geometry estimation problems. We conclude that a two-stage RANSAC process offers similar performance guarantees at a much lower cost than the equivalent one-stage process, and that a cascaded set-up performs better than LO-RANSAC, without the added complexity of a nested RANSAC implementation.
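
As an informal illustration of this order-statistics view, the Python sketch below simulates the rank of the hypothesis returned after \(k\) iterations when hypotheses are drawn uniformly, with replacement, from a finite set of size \(N\), and compares the result against a closed-form expression for that simplified model and a Beta(1, k) approximation of the normalised rank. The set size, iteration count and with-replacement sampling model are illustrative assumptions; the paper derives its exact probability mass function and \(\beta \)-approximation for its own, more general setting.

```python
# Monte-Carlo sketch of the order-statistics view of RANSAC outlined in the
# abstract: each iteration draws a hypothesis uniformly (with replacement) from
# a finite set Theta of size N, and the process returns the best, i.e. lowest-
# rank, hypothesis seen in k iterations. N, k and the sampling model are
# illustrative assumptions, not the paper's exact setting.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 10_000        # size of the finite hypothesis set Theta (assumed)
k = 50            # number of RANSAC iterations (assumed)
trials = 100_000  # Monte-Carlo repetitions

# Rank r1 of the returned hypothesis = minimum of k uniform draws from {1,...,N}.
r1 = rng.integers(1, N + 1, size=(trials, k)).min(axis=1)

# Closed-form pmf of r1 under this simplified sampling model:
#   P(r1 = r) = ((N - r + 1)/N)^k - ((N - r)/N)^k
r = np.arange(1, N + 1)
pmf = ((N - r + 1) / N) ** k - ((N - r) / N) ** k

# Continuous approximation: the normalised rank r1/N is approximately
# Beta(1, k)-distributed (the distribution of the minimum of k uniform variates).
ks_stat, _ = stats.kstest(r1 / N, stats.beta(1, k).cdf)

print("E[r1]  simulated: %.1f   closed form: %.1f   Beta(1,k): %.1f"
      % (r1.mean(), (r * pmf).sum(), N / (k + 1)))
print("Kolmogorov-Smirnov distance to Beta(1,k): %.4f" % ks_stat)
```

Under this simplified model, the normalised rank of the returned hypothesis behaves like the minimum of \(k\) uniform variates, which is Beta(1, k)-distributed; this is consistent with the observation above that a \(\beta \)-distribution approximates the rank distribution well.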

Notes

  1. Since \(r_1\) is the rank in \(\Theta \) (and not in \(\Delta \)), it is not necessarily 1.

  2. In a box-and-whisker diagram, the whiskers indicate the 5th and 95th percentiles, and the box the 25th and 75th. The line through the box is the median, and the mean is marked by a square.

References

  • Arnold, B. C., Balakrishnan, N., & Nagaraja, H. N. (2008). A first course in order statistics. New York: Wiley.

  • Chum, O., & Matas, J. (2005). Matching with PROSAC – progressive sample consensus. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 220–226).

  • Chum, O., & Matas, J. (2008). Optimal randomized RANSAC. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(8), 1472–1482.

  • Chum, O., Matas, J., & Kittler, J. (2003). Locally optimized RANSAC. Lecture notes in computer science (Vol. 2781, pp. 236–243). Berlin: Springer.

  • CMP. (2013). Czech Technical University, Center for Machine Perception Datasets. Retrieved February 6, 2013 from http://cmp.felk.cvut.cz/.

  • Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381–395.

  • Haralick, R. M., Lee, C.-N., Ottenberg, K., & Nolle, M. (1991). Analysis and solutions of the three point perspective pose estimation problem. In Proceedings of CVPR’91 (pp. 592–598).

  • Hartley, R., & Zisserman, A. (2003). Multiple view geometry in computer vision (2nd ed.). Cambridge, UK: Cambridge University Press.

  • Hughes-Hallett, D., McCallum, W., Gleason, A. et al. (1998). Calculus: Single and multivariable (4th ed.). Wiley.

  • İmre, E., Guillemaut, J.-Y., & Hilton, A. (2010). Moving camera registration for multiple camera setups in dynamic scenes. In Proceedings of the 21st British Machine Vision Conference (BMVC) (pp. 1–12).

  • İmre, E., Guillemaut, J.-Y., & Hilton, A. (2011). Calibration of nodal and free-moving cameras in dynamic scenes for post-production. In Proceedings of the 2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT) (pp. 260–267).

  • Johnson, N. L., Kemp, A. W., & Kotz, S. (2005). Univariate discrete distributions (3rd ed.). Hoboken, NJ: Wiley.

  • Johnson, N. L., Kotz, S., & Balakrishnan, N. (1995). Continuous univariate distributions (vol. 2). New York, NY: Wiley.

  • Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110.

  • Nistér, D. (2003). Preemptive RANSAC for live structure and motion estimation. In Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV) (pp. 199–206).

  • Pollefeys, M. (2013). Leuven Castle. Retrieved February 6, 2013, from http://www.cs.unc.edu/marc/data/castlejpg.zip.

  • Powell, M. J. D. (1970). A hybrid method for nonlinear equations. Numerical Methods for Nonlinear Algebraic Equations, 7, 87–114.

  • Raguram, R., Frahm, J., & Pollefeys, M. (2009). Exploiting uncertainty in random sample consensus. In Proceedings of the 12th IEEE International Conference on Computer Vision (ICCV) (pp. 2074–2081).

  • Tordoff, B., & Murray, D. W. (2005). Guided-MLESAC: Faster image transform estimation by using matching priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10), 1523–1535.

  • Torr, P. H. S., & Zisserman, A. (2000). MLESAC: A new robust estimator with application to estimating image geometry. Computer Vision and Image Understanding, 78(1), 138–156.

  • Tran, Q.-H., Chin, T.-J., Carneiro, G., Brown, M. S., & Suter, D. (2012). In defence of RANSAC for outlier rejection in deformable registration (pp. 274–287).

  • VGG. (2013). Oxford Visual Geometry Group Datasets. Retrieved February 6, 2013, from http://www.robots.ox.ac.uk/vgg/data.

  • Wald, A. (1945). Sequential tests of statistical hypotheses. The Annals of Mathematical Statistics, 16(2), 117–186.

Acknowledgments

This work is supported by the Technology Strategy Board (TSB) projects i3Dlive: interactive 3D methods for live-action media (TP/11/CII/6/I/AJ307D), “SYMMM: Synchronising Multimodal Movie Metadata” (11702-76150), and the European Commission ICT 7th Framework Program project “IMPART: Intelligent Management Platform for Advanced Real-Time Media Processes” (316564).

Author information

Corresponding author

Correspondence to Evren İmre.

Additional information

Communicated by K. Ikeuchi.

Appendix: RANSAC and Nonlinear Minimisation

The existing literature evaluates RANSAC as a standalone estimator. In practice, however, RANSAC is often employed as a component within a larger pipeline, to generate an initial estimate for a subsequent nonlinear optimisation stage (Hartley and Zisserman 2003). When used for this purpose, RANSAC only needs to find a hypothesis somewhere within the basin of attraction of a good solution; the nonlinear optimisation stage can then take it to the local minimum. This raises an interesting question: how is the final, post-optimisation performance affected by the choice of RANSAC variant?

As a motivating example, we repeated the experiments in Sect. 6.1 for P2P, P3P, H4 and F7, with the parameters \(\sigma _f^2=1\), \(\sigma _s^2=2.5\times 10^{-5}\) and \(\rho =0.5\), but this time each RANSAC run was followed by a refinement via Powell’s dog leg (PDL) algorithm (Powell 1970). The median validation errors after the application of PDL are reported in Table 4. The results indicate that PDL acts as an equaliser, mostly eliminating the performance differences observed in Sect. 6. However, a poorer initial estimate means more PDL iterations, so using B-RANSAC instead of, for instance, TS-RANSAC involves a trade-off between RANSAC and PDL iterations. Since the minimal solvers utilised in this paper are computationally cheap compared to the operations involved in nonlinear minimisation (e.g. computation of the Jacobian for each correspondence), such a trade-off is not advisable here. However, for more expensive hypothesis generators, it may be beneficial to consider RANSAC and the nonlinear optimisation stage jointly.

Table 4 Median validation error after nonlinear minimisation
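
To make the RANSAC-plus-refinement pipeline concrete, the following sketch runs a minimal RANSAC on a toy 2D line-fitting problem and then refines the winning hypothesis on its consensus set with SciPy's 'dogbox' trust-region solver, which stands in here for Powell's dog leg. The data, noise level, inlier threshold and iteration budget are arbitrary illustrative choices, not the settings of the experiments reported in Table 4.

```python
# Minimal sketch of a two-stage estimate: RANSAC finds a hypothesis in the
# right basin of attraction, then a nonlinear solver refines it on the inliers.
# SciPy's 'dogbox' trust-region method stands in for Powell's dog leg.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Synthetic data: y = 2x + 1 with Gaussian noise, plus ~50% gross outliers.
n = 200
x = rng.uniform(-5, 5, n)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, n)
outliers = rng.random(n) < 0.5
y[outliers] = rng.uniform(-15, 15, outliers.sum())

def residuals(params, xs, ys):
    a, b = params
    return ys - (a * xs + b)

# Stage 1: RANSAC with minimal (2-point) samples; keep the hypothesis with the
# largest consensus set.
best_inliers, best_params = None, None
threshold = 0.3
for _ in range(100):
    i, j = rng.choice(n, size=2, replace=False)
    if x[i] == x[j]:
        continue  # degenerate minimal sample
    a = (y[j] - y[i]) / (x[j] - x[i])
    b = y[i] - a * x[i]
    inliers = np.abs(residuals((a, b), x, y)) < threshold
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers, best_params = inliers, (a, b)

# Stage 2: nonlinear refinement of the RANSAC estimate on its consensus set.
fit = least_squares(residuals, best_params,
                    args=(x[best_inliers], y[best_inliers]), method="dogbox")
print("RANSAC estimate :", np.round(best_params, 3))
print("refined estimate:", np.round(fit.x, 3), "after", fit.nfev, "evaluations")
```

Any hypothesis that lands in the correct basin is refined to essentially the same minimum, mirroring the equalising effect of PDL reported above; a poorer starting point merely costs additional solver iterations, visible here through the reported number of function evaluations.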

Cite this article

İmre, E., Hilton, A. Order Statistics of RANSAC and Their Practical Application. Int J Comput Vis 111, 276–297 (2015). https://doi.org/10.1007/s11263-014-0745-1
