
Automatic Line Extraction in Uncalibrated Omnidirectional Cameras with Revolution Symmetry


Abstract

Revolution symmetry is a realistic assumption for modelling the majority of catadioptric and dioptric cameras. In central systems it can be described by a projection model based on radially symmetric distortion. In these systems straight lines project onto curves called line-images. These curves have, in general, more than two degrees of freedom, and their shape strongly depends on the particular camera configuration. Therefore, the existing line-extraction methods for this kind of omnidirectional camera require the camera calibration, in contrast with the perspective case, where the calibration does not affect the shape of the projected line. However, this drawback can be turned into an advantage, because the shape of the line-images can be exploited for self-calibration. In this paper we present a novel method to extract line-images in uncalibrated omnidirectional images that is valid for radially symmetric central systems. We propose using the plumb-line constraint to find closed-form solutions for different types of camera systems, dioptric or catadioptric. The inputs of the proposed method are points belonging to the line-images and their intensity gradients. The gradient information reduces the number of points needed in the minimal solution, improving the accuracy and robustness of the estimation. The scheme is used in a line-image extraction algorithm that obtains lines from uncalibrated omnidirectional images without any assumption about the scene. The algorithm is evaluated with synthetic and real images, showing good performance. The results of this work have been implemented in an open-source Matlab toolbox for evaluation and research purposes.




Notes

  1. In the case of an orthogonal system the derivative of the function at \({\hat{r}} = {\hat{r}}_{vl}\) is \(\infty \), meaning that this is not a suitable point for linearization.

  2. webdiis.unizar.es/%7Ebermudez/suppMat.html.

  3. webdiis.unizar.es/%7Ebermudez/toolbox.html.


Acknowledgments

This work was supported by the Spanish project VINEA DPI2012-31781 and FEDER funds. The first author was supported by the FPU program AP2010-3849. Thanks to J. P. Barreto from ISR Coimbra for the set of high-resolution paracatadioptric images.


Corresponding author

Correspondence to J. Bermudez-Cameo.

Additional information

Communicated by C. Schnörr.

Appendices

Appendix 1: Polynomials Describing Line-Images

Some of the line-images in central systems with revolution symmetry can be expressed as polynomials. In this Appendix we give the polynomial description of these line-images for catadioptric, stereographic-fisheye, orthogonal-fisheye and equisolid-fisheye systems.

Catadioptric and Stereographic-Fisheye

$$\begin{aligned} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {\hat{x}}^2&{\hat{x}} {\hat{y}}&{\hat{y}}^2&{\hat{x}}&{\hat{y}}&1 \end{array}\right) {\varvec{\varOmega }}_{cata}=0 \end{aligned}$$
(34)

where

$$\begin{aligned} {\varvec{\varOmega }}_{cata} = \left( \begin{array}{c} n_x^2 \sin ^2 \chi - n_z^2 \cos ^2 \chi \\ 2 n_x n_y \sin ^2 \chi \\ n_y^2 \sin ^2 \chi - n_z^2 \cos ^2 \chi \\ 2 f \sin \chi \, n_x n_z \\ 2 f \sin \chi \, n_y n_z \\ f^2 \sin ^2 \chi \, n_z^2 \\ \end{array}\right) \end{aligned}$$
(35)

Orthogonal-Fisheye

$$\begin{aligned} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {\hat{x}}^2&{\hat{x}} {\hat{y}}&{\hat{y}}^2&{\hat{x}}&{\hat{y}}&1 \end{array}\right) {\varvec{\varOmega }}_{ortho}=0 \end{aligned}$$
(36)

where

$$\begin{aligned} {\varvec{\varOmega }}_{ortho} = \left( \begin{array}{c} -n_x^2 - n_z^2 \\ -2 n_x n_y \\ -n_y^2 - n_z^2 \\ 0 \\ 0 \\ f^2 n_z^2 \\ \end{array}\right) \end{aligned}$$
(37)

Equisolid-Fisheye

$$\begin{aligned} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} {\hat{x}}^4&{\hat{x}}^3 {\hat{y}}&{\hat{x}}^2 {\hat{y}}^2&{\hat{x}}^2&{\hat{x}} {\hat{y}}^3&{\hat{x}} {\hat{y}}&{\hat{y}}^4&{\hat{y}}^2&1 \end{array}\right) {\mathbf {Q}}=0 \end{aligned}$$
(38)

where

$$\begin{aligned} {\mathbf {Q}}=\left( \begin{array}{c} - 4 \left( n_x^2 + n_z^2 \right) \\ -8 n_x n_y \\ - 4 n_x^2 - 4 n_y^2 - 8 n_z^2 \\ 4 f^2 \left( n_x^2 + n_z^2\right) \\ -8 n_x n_y \\ 8 f^2 n_x n_y \\ - 4 \left( n_y^2 + n_z^2 \right) \\ 4 f^2 \left( n_y^2 + n_z^2 \right) \\ -f^4 n_z^2\\ \end{array}\right) \end{aligned}$$
(39)
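
As a numerical illustration, the following MATLAB sketch (not taken from the paper's toolbox; the normal \({\mathbf {n}}\), the calibration values \(\chi \), \(f\) and the test point are made-up numbers) builds the three coefficient vectors above and evaluates the corresponding monomial vectors. A point lying on the line-image yields a zero residual.

```matlab
% Sketch with assumed values: evaluate the line-image polynomials (34)-(39).
n = [0.3; -0.5; 0.8];  n = n/norm(n);   % hypothetical line normal
chi = 1.2;  f = 800;                    % hypothetical calibration parameters
nx = n(1);  ny = n(2);  nz = n(3);
s = sin(chi);  c = cos(chi);

% Catadioptric / stereographic-fisheye conic, Eq. (35)
Omega_cata = [nx^2*s^2 - nz^2*c^2; 2*nx*ny*s^2; ny^2*s^2 - nz^2*c^2; ...
              2*f*s*nx*nz; 2*f*s*ny*nz; f^2*s^2*nz^2];

% Orthogonal-fisheye conic, Eq. (37)
Omega_ortho = [-nx^2 - nz^2; -2*nx*ny; -ny^2 - nz^2; 0; 0; f^2*nz^2];

% Equisolid-fisheye quartic, Eq. (39)
Q = [-4*(nx^2 + nz^2); -8*nx*ny; -4*nx^2 - 4*ny^2 - 8*nz^2; ...
      4*f^2*(nx^2 + nz^2); -8*nx*ny; 8*f^2*nx*ny; ...
     -4*(ny^2 + nz^2); 4*f^2*(ny^2 + nz^2); -f^4*nz^2];

x = 120;  y = -35;                      % hypothetical image point
conic_mon   = [x^2, x*y, y^2, x, y, 1];                  % monomials of (34), (36)
quartic_mon = [x^4, x^3*y, x^2*y^2, x^2, x*y^3, x*y, y^4, y^2, 1];  % of (38)
residuals = [conic_mon*Omega_cata, conic_mon*Omega_ortho, quartic_mon*Q]
```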

Appendix 2: Computing the Focal Distance in Hypercatadioptric Systems

In this Appendix we develop a way of computing the focal length in hypercatadioptric systems, which allows us to recover both calibration parameters. Instead of using the plumb-line constraint, or a combination of the plumb-line and gradient constraints, the normal \({\mathbf {n}}\) can be computed from a pair of points using the gradient constraint (23).

$$\begin{aligned} \left( {\begin{array}{c@{\quad }c@{\quad }c} {-\nabla I_{y1}}&{}{\pm \nabla I_{x1}}&{}{\frac{{\partial {\hat{\alpha }} _1 }}{{\partial {\hat{r}}}}\frac{1}{{\hat{r}}_1} \left( {\hat{x}}_1\nabla I_{y1} - {\hat{y}}_1\nabla I_{x1} \right) } \\ {-\nabla I_{y2}}&{}{\pm \nabla I_{x2}}&{}{\frac{{\partial {\hat{\alpha }} _2 }}{{\partial {\hat{r}}}}\frac{1}{{\hat{r}}_2} \left( {\hat{x}}_2\nabla I_{y2} - {\hat{y}}_2\nabla I_{x2} \right) } \\ \end{array}} \right) {\mathbf {n}}=\left( {\begin{array}{c} 0 \\ 0 \\ \end{array}} \right) \end{aligned}$$
(40)
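
Since (40) is a homogeneous system whose matrix has two rows, \({\mathbf {n}}\) spans its null space and can be obtained, up to scale, as the cross product of the two rows. A minimal MATLAB sketch follows, with placeholder gradients, point locations and derivatives \(\partial {\hat{\alpha }}/\partial {\hat{r}}\) (illustrative values, not real data); the \(\pm \) sign in (40) is fixed here to \(+\) and must be chosen consistently for both rows.

```matlab
% Sketch of solving (40): n is the cross product of the two rows (up to scale).
g1 = [0.9; 0.4];   p1 = [150; -20];  da1 = 0.002;  % gradient, point, dalpha/dr
g2 = [-0.2; 1.0];  p2 = [ 90;  60];  da2 = 0.003;  % (placeholder values)
row = @(g,p,da) [-g(2), g(1), da/norm(p)*(p(1)*g(2) - p(2)*g(1))];
A = [row(g1,p1,da1); row(g2,p2,da2)];   % the 2x3 matrix of (40)
n = cross(A(1,:), A(2,:)).';            % homogeneous solution of A*n = 0
n = n/norm(n)                           % normalized line normal
```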

In practice the solution is noisy and offers no real advantage with respect to the two-point location approach (11).

The previous constraint (40), using two points and their gradients, is enough to define a line-image. Therefore, when a third point is added with (23) to (40), one of the equations can be expressed as a combination of the other two. In practice this means that

$$\begin{aligned} \varpi _1 \frac{{\partial {\hat{\alpha }} _1 }}{{\partial {\hat{r}}}}\frac{1}{{{\hat{r}}_1 }} + \varpi _2 \frac{{\partial {\hat{\alpha }} _2 }}{{\partial {\hat{r}}}}\frac{1}{{{\hat{r}}_2 }} + \varpi _3 \frac{{\partial {\hat{\alpha }} _3 }}{{\partial {\hat{r}}}}\frac{1}{{{\hat{r}}_3 }} = 0 \end{aligned}$$
(41)

where

$$\begin{aligned} \varpi _1&= \left( {\nabla I_{x2}\nabla I_{y3}-\nabla I_{x3} \nabla I_{y2}}\right) \left( {\hat{x}}_1\nabla I_{y1}-{\hat{y}}_1\nabla I_{x1}\right) \end{aligned}$$
(42)
$$\begin{aligned} \varpi _2&= \left( {\nabla I_{x3}\nabla I_{y1}-\nabla I_{x1} \nabla I_{y3}}\right) \left( {\hat{x}}_2\nabla I_{y2}-{\hat{y}}_2\nabla I_{x2}\right) \end{aligned}$$
(43)
$$\begin{aligned} \varpi _3&= \left( {\nabla I_{x1}\nabla I_{y2}-\nabla I_{x2} \nabla I_{y1}}\right) \left( {\hat{x}}_3\nabla I_{y3}-{\hat{y}}_3\nabla I_{x3}\right) \end{aligned}$$
(44)

This constraint is similar to (13) but uses gradients (notice that location information is also involved).
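
A small MATLAB sketch of (42)–(44), again with placeholder gradients and point locations rather than real data:

```matlab
% Sketch of the coefficients (42)-(44) for three points on a line-image.
G = [0.9 0.4; -0.2 1.0; 0.5 -0.8];   % rows: gradients (Ix, Iy) at the 3 points
P = [150 -20; 90 60; -40 110];       % rows: point coordinates (x, y)
cr   = @(a,b) a(1)*b(2) - a(2)*b(1); % 2D cross product of two gradients
tang = @(p,g) p(1)*g(2) - p(2)*g(1); % x*Iy - y*Ix at a point
varpi = [cr(G(2,:),G(3,:))*tang(P(1,:),G(1,:));   % varpi_1, Eq. (42)
         cr(G(3,:),G(1,:))*tang(P(2,:),G(2,:));   % varpi_2, Eq. (43)
         cr(G(1,:),G(2,:))*tang(P(3,:),G(3,:))]   % varpi_3, Eq. (44)
```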

As noted before, gradient information is noisier than location information, so there is no advantage in using this constraint instead of (13). However, there is a case in which this constraint is useful. The constraint (13) is solved for each system in Table 3. Most of the devices considered have a single calibration parameter defining the distortion; this is the case for the equiangular, stereographic, orthogonal and equisolid systems. The parabolic case is defined by two parameters, \(f\) and \(p\), but the coupled parameter \(\hat{r}_{vl} = fp\) can be used instead. In these cases the constraint involving three points can be used to estimate the calibration of the system. However, in the hyperbolic case the two parameters \(\chi \) and \(f\) cannot be coupled, and only one of them can be estimated from this constraint.

By contrast, when simplifying (41) for hypercatadioptric systems we find that the mirror parameter \(\chi \) is not involved:

$$\begin{aligned} \varpi _1 \frac{1}{{\sqrt{{\hat{r}}_1 ^2 + f^2 } }} + \varpi _2 \frac{1}{{\sqrt{{\hat{r}}_2 ^2 + f^2 } }} + \varpi _3 \frac{1}{{\sqrt{{\hat{r}}_3 ^2 + f^2 } }} = 0 \end{aligned}$$
(45)

As a consequence, the focal distance \(f\) can be computed from the gradient orientations and the locations of three points lying on a line-image.

Equation (45) can be expressed as a polynomial of degree 8 in \(f\) (quartic in \(f^2\)),

$$\begin{aligned} \sum \limits _{m = 0}^4 \beta _m f^{2m} = 0 \end{aligned}$$
(46)

where \(\beta _m = \beta _{m,123} + \beta _{m,213} + \beta _{m,312}\) and

$$\begin{aligned} \beta _{0,ijk}&= - \varpi _i^4 {\hat{r}}_j^4 {\hat{r}}_k^4 + 2 {\hat{r}}_i^4 {\hat{r}}_j^2 {\hat{r}}_k^2 \varpi _j^2 \varpi _k^2\end{aligned}$$
(47)
$$\begin{aligned} \beta _{1,ijk}&= 2\left( 2 {\hat{r}}_i^2 {\hat{r}}_j^2 {\hat{r}}_k^2 + {\hat{r}}_i^4 \left( \varpi _j^2 + \varpi _k^2 \right) \right) \varpi _j^2 \varpi _k^2 \nonumber \\&\quad -\,2 \varpi _i^4 {\hat{r}}_j^2 {\hat{r}}_k^2 \left( {\hat{r}}_j^2+{\hat{r}}_k^2 \right) \end{aligned}$$
(48)
$$\begin{aligned} \beta _{2,ijk}&= {\hat{r}}_j^2 {\hat{r}}_k^2 \left( 4 \varpi _i^2 \left( \varpi _j^2 + \varpi _k^2 - \varpi _i^2 \right) + 2 \varpi _j^2 \varpi _k^2 \right) \nonumber \\&\quad -\,{\hat{r}}_i^4 \left( \varpi _j^2-\varpi _k^2 \right) ^2 \end{aligned}$$
(49)
$$\begin{aligned} \beta _{3,ijk}&= {\hat{r}}_i^2 \left( \varpi _i^2 \left( \varpi _j^2+\varpi _k^2 \right) + 2 \varpi _j^2 \varpi _k^2 - \left( \varpi _j^4 + \varpi _k^4 \right) \right) \end{aligned}$$
(50)
$$\begin{aligned} \beta _{4,ijk}&= - \varpi _i^4 + 2 \varpi _j^2 \varpi _k^2 \end{aligned}$$
(51)

This equation has at most four valid solutions, because negative values of \(f\) have no physical meaning.
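
Assuming the coefficients \(\beta _0,\ldots ,\beta _4\) have already been assembled from (47)–(51), the admissible focal distances are the square roots of the real positive roots of (46) seen as a quartic in \(f^2\). A MATLAB sketch with illustrative coefficient values:

```matlab
% Sketch: solve (46) for f given beta_0..beta_4 (illustrative numbers only).
beta = [-2.1e4, 3.4e3, 5.0e2, -8.7, 0.12];   % beta_0 ... beta_4
x = roots(fliplr(beta));                     % roots in the variable x = f^2
x = x(abs(imag(x)) < 1e-9 & real(x) > 0);    % keep real positive roots only
f_candidates = sqrt(real(x))                 % at most four focal distances
```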

Appendix 3: Coefficients for the Equisolid-Fisheye Plumb-Line Equation

In this Appendix we present the coefficients of the 16th-degree polynomial used to solve the equisolid plumb-line equation (19).

$$\begin{aligned} \sum \limits _{m = 0}^8 \omega _m {\hat{r}}_{vl}^{2m} = 0 \end{aligned}$$
(52)

where \(\omega _m = \omega _{m,123} + \omega _{m,213} + \omega _{m,312}\) and

$$\begin{aligned} \omega _{0,ijk}&= -l_i^4 {\hat{r}}_i^8 {\hat{r}}_j^4 {\hat{r}}_k^4 + 2 {\hat{r}}_i^4 l_j^2 l_k^2 {\hat{r}}_j^6 {\hat{r}}_k^6\end{aligned}$$
(53)
$$\begin{aligned} \omega _{1,ijk}&= 4 \left( {\hat{r}}_i^2 \left( {\hat{r}}_j^2 + {\hat{r}}_k^2 \right) + {\hat{r}}_j^2 {\hat{r}}_k^2 \right) \left( l_i^4 {\hat{r}}_i^6 {\hat{r}}_k^2 -2 l_j^2 l_k^2 {\hat{r}}_i^2 {\hat{r}}_j^4 {\hat{r}}_k^4 \right) \end{aligned}$$
(54)
$$\begin{aligned} \omega _{2,ijk}&= -16 l_i^4 {\hat{r}}_i^4 \left( {\hat{r}}_i^2 {\hat{r}}_j^2 {\hat{r}}_k^2 \left( {\hat{r}}_j^2 +{\hat{r}}_k^2 \right) + {\hat{r}}_i^4 {\hat{r}}_j^2 {\hat{r}}_k^2 \right) \nonumber \\&\quad +\,32 l_j^2 l_k^2 {\hat{r}}_j^2 {\hat{r}}_k^2 \left( {\hat{r}}_i^2 {\hat{r}}_j^2 {\hat{r}}_k^2 \left( {\hat{r}}_j^2 +{\hat{r}}_k^2 \right) + {\hat{r}}_i^4 {\hat{r}}_j^2 {\hat{r}}_k^2 \right) \nonumber \\&\quad -\,2 l_i^4 {\hat{r}}_i^4 \left( 2 {\hat{r}}_i^4 \left( {\hat{r}}_j^4 + {\hat{r}}_k^4 \right) + 3 {\hat{r}}_j^4 {\hat{r}}_k^4 \right) \nonumber \\&\quad +\,10 l_j^2 l_k^2 {\hat{r}}_j^2 {\hat{r}}_k^2 {\hat{r}}_i^2 \left( {\hat{r}}_j^4 + {\hat{r}}_k^4 \right) + 8 l_j^2 l_k^2 {\hat{r}}_j^6 {\hat{r}}_k^6 \end{aligned}$$
(55)
$$\begin{aligned} \omega _{3,ijk}&= 16 l_i^4 {\hat{r}}_i^6 \left( {\hat{r}}_i^2 {\hat{r}}_j^2 + {\hat{r}}_i^2 {\hat{r}}_k^2 + {\hat{r}}_j^4 + {\hat{r}}_k^4 + 4 {\hat{r}}_j^2 {\hat{r}}_k^2 \right) \nonumber \\&\quad -\,8 l_i^2 {\hat{r}}_i^6 \left( 4 \left( l_j^2 {\hat{r}}_j^4 + l_k^2 {\hat{r}}_k^4 \right) + 5 {\hat{r}}_j^2 {\hat{r}}_k^2 \left( l_j^2 + l_k^2 \right) \right) \nonumber \\&\quad -\,8 {\hat{r}}_i^2 {\hat{r}}_j^4 {\hat{r}}_k^4\left( 5 l_i^2\left( l_j^2 +l_k^2\right) + 16 l_j^2 l_k^2 \right) \nonumber \\&\quad +\,4 l_i^4 \left( 6 {\hat{r}}_i^4 {\hat{r}}_j^2 {\hat{r}}_k^2 \left( {\hat{r}}_j^2 + {\hat{r}}_k^2 \right) + {\hat{r}}_i^2 {\hat{r}}_j^4 {\hat{r}}_k^4 \right) \nonumber \\&\quad -\,4 l_i^2 {\hat{r}}_i^6 \left( l_j^2 {\hat{r}}_k^4 + l_k^2 {\hat{r}}_j^4 \right) \end{aligned}$$
(56)
$$\begin{aligned} \omega _{4,ijk}&= 128 l_j^2 l_k^2 {\hat{r}}_j^4 {\hat{r}}_k^4 + 160 {\hat{r}}_i^4 l_i^2 {\hat{r}}_j^2 {\hat{r}}_k^2 \left( l_j^2 + l_k^2 \right) \nonumber \\&\quad -\,l_i^4 \left( {\hat{r}}_j^4 {\hat{r}}_k^4 + 64 {\hat{r}}_i^6 \left( {\hat{r}}_j^2+{\hat{r}}_k^2\right) \right) \nonumber \\&\quad -\,16 l_i^4\left( {\hat{r}}_i^8 + {\hat{r}}_i^2 {\hat{r}}_j^2 {\hat{r}}_k^2 \left( {\hat{r}}_j^2+{\hat{r}}_k^2\right) \right) \nonumber \\&\quad +\,4 {\hat{r}}_i^6 l_i^2 \left( 4\left( l_j^2 {\hat{r}}_k^2 + l_k^2 {\hat{r}}_j^2 \right) + 10 \left( l_j^2 {\hat{r}}_j^2 + l_k^2 {\hat{r}}_k^2 \right) \right) \nonumber \\&\quad +\,{\hat{r}}_i^4 l_j^2 l_k^2 \left( 16 \left( {\hat{r}}_j^2 + {\hat{r}}_k^2 \right) + 50 {\hat{r}}_j^2 {\hat{r}}_k^2 \right) \nonumber \\&\quad -\,24 l_i^4 {\hat{r}}_i^4 \left( \left( {\hat{r}}_j^4+{\hat{r}}_k^4 \right) + 4 {\hat{r}}_j^2 {\hat{r}}_k^2 \right) \end{aligned}$$
(57)
$$\begin{aligned} \omega _{5,ijk}&= 64 l_i^2 {\hat{r}}_i^2 \left( l_i^2 {\hat{r}}_i^4 + l_i^2 {\hat{r}}_j^2 {\hat{r}}_k^2 - {\hat{r}}_i^2 \left( l_j^2 {\hat{r}}_k^2 + l_k^2 {\hat{r}}_j^2 \right) \right) \nonumber \\&\quad +\,32 l_i^2 {\hat{r}}_i^2 \left( 3 l_i^2 {\hat{r}}_i^2 \left( {\hat{r}}_j^2 +{\hat{r}}_k^2 \right) - 5 {\hat{r}}_i^2 \left( l_j^2 {\hat{r}}_j^2 +l_k^2 {\hat{r}}_k^2 \right) \right) \nonumber \\&\quad -\,200 l_j^2 l_k^2 {\hat{r}}_i^2 {\hat{r}}_j^2 {\hat{r}}_k^2 + 16 l_i^2 {\hat{r}}_i^2 \left( l_i^2 \left( {\hat{r}}_j^4 + {\hat{r}}_k^4 \right) - {\hat{r}}_i^4 \left( l_j^2 + l_k^2 \right) \right) \nonumber \\&\quad +\,\left( {\hat{r}}_j^2 +{\hat{r}}_k^2 \right) \left( 4 l_i^4 {\hat{r}}_j^2 {\hat{r}}_k^2 - 20 {\hat{r}}_i^4 l_j^2 l_k^2 \right) \end{aligned}$$
(58)
$$\begin{aligned} \omega _{6,ijk}&= 64 l_i^2 {\hat{r}}_i^2 \left( - l_i^2 \left( {\hat{r}}_j^2 + {\hat{r}}_k^2 \right) + {\hat{r}}_i^2 \left( l_j^2+l_k^2 \right) \right) \nonumber \\&\quad +\,16 l_i^2 {\hat{r}}_i^2 \left( -6 l_i^2 {\hat{r}}_i^2 + 5 \left( l_j^2 {\hat{r}}_k^2 + l_k^2 {\hat{r}}_j^2 \right) \right) \nonumber \\&\quad +\,4 \left( 50 l_j^2 {\hat{r}}_j^2 l_k^2 {\hat{r}}_k^2 - 4 l_i^4 {\hat{r}}_j^2 {\hat{r}}_k^2 - l_i^4 \left( {\hat{r}}_j^4+{\hat{r}}_k^4 \right) + 2 {\hat{r}}_i^4 l_j^2 l_k^2\right) \end{aligned}$$
(59)
$$\begin{aligned} \omega _{7,ijk}&= 16 \left( 4 l_i^4 {\hat{r}}_i^2 + l_i^4 {\hat{r}}_j^2 + l_i^4 {\hat{r}}_k^2 -2 {\hat{r}}_i^2 l_j^2 l_k^2 - 5 l_i^2 {\hat{r}}_i^2 \left( l_j^2 + l_k^2 \right) \right) \end{aligned}$$
(60)
$$\begin{aligned} \omega _{8,ijk}&= 16 \left( -l_i^4 + 2 l_j^2 l_k^2 \right) \end{aligned}$$
(61)
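
The polynomial (52) is handled exactly as (46) in Appendix 2: assemble \(\omega _0,\ldots ,\omega _8\) from (53)–(61) with the symmetrization \(\omega _m = \omega _{m,123} + \omega _{m,213} + \omega _{m,312}\), and keep the square roots of the real positive roots in \({\hat{r}}_{vl}^2\). A brief MATLAB sketch with illustrative coefficients:

```matlab
% Sketch: solve (52) for r_vl given omega_0..omega_8 (illustrative numbers).
omega = [3.2e2, -1.5e2, 40, -9.1, 1.3, -0.11, 6e-3, -1.6e-4, 1.7e-6];
x = roots(fliplr(omega));                  % roots in the variable x = r_vl^2
x = x(abs(imag(x)) < 1e-9 & real(x) > 0);  % keep real positive roots only
r_vl_candidates = sqrt(real(x))            % candidate values for r_vl
```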


Cite this article

Bermudez-Cameo, J., Lopez-Nicolas, G. & Guerrero, J.J. Automatic Line Extraction in Uncalibrated Omnidirectional Cameras with Revolution Symmetry. Int J Comput Vis 114, 16–37 (2015). https://doi.org/10.1007/s11263-014-0792-7

