
Camera Calibration

3-D Computer Vision

Abstract

The purpose of camera calibration is to use the feature-point coordinates (X, Y, Z) of a given object in 3-D space and its coordinates (x, y) in the 2-D image space to calculate the internal and external parameters of the camera, thereby establishing the quantitative relationship between the objective scene and the captured image. This chapter introduces the basic linear camera model, gives a typical calibration procedure, and discusses the internal and external parameters of the camera. It then presents typical nonlinear camera models, analyzes the various types of distortion in detail, and summarizes the criteria and results for classifying calibration methods. The chapter also introduces the traditional camera calibration method, analyzes its characteristics, describes an example of a typical two-stage calibration method, and analyzes an improved method. Finally, it introduces the self-calibration method (including the calibration method based on active vision); in addition to analyzing the advantages and disadvantages of this type of method, a simple calibration method is specifically described.
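The quantitative relationship mentioned above can be sketched with a minimal linear (pinhole) camera model: a world point is rotated and translated into the camera frame by the external parameters (R, t), then projected through the intrinsic matrix K. All numeric values below (focal lengths, principal point, and camera pose) are illustrative assumptions, not parameters from the chapter:

```python
import numpy as np

# Assumed intrinsic matrix K: focal lengths fx, fy and principal point (cx, cy)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed external parameters: identity rotation, camera 5 units from the origin
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

def project(W, K, R, t):
    """Project a 3-D world point W = (X, Y, Z) to image coordinates (x, y)."""
    p = K @ (R @ W + t)    # homogeneous image point
    return p[:2] / p[2]    # perspective division

x, y = project(np.array([0.0, 0.0, 0.0]), K, R, t)
print(x, y)  # the world origin projects onto the principal point (320, 240)
```

Calibration is the inverse problem: given many such (X, Y, Z) ↔ (x, y) pairs, recover K, R, and t.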




Self-Test Questions

The following questions include both single-choice and multiple-choice questions, so every option must be judged individually.

  1. 2.1

    Linear Camera Model

    1. 2.1.1

      The camera calibration method introduced in Sect. 2.1 needs to obtain more than 6 spatial points with known world coordinates because (·)

      1. (a)

        There are 12 unknowns in the camera calibration equations.

      2. (b)

        The rotation and the translation of the camera each need three parameters to describe.

      3. (c)

        The world coordinates are 3-D, and the image plane coordinates are 2-D

      4. (d)

        The transformation matrix from real-world coordinates to image plane coordinates is a 3 × 3 matrix

    [Hint] Note that some parameters are related.
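The parameter count behind this question can be illustrated with a direct linear transformation (DLT) sketch: the 3 × 4 projection matrix has 12 entries (11 independent up to scale), and every space point with known world coordinates contributes two linear equations, so at least six points are needed. The synthetic camera and the seven non-coplanar points below are illustrative assumptions:

```python
import numpy as np

# Assumed synthetic projection matrix P = K [R | t] (illustrative values)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])

# Seven non-coplanar space points with known world coordinates
W = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)

A = []
for X in W:
    Xh = np.append(X, 1.0)
    u = P_true @ Xh
    x, y = u[0] / u[2], u[1] / u[2]
    # Each point correspondence yields two homogeneous linear equations
    # in the 12 unknown entries of P
    A.append(np.concatenate([Xh, np.zeros(4), -x * Xh]))
    A.append(np.concatenate([np.zeros(4), Xh, -y * Xh]))

# Least-squares solution: right singular vector of the smallest singular value
_, _, Vt = np.linalg.svd(np.array(A))
P_est = Vt[-1].reshape(3, 4)

# P is only determined up to scale; normalize both before comparing
P_est /= np.linalg.norm(P_est)
P_ref = P_true / np.linalg.norm(P_true)
if P_est[0, 0] * P_ref[0, 0] < 0:
    P_est = -P_est
print(np.allclose(P_est, P_ref))  # the matrix is recovered up to scale
```

Because rotation has only three degrees of freedom, the entries of the recovered matrix are not all independent, which is what the hint alludes to.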

    2. 2.1.2

      In camera calibration, (·).

      1. (a)

        The relationship between the camera’s internal parameters and external parameters can be established.

      2. (b)

        The obtained parameters can also be determined by the measurement of the camera.

      3. (c)

        It is needed to determine both the internal parameters of the camera and the external parameters of the camera.

      4. (d)

        It is to determine the transformation type from a given world point W(X, Y, Z) to its image plane coordinates (x, y).

    [Hint] Consider the purpose and specific steps of camera calibration.

    3. 2.1.3

      In camera calibration, (·).

      1. (a)

        The internal parameters must be determined first and then the external parameters.

      2. (b)

        The external parameters must be determined first and then the internal parameters.

      3. (c)

        The internal parameters and external parameters must be determined at the same time.

      4. (d)

        The internal parameters and external parameters can be determined at the same time.

    [Hint] Pay attention to the exact meaning and subtle differences of different text descriptions.

  2. 2.2

    Non-linear Camera Model

    1. 2.2.1

      Due to lens distortion, (·).

      1. (a)

        The projection from 3-D space to 2-D image plane cannot be described by a linear model.

      2. (b)

        The distortion error generated will be more obvious near the optical axis.

      3. (c)

        The distortion error generated in the image plane will be more obvious at the place which is far from the center.

      4. (d)

        The object point in the 3-D space can be determined according to the pixel coordinates of the 2-D image plane.

    [Hint] Distortion causes the projection relationship to no longer be a linear projection relationship.

    2. 2.2.2

      For radial distortion, (·).

      1. (a)

        The deviation caused is often symmetrical about the main optical axis of the camera lens.

      2. (b)

        The positive one is called barrel distortion.

      3. (c)

        The negative one is called pincushion distortion.

      4. (d)

        The barrel distortion caused in the image plane is more obvious at a place away from the optical axis.

    [Hint] The radial distortion is mainly caused by the curvature error of the lens surface.

    3. 2.2.3

      In lens distortion, (·).

      1. (a)

        The distortion of the thin prism only causes radial deviation.

      2. (b)

        The eccentric distortion originates from the discrepancy between the optical center and geometric center of the optical system.

      3. (c)

        The tangential distortion mainly comes from the non-collinear optical centers of the lens group.

      4. (d)

        The centrifugal distortion includes both radial distortion and tangential distortion.

    [Hint] Some distortions are combined distortions.

    4. 2.2.4

      According to the non-linear camera model, in the conversion from 3-D world coordinates to computer image coordinates, (·).

      1. (a)

        The non-linearity comes from the lens radial distortion coefficient k.

      2. (b)

        The non-linearity comes from the distance between a point in the image and the optical axis point of the lens.

      3. (c)

        The non-linearity comes from the image plane coordinates (x’, y’) being affected by the lens radial distortion.

      4. (d)

        The non-linearity comes from the actual image plane coordinates (x*, y*) being affected by the lens radial distortion.

    [Hint] Not every step of the non-linear camera model is non-linear.
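The role of the radial distortion coefficient k can be sketched with the common first-order model x* = x′(1 + k r²), where r is the distance of the point from the optical axis. Because this relation is non-linear, undistortion has no closed form and is done iteratively. The coefficient value below is an illustrative assumption:

```python
def distort(x, y, k):
    """First-order radial distortion: ideal (x', y') -> actual (x*, y*)."""
    r2 = x * x + y * y
    return x * (1 + k * r2), y * (1 + k * r2)

def undistort(xs, ys, k, iters=20):
    """Invert the distortion by fixed-point iteration (no closed form)."""
    x, y = xs, ys                    # initial guess: the distorted coordinates
    for _ in range(iters):
        r2 = x * x + y * y
        x, y = xs / (1 + k * r2), ys / (1 + k * r2)
    return x, y

k = -0.1                             # assumed coefficient (small distortion)
xs, ys = distort(0.5, 0.3, k)
x, y = undistort(xs, ys, k)
print(round(x, 6), round(y, 6))      # recovers the ideal point (0.5, 0.3)
```

Note that the deviation grows with r², which is why distortion is most visible far from the image center.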

  3. 2.3

    Traditional Calibration Methods

    1. 2.3.1

      According to Fig. 2.7, (·).

      1. (a)

        The calibration process is consistent with the imaging process.

      2. (b)

        There are coefficients to be calibrated for each step of the coordinate system conversion.

      3. (c)

        There are more internal parameters to be calibrated than external parameters.

      4. (d)

        There are always four steps in the conversion from the world coordinate system to the computer image coordinate system.

    [Hint] Pay attention to the meaning of each step of conversion and content.

    2. 2.3.2

      In the two-stage calibration method, (·).

      1. (a)

        Calculate R and T in Step 1, and calculate other parameters in Step 2.

      2. (b)

        Calculate all external parameters in Step 1, and calculate all internal parameters in Step 2.

      3. (c)

        The k corresponding to radial distortion is always calculated in Step 2.

      4. (d)

        Uncertain image scale factor μ is always calculated in Step 1.

    [Hint] Uncertain image scale factor μ may also be known in advance.

    3. 2.3.3

      When improving the accuracy of the two-stage calibration method, the tangential distortion of the lens is also considered, so (·).

      1. (a)

        Eight reference points are needed for calibration

      2. (b)

        Ten reference points are required for calibration

      3. (c)

        There can be up to 12 parameters to be calibrated

      4. (d)

        There can be up to 15 parameters to be calibrated

    [Hint] The number of distortion parameters considered here is 4.

  4. 2.4

    Self-Calibration Methods

    1. 2.4.1

      Self-calibration method (·).

      1. (a)

        Does not need to resort to known calibration materials

      2. (b)

        Always needs to collect multiple images for calibration

      3. (c)

        Can only calibrate the internal parameters of the camera

      4. (d)

        Is not very robust when implemented with active vision technology

    [Hint] Analyze the basic principles of self-calibration.

    2. 2.4.2

      Under the ideal situation of uncertain image scale factor μ = 1, (·).

      1. (a)

        The camera model is linear.

      2. (b)

        If the number of sensor elements in the X direction is increased, the number of rows of pixels will also increase.

      3. (c)

        If the number of samples along the X direction made by the computer in a row is increased, the number of rows of pixels will also increase.

      4. (d)

        The image plane coordinates represented in physical units (such as millimeters) are also the computer image coordinates in pixels.

    [Hint] Note that the uncertain image scale factor is introduced in the transformation from the image plane coordinate system x′y′ to the computer image coordinate system MN.
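The transformation referred to in the hint can be sketched for a simplified sensor model: the uncertain image scale factor μ rescales the x direction only, reflecting a possible mismatch between the number of sensor elements and the number of computer samples along a row. The pixel spacings and origin below are illustrative assumptions:

```python
def plane_to_computer(xp, yp, mu, dx, dy, Om, On):
    """Map image-plane coordinates (x', y') in millimeters to computer
    image coordinates (M, N) in pixels.

    mu     : uncertain image scale factor (acts along x only)
    dx, dy : center-to-center pixel spacings in mm (assumed values)
    Om, On : pixel coordinates of the image-plane origin
    """
    M = mu * xp / dx + Om
    N = yp / dy + On
    return M, N

# With mu = 1 (the ideal case of question 2.4.2), a 0.05 mm offset with a
# 0.01 mm pixel pitch lands 5 pixels from the origin along x
M, N = plane_to_computer(0.05, -0.02, mu=1.0, dx=0.01, dy=0.01, Om=320, On=240)
print(M, N)
```

When μ = 1 and dx = dy, the physical and computer coordinates differ only by the unit change and the origin shift, which is why the ideal case is so convenient.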

    3. 2.4.3

      To calibrate the camera according to the self-calibration method introduced in Sect. 2.4, (·).

      1. (a)

        The camera needs to do three pure translation movements.

      2. (b)

        The camera needs to do four pure translation movements.

      3. (c)

        The camera needs to do five pure translation movements.

      4. (d)

        The camera needs to do six pure translation movements.

    [Hint] Analyze the number of equations that can be obtained when the method is calibrated and the number of unknowns to be calculated.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Zhang, YJ. (2023). Camera Calibration. In: 3-D Computer Vision. Springer, Singapore. https://doi.org/10.1007/978-981-19-7580-6_2


  • Print ISBN: 978-981-19-7579-0

  • Online ISBN: 978-981-19-7580-6
