Abstract
This chapter describes three basic components of a computer vision system. The geometry and photometry of the cameras used need to be understood, at least to some degree. To model the projective mapping of the 3D world into images, and to carry out the steps involved in camera calibration, we have to deal with several coordinate systems. Calibration maps recorded images into normalized (e.g. geometrically rectified) representations, thus simplifying subsequent vision procedures.
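The projective mapping mentioned above can be illustrated with the standard pinhole camera model, in which a 3D point given in camera coordinates is projected onto the image plane by perspective division. The following sketch is not taken from the chapter; the focal length `f` and principal point `(cx, cy)` are assumed example values of the kind a calibration procedure would estimate.

```python
# Hypothetical sketch of the pinhole projection model (assumed parameter
# values, not the chapter's own code): a 3D point (X, Y, Z) in camera
# coordinates, with Z > 0, maps to pixel coordinates (x, y).

def project(point_3d, f=1000.0, cx=320.0, cy=240.0):
    """Project a 3D point in camera coordinates onto the image plane.

    f        -- focal length in pixel units (assumed value)
    (cx, cy) -- principal point, i.e. where the optical axis meets
                the image plane (assumed values)
    """
    X, Y, Z = point_3d
    x = f * X / Z + cx   # perspective division plus principal-point offset
    y = f * Y / Z + cy
    return x, y

# A point 2 m in front of the camera and 0.1 m to the right:
print(project((0.1, 0.0, 2.0)))  # -> (370.0, 240.0)
```

Calibration, in this picture, amounts to estimating `f`, `(cx, cy)`, and related parameters so that recorded images can be mapped into a normalized (e.g. rectified) representation.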
Notes
1. The subscript “s” comes from “sensor”; the camera is a particular sensor for measuring data in the 3D world. A laser range-finder and a radar are other examples of sensors.
2. For readers who prefer to define a wide angle accurately: let it be any angle greater than this particular α = 104.25°, with 360° as an upper bound.
3. Catadioptric: pertaining to, or involving, both the reflection and the refraction of light; dioptric: relating to the refraction of light.
Copyright information
© 2014 Springer-Verlag London
Cite this chapter
Klette, R. (2014). Cameras, Coordinates, and Calibration. In: Concise Computer Vision. Undergraduate Topics in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-6320-6_6
DOI: https://doi.org/10.1007/978-1-4471-6320-6_6
Publisher Name: Springer, London
Print ISBN: 978-1-4471-6319-0
Online ISBN: 978-1-4471-6320-6
eBook Packages: Computer Science (R0)