Abstract
Vignetting refers to the fall-off of pixel intensity from the centre towards the edges of an image. This effect is undesirable in image processing and analysis. The vignetting correction methods most commonly used in the literature assume a radial characteristic of vignetting. In the case of camera lens systems with non-radial vignetting, such an approach leads to insufficient correction. Additionally, the majority of vignetting correction methods need a reference image acquired from a uniformly illuminated scene, which can be difficult to achieve. In this paper, we propose a new method of vignetting correction based on a local parabolic model of non-radial vignetting and on compensation of the non-uniformity of scene luminance. The new method was tested on a camera lens system with non-radial vignetting and a non-uniformly illuminated scene. Under these conditions, the proposed method gave the best correction results among the tested methods.
1 Introduction
Images are important sources of information about the surrounding environment. Imaging quality depends on many factors and is prone to radiometric problems. One of them is vignetting, which refers to the fall-off of pixel intensity from the centre towards the edges of the image. Depending on its cause, we can distinguish several types of vignetting [11].
The causes of vignetting are listed below in the order corresponding to the light path from a scene to an image sensor. Mechanical vignetting refers to the light fall-off due to blockage of the light path by elements of the camera lens system, typically by an additional filter or hood mounted on the lens. Optical vignetting refers to the light fall-off caused by the blockage of off-axis incident light inside the lens body. The amount of blocked light depends on the physical dimensions of the lens [2]. Natural vignetting refers to the light fall-off related to the geometry of the image-forming system. It is usually described by the \(cos^4\) law, which specifies the drop in light intensity depending on the angle between a ray of light entering the lens and the optical axis of the lens [13]. Pixel vignetting refers to the light fall-off related to the angular sensitivity of an image sensor pixel. It is caused by the physical dimensions of a single pixel, in particular the length of the 'tunnel' the light passes through before reaching the photodiode [5]. Light incident on the pixel at an angle is partially occluded by the sides of this well. It is very difficult to determine the impact of the different types of vignetting on an image without accurate knowledge of the construction of the camera lens system. In this article, the vignetting phenomenon is understood as the light fall-off caused by all of the above vignetting types with the exception of mechanical vignetting.
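As a minimal illustration of the \(cos^4\) law described above (this sketch is ours, not the authors'; the function name is hypothetical), the relative illumination of an off-axis ray can be computed from its angle to the optical axis:

```python
import math

def natural_falloff(theta_deg: float) -> float:
    """Relative illumination under the cos^4 law for a ray at an
    angle theta (in degrees) to the optical axis of the lens."""
    return math.cos(math.radians(theta_deg)) ** 4

# The fall-off grows rapidly with field angle: at 30 degrees the
# natural vignetting alone reduces illumination to cos^4(30) = 9/16.
for theta in (0, 10, 20, 30):
    print(f"{theta:2d} deg -> {natural_falloff(theta):.4f}")
```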
The effect of vignetting is undesirable in image processing and analysis, particularly in areas such as: image denoising [6], image segmentation [23], microscopic image analysis [17, 18], sky image analysis [20], visual surveillance [10], motion analysis in video sequences [1] and panoramic images [9, 16]. Therefore, from the viewpoint of image processing, it is important to reduce vignetting in images.
In this paper, we propose a new method for correcting vignetting in images, especially non-radial vignetting, based on a local parabolic model of the vignetting function. The methods presented so far in the literature are designed with a radial fall-off in mind, which significantly affects the accuracy of vignetting correction for real images, in which vignetting is not always radial. The proposed procedure also contains a stage for compensating the non-uniform luminance of a reference target. The new method was tested on images of different scenes acquired with two camera lens systems under different lighting and viewing conditions.
The presentation of the proposed method is preceded by a description of vignetting correction methods (Sect. 2). The proposed method is presented in Sect. 3. Sections 4 and 5 describe, respectively, the experiments and the vignetting correction results of the new method and of methods known from the literature. A brief summary in the last section concludes the article.
2 Vignetting correction methods
At the image acquisition stage, vignetting can be reduced to a certain extent by removing additional filters, setting a longer focal length or a smaller aperture. Of course, these actions do not correct all types of vignetting and are not always possible. Therefore, a computational method for vignetting correction is used during preprocessing of the acquired image. Most vignetting correction methods require estimating a mathematical model of vignetting. Modelling methods can be divided into two groups [22]: physically based models and approximations of the vignetting function.
Physically based models attempt to find relations between the light emitted from the object and the fall-off of pixel intensity on the camera sensor. These models are directly related to the types of vignetting being estimated, for example natural vignetting [13] or optical vignetting [2]. Therefore, these methods need detailed data about the physical parameters of a lens or a camera. Such data are often not available to the end user and are difficult to determine, which makes these methods hard to use in practice.
The methods that approximate the vignetting function can be divided into two subgroups: reference target methods and image-based methods. The first group uses a reference image, which shows only the vignetting, to determine the vignetting function. The acquisition of such an image requires appropriate measurement conditions and, in particular, uniform luminance of the scene. The vignetting function is obtained by approximation with parametric models, e.g. the polynomial model [3, 12, 19], the exponential polynomial model [19], the hyperbolic cosine model [22], the Gaussian function [15] and the radial polynomial model [4]. Due to the nature of vignetting, the last three methods need to assume the coordinates of the pixel representing the central point of the radial fall-off of pixel intensity. The coordinates of this point can be determined with additional methods, which makes the process of vignetting function estimation more complex [21]. The main problem of reference target methods is the acquisition of the reference image. The quality of this image depends mainly on the luminance uniformity of the scene, which is difficult to achieve.
The image-based methods use a set of partially overlapping (shifted) images of the same reference scene. These images are used to calculate the vignetting function. In general, this is done by minimizing an objective function [11, 14, 22] which depends on, e.g., the differences between the values of corresponding pixels in different images that represent the same scene point. There are also image-based methods that use a single image to estimate the vignetting function [7, 23]. The big advantage of these methods is the possibility of using, as the reference scene, a scene with uneven luminance or even natural images. The image-based methods usually assume a radial fall-off of pixel intensity from the image centre; however, such an assumption does not hold for all camera lens systems. The effectiveness of these methods depends on the precision of localization of corresponding pixels, and they usually rely on additional image processing methods, e.g. image segmentation [23]. In most cases, all compared images must be acquired under the same scene conditions, and any change in the scene (e.g. the position of objects) may influence the resulting vignetting function. The effectiveness of these methods also strongly depends on the uniformity of the scene luminance.
3 The procedure of vignetting correction
The proposed procedure combines the advantages of both groups of vignetting correction methods based on approximation of the vignetting function. It has the precision of the reference target methods, but it can also be used under any stable light conditions, as in the case of image-based methods. Determining the vignetting function requires an image of a reference target and a measurement of the luminance distribution of the same target. The vignetting function is estimated with the proposed approximation method, which can fit non-radial vignetting. A flow chart of the proposed vignetting correction procedure is shown in Fig. 1.
The first step of the procedure, after acquiring the image of the reference target \(I_\mathrm{ref}\), is the conversion of this colour image into a greyscale image \(I_c\). The next step is image luminance compensation. We use the image of measured luminance \(I_L\) to compensate the non-uniformity of luminance mapped onto the camera's image. In this way, the camera image I presents the intensities of the light fall-off only. The image I allows the vignetting \(I^*_v\) to be approximated. The final step of the procedure is applied not only to the reference image \(I_\mathrm{ref}\), but mainly to any scene image \(I_\mathrm{work}\) acquired with the same camera lens settings as \(I_\mathrm{ref}\). The important steps of the procedure are described in detail below.
The goal of greyscale conversion is an approximation of the scene luminance. The colour values of the reference image were converted in the following way:
where \(I_{c}\) is the greyscale image of \(I_\mathrm{ref}\) and R, G, B are the colour components of image \(I_\mathrm{ref}\) described in the RGB camera colour space. The conversion uses a standard transformation from sRGB space to the Y channel of CIE XYZ for illuminant D65.
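The conversion described above can be sketched as follows. The paper's own equation is not reproduced here; we assume the standard luminance weights of the sRGB-to-Y (D65) transformation applied to linear RGB values, which is consistent with the linear camera response verified in Sect. 4:

```python
import numpy as np

# Standard weights for converting linear sRGB to the Y channel of
# CIE XYZ (illuminant D65): Y = 0.2126 R + 0.7152 G + 0.0722 B.
SRGB_TO_Y = np.array([0.2126, 0.7152, 0.0722])

def to_greyscale(i_ref: np.ndarray) -> np.ndarray:
    """Approximate the scene luminance I_c from an RGB reference
    image of shape (H, W, 3) with linear (gamma-free) values."""
    return i_ref @ SRGB_TO_Y
```

Since the weights sum to one, a uniform grey input maps to the same grey level in \(I_c\).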
The luminance compensation of the greyscale image \(I_{c}\) uses information about the scene luminance \(I_L\) from a measuring device (a 2D colorimeter). This important step makes it possible to eliminate the non-uniformities of image \(I_{c}\) associated with the light and the object:
where \(I_L\) is the measured luminance of the reference image, \(\bar{I}_L\) is the mean value of luminance in the reference image \(I_L\), \(\dfrac{\bar{I}_L}{I_L(x,y)}\) is a scaling factor which preserves the average value of the image pixels before and after correction, and (x, y) are the image coordinates. In this way, the image I presents only the light fall-off.
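The compensation step follows directly from the scaling factor defined above; a minimal sketch (the function name is ours) reads:

```python
import numpy as np

def compensate_luminance(i_c: np.ndarray, i_l: np.ndarray) -> np.ndarray:
    """I(x,y) = I_c(x,y) * mean(I_L) / I_L(x,y): divide out the
    measured scene luminance, rescaled by its mean so that the
    average pixel value is preserved. The result presents only the
    light fall-off of the camera lens system."""
    return i_c * (i_l.mean() / i_l)
```

For example, if the greyscale image is exactly proportional to the measured luminance (no vignetting at all), the compensated image is perfectly flat at the mean level.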
One of the most important steps of vignetting correction is fitting the approximation function to the image I(x, y) according to the following criterion:
where \(\mathbf {a}\) represents the parameters of the approximation function.
The proposed model of vignetting uses a polynomial function based on the pixel intensities of image \(I_v\). The function \(f_{xy}\) describes the camera response, in the form of an intensity level, to incident light with radiance level L(x, y). The pixel values in the image acquired by the camera can be expanded as an infinite Taylor series around an assumed radiance level \(L_0\):
For each pixel, the model is represented by a number of functions depending on L(x, y). The above formula can be simplified by grouping terms with the same power of L(x, y):
where \(a_i\) are the parameters of the polynomial.
From the point of view of vignetting correction, an accurate mathematical model of the image I is not necessary and can be replaced by a simplified one. Therefore, the approximation using locally fitted polynomial models is limited to the second order along the respective rows \(I_x\) and columns \(I_y\) of the image. The assumed degree of regression is consistent with the intended shape of the vignetting function in the form of parabolas along individual lines. The proposed local parabolic model of the vignetting function \(I_v\) can be expressed, for each pixel coordinate, as:
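The row- and column-wise fitting described above can be sketched as follows. This is an illustrative reading, not the authors' implementation: a second-order polynomial is least-squares fitted along every row and every column, and the two fits are combined per pixel; the plain averaging used here is our assumption, since the exact combination rule is given only by the paper's Eq. (6):

```python
import numpy as np

def local_parabolic_vignetting(img: np.ndarray) -> np.ndarray:
    """Sketch of the local parabolic model: fit a parabola (degree-2
    polynomial) along each row and each column of the light fall-off
    image I, then average the two fitted surfaces per pixel."""
    h, w = img.shape
    xs, ys = np.arange(w), np.arange(h)
    rows = np.empty((h, w), dtype=float)
    cols = np.empty((h, w), dtype=float)
    for y in range(h):                        # parabola along row y
        rows[y] = np.polyval(np.polyfit(xs, img[y], 2), xs)
    for x in range(w):                        # parabola along column x
        cols[:, x] = np.polyval(np.polyfit(ys, img[:, x], 2), ys)
    return (rows + cols) / 2.0                # assumed combination rule
```

Because each line is fitted independently, the model can follow a fall-off whose centre and shape vary across the image, which is exactly what a single global radial model cannot do.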
The final step of the vignetting correction procedure is the image correction itself. It is performed as follows:
where \(I_\mathrm{out}\) is the image after vignetting correction, \(I_\mathrm{work}\) is the image acquired by the camera and \(I^*_v\) is the vignetting function.
The determined vignetting function is optimized for specific lens settings. If we change the settings of the lens, the process of approximating the vignetting function has to be repeated. Formula (7) can be used on colour images by applying the correction independently to each colour channel of the image.
4 The experiment
The aim of the experiment was to determine the quality of the proposed vignetting correction procedure. For this purpose, the best methods of fitting the vignetting function to image I were selected on the basis of the literature. Their quality of vignetting correction and their dependence on the luminance compensation process were determined experimentally. The last part of the experiments was an examination of the vignetting correction results on natural images. To achieve these objectives, the laboratory equipment described below was used.
The experimental set-up consists of five major elements: the camera, the lens, the light sources, the reference chart and the measurement equipment (Table 1). In the experiments, a Basler acA1300-gc digital camera equipped with a Sony ICX445 1/3" image sensor with progressive scan and a Bayer colour filter array with BG pattern was used. The optical system consists of a NET New Electronic Technology SV-1214H lens with a fixed focal length of 12 mm, a manual iris aperture (f-stop) ranging from 1.4 to 16 and a manual, lockable focus. The lens has a C-mount and supports cameras with a sensor size of up to 2/3". During the experiments, the lens aperture was set to f/1.4. The camera parameters were set within an application created in Microsoft Visual Studio using the Pylon and FreeImage libraries. Image processing and visualization of the results were carried out in MATLAB. Image demosaicing was realised with the bilinear method on RAW images. A Canon EOS 650D digital still camera with a Canon EF-S 18-55 mm f/3.5-5.6 IS II lens was used only in a preliminary comparative experiment. Before performing the vignetting correction procedure, it is necessary to check the linearity of the camera response function [9]. The linearity of both tested cameras was therefore checked, and there was no need to improve it.
The Basler camera worked under different light conditions: fluorescent light (CRI \(R_a=86\)) from a JustNormlicht Proof light (Setup 1), D55 light (CRI \(R_a=96\)) from a JustNormlicht LED Color Viewing Light (Setup 2) and fluorescent light (CRI \(R_a=83\)) from a Bowens BW-3200 (Setup 3). The measurement viewing conditions were \(d/0^{\circ }\), \(0^{\circ }/45^{\circ }\) and \(45^{\circ }/0^{\circ }\) (Fig. 2) [8]. The non-uniformity of the lights did not exceed 1 % (\(d/0^{\circ }\)), 3 % (\(0^{\circ }/45^{\circ }\)) and 10 % (\(45^{\circ }/0^{\circ }\)) of the average value. An image of a Sekonic Test Card II Photo grey card was used as the reference image.
A Konica Minolta CA-2000 colorimeter was used to measure the light non-uniformity. Measurements of angles and distances were made with a Leica DISTO D810 laser distance meter. The laboratory was provided with a darkroom, which cut off the influence of external light, and air conditioning, which maintained a fixed temperature.
In the experiments, different light conditions were used to check the usefulness of the luminance measurement in the luminance compensation process. The quality of the compensation process was tested for the different setups. The difference between images was determined with the following measures:
- MAE (mean absolute error):
$$\begin{aligned} \hbox {MAE}(I_{\alpha },I_{\beta }) = \dfrac{1}{MN}\sum _{x=1}^M\sum _{y=1}^N\Big |I_{\alpha }(x,y)-I_{\beta }(x,y)\Big |, \end{aligned}$$
(8)
- RMSE (root mean square error):
$$\begin{aligned} \hbox {RMSE}(I_{\alpha },I_{\beta }) = \sqrt{\dfrac{1}{MN}\sum _{x=1}^M\sum _{y=1}^N\big (I_{\alpha }(x,y)-I_{\beta }(x,y)\big )^2}, \end{aligned}$$
(9)
where \(M \times N\) is the image resolution, and \(I_{\alpha }\) and \(I_{\beta }\) are the compared images.
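Equations (8) and (9) translate directly into a few lines of code:

```python
import numpy as np

def mae(i_a: np.ndarray, i_b: np.ndarray) -> float:
    """Mean absolute error between two images, Eq. (8)."""
    return float(np.abs(i_a - i_b).mean())

def rmse(i_a: np.ndarray, i_b: np.ndarray) -> float:
    """Root mean square error between two images, Eq. (9)."""
    return float(np.sqrt(((i_a - i_b) ** 2).mean()))
```

RMSE penalizes large local deviations more strongly than MAE, so the two measures together characterize both the average and the worst-case residual non-uniformity.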
Most of the images presented in the article were acquired using Setup 1. The grey-level values of the images were scaled to an 8-bit representation (0–255). Each image used in the calculations was an average of 10 images. Most of the presented figures of vignetting functions were quantized to 10 intensity levels.
5 Experimental results and analysis
The characteristic of vignetting can be different for each camera lens system. These differences are not limited to the scale of the light fall-off, but also extend to the shape of its characteristic. In the case of the Basler camera and the NET lens, we can see that the light fall-off is not radial (Fig. 3).
The proposed local parabolic model of the vignetting function was compared with other models existing in the literature: the second-order polynomial model [19], the second-order exponential polynomial model [19], the hyperbolic cosine model [22], the Gaussian function [18] and the third-order radial polynomial model [4]. The methods described in the literature use global optimization, in contrast to the local parabolic model. In the case of the Basler-NET system, the best fitting accuracy was achieved by the local parabolic model (Table 2). The obtained vignetting functions differ in the strength of the light fall-off in the corners of the image, ranging from 18 to 22 %. Depending on the method, the coordinates of the image centre were slightly shifted. In the case of the Canon system, the best fitting results were achieved by the radial model. However, when the shape of the vignetting changes from radial (Canon system) to non-radial (Basler-NET system), the accuracy of the model changes as well. In this case, the radial model is not able to adapt to the non-radial shape of the vignetting (Fig. 4). For the tested camera lens systems, the local parabolic model adapted to changes in the shape of the vignetting better than the other methods. It achieved the best quality of data fitting for the Basler-NET system and good quality for the Canon system. This was a result of fitting the model to the data and, to a lesser extent, of knowledge about the specific characteristic of the vignetting. This shows the need to check the shape of the vignetting characteristic before choosing a vignetting function model (Fig. SM1, Supplementary Material). The performed tests also showed that the proposed method is faster than the other considered methods (see SM).
In order to verify the quality of vignetting correction, the following two measures were calculated (Fig. 5):
where \(I_\mathrm{out}\) and \(I'_\mathrm{out}\) are the images after the vignetting correction procedure with the luminance compensation process active and inactive, respectively, and \(\bar{I}_\mathrm{out}\) and \(\bar{I'}_\mathrm{out}\) are the mean values of images \(I_\mathrm{out}\) and \(I'_\mathrm{out}\).
The reference image from Setup 1 was used in the calculations because it presents a uniformly illuminated flat white surface, and the luminance distortions of this image are caused mostly by vignetting and camera noise. Therefore, each value of the measures \(|\Delta \hbox {MAE}|\) and \(|\Delta \hbox {RMSE}|\) shows the difference between a flat image and the image corrected with the vignetting function approximated for each method and set-up. In the case of Setup 1 with uniform light, the luminance compensation process gives slightly worse vignetting correction results. This is due to the multiplication of the noise of the two superimposed images. For Setup 2, luminance compensation has a small positive influence on the vignetting correction results, owing to the small non-uniformities of luminance (\({<}3\,\%\) of the mean value). However, in the case of Setup 3, where the non-uniformities exceed 10 % of the mean value, the luminance compensation process becomes necessary to estimate the vignetting function. The influence of the luminance compensation process on vignetting correction depends on the vignetting model used. In the case of the radial polynomial function and the Gaussian function, luminance compensation has a small influence on the vignetting correction results. This is caused by the difficulties these functions have in fitting the non-radial nature of the Basler-NET system vignetting; they have a large fitting error regardless of the presence of the compensation process in the vignetting correction procedure. For the other tested models, the application of the luminance compensation process in Setups 2 and 3 significantly improved the correction results.
The quality of vignetting correction was tested on natural images (Fig. 6 and Figs. SM1–SM8). In this experiment, the vignetting function was approximated by the local parabolic model. The differences between the corrected and uncorrected images were significant; in the case of image:
- Trees: \(\hbox {MAE}(I_\mathrm{work},I_\mathrm{out}) = 12.79\) and \(\hbox {RMSE}(I_\mathrm{work},I_\mathrm{out}) = 16.75\),
- Building: \(\hbox {MAE}(I_\mathrm{work},I_\mathrm{out}) = 8.11\) and \(\hbox {RMSE}(I_\mathrm{work},I_\mathrm{out}) = 11.40\).
In both cases the differences were large, and the lack of vignetting correction can cause difficulties in further use of the images.
6 Conclusions
Each type of lens and camera can have additional distortions connected with its specific design. If we change a lens setting, we also change the characteristic of the light fall-off in the image. The radial polynomial model, most commonly described in the literature, is not always the best choice for vignetting correction and in some cases should be replaced by the local parabolic model. Our research shows that vignetting correction should be preceded by checking the characteristic of the vignetting. Additionally, the performed experiments demonstrate that compensating the luminance non-uniformity of the reference target makes it possible to obtain good vignetting correction results. The shape of the vignetting function depends on the lens and the camera, but the quality of the correction depends largely on the approximation method and the quality of the reference image. In the case of the Basler-NET system, which is a good example of a lens with a non-radial vignetting function, the proposed method provides an effective vignetting correction procedure and the best correction results among the tested methods.
References
Altunbasak, Y., Mersereau, R.M., Patti, A.J.: A fast parametric motion estimation algorithm with illumination and lens distortion correction. IEEE Trans. Image Process. 12(4), 395–408 (2003)
Asada, N., Amano, A., Baba, M.: Photometric calibration of zoom lens systems. In: IEEE Proceedings of the 13th International Conference on Pattern Recognition, vol. 1, pp. 186–190. Vienna, Austria (1996)
Brady, M., Legge, G.E.: Camera calibration for natural image studies and vision research. J. Opt. Soc. Am. A 26, 30–42 (2009)
Burt, P., Adelson, E.: A multiresolution spline with application to image mosaics. ACM Trans. Graph. 4(2), 217–236 (1983)
Catrysse, P.B., Liu, X., El Gamal, A.: QE reduction due to pixel vignetting in CMOS image sensors. In: Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications, vol. 420, pp. 420–430. San Jose, CA, USA (2000)
Chen, Y.P., Mudunuri, B.K.: An anti-vignetting technique for superwide field of view mosaicked images. J. Imaging Technol. 12(5), 293–295 (1986)
Cho, H., Lee, H., Lee, S.: Radial bright channel prior for single image vignetting correction. Lecture Notes in Computer Science, Computer Vision ECCV 2014 vol. 8690, pp. 189–202 (2014)
CIE 15:2004 Technical Report: Colorimetry
Doutre, C., Nasiopoulos, P.: Fast vignetting correction and color matching for panoramic image stitching. In: IEEE Proceedings of the 16th International Conference on Image Processing (ICIP), pp. 709–712. Cairo, Egypt (2009)
Galego, R., Bernardin, R., Gaspar, J.: Vignetting correction for pan-tilt surveillance cameras. In: International Conference on Computer Vision Theory and Application (VISAPP’11), pp. 638–644. Portugal (2011)
Goldman, D.B.: Vignette and exposure calibration and compensation. IEEE Trans. Pattern Anal. Mach. Intell. 32(12), 2276–2288 (2010)
Goldman, D.B., Chen, J.H.: Vignette and exposure calibration and compensation. In: Proceedings of the 10th IEEE International Conference on Computer Vision (ICCV’05), vol. 1, pp. 899–906. Beijing, China (2005)
Kang, S., Weiss, R.: Can we calibrate a camera using an image of a flat textureless Lambertian surface? Lect. Notes Comput. Sci. 1843, 640–653 (2000)
Kim, S.J., Pollefeys, M.: Robust radiometric calibration and vignetting correction. IEEE Trans. Pattern Anal. Mach. Intell. 30(4), 562–576 (2008)
Leong, F.J., Brady, M., McGee, J.O.D.: Correction of uneven illumination (vignetting) in digital microscopy images. J. Clin. Pathol. 56(8), 619–621 (2003)
Litvinov, A., Schechner, Y.: Addressing radiometric nonidealities: a unified framework. In: IEEE Proceedings of the Computer Vision and Pattern Recognition, vol. 2, pp. 52–59. San Diego, USA (2005)
Robertson, D., Hui, C., Archambault, L., Mohan, R., Beddar, S.: Optical artefact characterization and correction in volumetric scintillation dosimetry. Phys. Med. Biol. 59(1), 23–42 (2014)
Russ, J.C.: The Image Processing Handbook, 3rd edn. CRC Press LLC, Boca Raton, FL (1999)
Sawchuk, A.A.: Real-time correction of intensity nonlinearities in imaging systems. IEEE Trans. Comput. C–26(1), 34–39 (1977)
Stumpfel, J., Jones, A., Wenger, A., Debevec, P.: Direct HDR capture of the sun and sky. In: Proceedings of the International Conference on Computer Graphics (AFRIGRAPH’04), pp. 145–149. Cape Town, South Africa (2004)
Willson, R.G., Shafer, S.A.: What is the center of the image? J. Opt. Soc. Am. A 11(11), 2946–2955 (1994)
Yu, W.: Practical anti-vignetting methods for digital cameras. IEEE Trans. Consum. Electron. 50(4), 975–983 (2004)
Zheng, Y., Lin, S., Kambhamettu, C., Yu, J., Bing Kang, S.: Single-image vignetting correction. IEEE Trans. Pattern Anal. Mach. Intell. 31(12), 1–8 (2009)
Acknowledgments
This work was financed from funds for statutory activities of the Silesian University of Technology. Presented research was performed in the Laboratory of Imaging and Radiometric Measurements, at the Institute of Automatic Control of the Silesian University of Technology, Gliwice, Poland.
Kordecki, A., Palus, H. & Bal, A. Practical vignetting correction method for digital camera with measurement of surface luminance distribution. SIViP 10, 1417–1424 (2016). https://doi.org/10.1007/s11760-016-0941-2