1 Introduction

Images are an important source of information about the surrounding environment. Imaging quality depends on many factors and is prone to radiometric problems. One of them is vignetting, which refers to the fall-off of pixel intensity from the centre towards the edges of the image. Depending on its cause, several types of vignetting can be distinguished [11].

The causes of vignetting are listed below in the order corresponding to the light path from the scene to the image sensor. Mechanical vignetting refers to the light fall-off due to blockage of the light path by elements of the camera lens system, typically by an additional filter or hood mounted on the lens. Optical vignetting refers to the light fall-off caused by the blockage of off-axis incident light inside the lens body; the amount of blocked light depends on the physical dimensions of the lens [2]. Natural vignetting refers to the light fall-off related to the geometry of the image-forming system. It is usually described by the \(\cos^4\) law, which specifies the drop in light intensity as a function of the angle between a ray of light entering the lens and the optical axis of the lens [13]. Pixel vignetting refers to the light fall-off related to the angular sensitivity of an image sensor pixel. It is caused by the physical dimensions of a single pixel, in particular the length of the 'tunnel' the light traverses before reaching the photodiode [5]; light incident on the pixel at an angle is partially occluded by the sides of the well. It is very difficult to determine the impact of the different types of vignetting on an image without accurate knowledge of the construction of the camera lens system. In this article, vignetting is understood as the light fall-off caused by each of the above types with the exception of mechanical vignetting.

The effect of vignetting on the image is undesirable in image processing and analysis, particularly in areas such as image denoising [6], image segmentation [23], microscopic image analysis [17, 18], sky image analysis [20], visual surveillance [10], motion analysis in video sequences [1] and panoramic images [9, 16]. Therefore, from the viewpoint of image processing, it is important to reduce vignetting in images.

In this paper, we propose a new method of vignetting correction, suited especially to non-radial vignetting, based on a local parabolic model of the vignetting function. The methods presented so far in the literature are designed with a radial fall-off in mind, which significantly limits the accuracy of vignetting correction for real images, in which vignetting is not always radial. The proposed procedure also contains a stage that compensates for the non-uniform luminance of the reference target. The new method was tested on images of different scenes acquired with two camera lens systems under different lighting and viewing conditions.

The presentation of the proposed method is preceded by an overview of vignetting correction methods (Sect. 2). The proposed method is presented in Sect. 3. Sections 4 and 5 describe, respectively, the experiments and the vignetting correction results of the new method and of methods known from the literature. A brief summary in the last section concludes the article.

2 Vignetting correction methods

At the image acquisition stage, vignetting can be reduced to a certain extent by removing additional filters, setting a longer focal length or a smaller aperture. Of course, these actions do not correct all types of vignetting and are not always possible. Therefore, a computational method of vignetting correction is applied during preprocessing of the acquired image. Most of these methods require estimating a mathematical model of vignetting. Modelling methods can be divided into two groups [22]: physically based models and approximations of the vignetting function.

Physically based models try to find relations between the light emitted from the object and the pixel intensity fall-off on the camera sensor. These models are directly related to the particular type of vignetting being estimated, for example natural vignetting [13] or optical vignetting [2]. Therefore, these methods need detailed data about the physical parameters of the lens or camera. Such data are often not available to the end user and are difficult to determine, which makes these methods hard to use in practice.

Methods that approximate the vignetting function can be divided into two subgroups: reference target methods and image-based methods. The first group uses a reference image, which shows only the vignetting, to determine the vignetting function. The acquisition of such an image requires appropriate measurement conditions, in particular uniform luminance of the scene. The vignetting function is obtained by approximation with parametric models, e.g. a polynomial model [3, 12, 19], an exponential polynomial model [19], a hyperbolic cosine model [22], a Gaussian function [15] or a radial polynomial model [4]. Owing to the nature of vignetting, the last three models require assuming the coordinates of the pixel representing the central point of the radial fall-off of pixel intensity. These coordinates can be determined with additional methods, which makes the process of vignetting function estimation more complex [21]. The main problem of reference target methods is the acquisition of the reference image, whose quality depends mainly on the luminance uniformity of the scene; such uniformity is difficult to achieve.

The image-based methods use a set of partially overlapping (shifted) images of the same reference scene. These images are used to calculate the vignetting function, in general by minimizing an objective function [11, 14, 22] that depends on, e.g., the differences between values of corresponding pixels in different images that represent the same scene point. There are also image-based methods that use a single image to estimate the vignetting function [7, 23]. The big advantage of these methods is that the reference scene may have uneven luminance and may even be a natural image. Image-based methods usually assume a radial fall-off of pixel intensity from the image centre; however, this assumption does not hold for all camera lens systems. The effectiveness of these methods depends on the precision of localization of corresponding pixels, and they usually employ additional image processing methods, e.g. image segmentation [23]. In most cases, all compared images must be acquired under the same scene conditions, and any change in the scene (e.g. the position of objects) may influence the resulting vignetting function. The effectiveness of these methods strongly depends on the uniformity of scene luminance.

3 The procedure of vignetting correction

The proposed procedure combines the advantages of both groups of vignetting correction methods based on approximation of the vignetting function. It has the precision of the reference target methods, but it can also be used under any stable lighting conditions, as in the case of image-based methods. Determining the vignetting function requires an image of a reference target and a measurement of the luminance distribution of the same target. The vignetting function is estimated with the proposed approximation method, which can fit non-radial vignetting. A flow chart of the proposed vignetting correction procedure is shown in Fig. 1.

Fig. 1
figure 1

Flowchart of the vignetting correction procedure

The first step of the procedure, after the acquisition of the reference target image \(I_\mathrm{ref}\), is the conversion of this colour image into the greyscale image \(I_c\). The next step is image luminance compensation. We use the image of measured luminance \(I_L\) to compensate for the non-uniformity of luminance mapped onto the camera image. In this way, the camera image I presents the intensity of the light fall-off only. The image I allows approximating the vignetting function \(I^*_v\). The final step of the procedure is applied not only to the reference image \(I_\mathrm{ref}\), but mainly to any scene image \(I_\mathrm{work}\) acquired with the same camera lens settings as \(I_\mathrm{ref}\). Below, the important steps of the procedure are described in detail.

The goal of the greyscale conversion is an approximation of the scene luminance. The colour values of the reference image were converted in the following way:

$$\begin{aligned} I_c = 0.2127R + 0.7151G + 0.0722B, \end{aligned}$$
(1)

where \(I_{c}\) is the greyscale image of \(I_\mathrm{ref}\), and R, G, B are the colour components of image \(I_\mathrm{ref}\) in the RGB camera colour space. The conversion uses the standard transformation from sRGB space to the Y channel of CIE XYZ for illuminant D65.
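The conversion in Eq. (1) can be sketched in a few lines of NumPy; the function name below is ours, not from the paper:

```python
import numpy as np

def to_greyscale(rgb):
    """Convert an RGB image (H x W x 3, linear values) to greyscale
    using the luminance weights of Eq. (1)."""
    # The weights are the sRGB-to-Y (CIE XYZ, illuminant D65) coefficients;
    # they sum to 1, so a uniform white image keeps its intensity.
    weights = np.array([0.2127, 0.7151, 0.0722])
    return rgb @ weights  # contracts the colour axis, yielding an H x W image
```

Because the weights sum to one, the greyscale image stays in the same intensity range as the input channels.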

The luminance compensation of the greyscale image \(I_{c}\) uses information about the scene luminance \(I_L\) obtained from a measuring device (a 2D colorimeter). This important step eliminates the non-uniformities of the image \(I_{c}\) associated with the light and the object:

$$\begin{aligned} I(x,y)=\dfrac{\bar{I}_L}{I_L(x,y)}I_{c}(x,y), \end{aligned}$$
(2)

where \(I_L\) is the measured luminance of the reference scene, \(\bar{I}_L\) is the mean luminance of \(I_L\), \(\dfrac{\bar{I}_L}{I_L(x,y)}\) is a scaling factor, which keeps the average pixel value the same before and after compensation, and (x, y) are the image coordinates. In this way, the image I presents only the light fall-off.
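A minimal NumPy sketch of the compensation step, Eq. (2) (the function name is ours), assuming \(I_c\) and \(I_L\) are registered arrays of the same size:

```python
import numpy as np

def compensate_luminance(I_c, I_L):
    """Eq. (2): remove the measured luminance pattern I_L from the greyscale
    image I_c. The scaling factor mean(I_L)/I_L(x,y) preserves the average
    pixel value, so the result presents the light fall-off only."""
    return (I_L.mean() / I_L) * I_c
```

If the image were driven purely by the scene luminance (no vignetting), the compensated result would be perfectly flat, which is the sanity check the step is built on.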

One of the most important steps of vignetting correction is the fitting of the approximation function to the image I(x, y) according to the following criterion:

$$\begin{aligned} I_{v}^*(x,y) = I_v(\mathbf {a}^*,x,y), \quad \mathbf {a}^* = \mathop {\arg \min }_{\mathbf {a}}\sum _{x}\sum _{y} \Big (I(x,y) -I_v(\mathbf {a},x,y)\Big )^{2}, \end{aligned}$$
(3)

where \(\mathbf {a}\) represents the parameters of the approximation function.

Table 1 Image acquisition setups

The proposed vignetting model uses a polynomial function of the image \(I_v\) pixel intensity. The function \(f_{xy}\) describes the camera response, i.e. the intensity level produced by incident light with radiance level L(x, y). The pixel values in the image acquired by the camera can be expanded as a Taylor series around an assumed radiance level \(L_0\):

$$\begin{aligned} I ={}& f_{xy}\big (L_0(x,y)\big )+\big (L(x,y)-L_0(x,y)\big )\dfrac{\partial f_{xy}}{\partial L}\nonumber \\ &+ \dfrac{\big (L(x,y)-L_0(x,y)\big )^2}{2}\dfrac{\partial ^2 f_{xy}}{\partial L^2} + \ldots \nonumber \\ &+ \dfrac{\big (L(x,y)-L_0(x,y)\big )^i}{i!}\dfrac{\partial ^i f_{xy}}{\partial L^i}. \end{aligned}$$
(4)

For each pixel, the model is represented by a set of functions of L(x, y). The above formula can be simplified by grouping terms with the same power of L(x, y):

$$\begin{aligned} I ={}& a_0(x,y) + a_1(x,y)L(x,y) + a_2(x,y)L^2(x,y) \nonumber \\ &+ \ldots + a_i(x,y)L^i(x,y), \end{aligned}$$
(5)

where \(a_i\) are the parameters of the polynomial.

An accurate mathematical model of the image I is, from the point of view of vignetting correction, not necessary and can be replaced by a simplified model. Therefore, the approximation using locally fitted polynomial models is limited to the second order along the respective rows \(I_x\) and columns \(I_y\) of the image. The assumed degree of regression is consistent with the intended shape of the vignetting function, namely parabolas along individual lines. The proposed local parabolic model of the vignetting function \(I_v\) can be expressed for each pixel as:

$$\begin{aligned} I_v(x,y) = \dfrac{I_x + I_y}{2} ={}& \dfrac{1}{2}\big (a_{x2}x^2 + a_{x1}x + a_{x0}\big ) \nonumber \\ &+ \dfrac{1}{2}\big (a_{y2}y^2 + a_{y1}y + a_{y0}\big ). \end{aligned}$$
(6)
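The local parabolic model of Eq. (6) can be sketched as follows: a second-order polynomial is fitted by least squares along every row and every column, and the two local fits are averaged at each pixel. This is our own NumPy sketch, not the authors' MATLAB implementation:

```python
import numpy as np

def fit_local_parabolic(I):
    """Sketch of the local parabolic model, Eq. (6): fit a second-order
    polynomial along every row (I_x) and every column (I_y) of the image,
    then average the two local fits at each pixel."""
    H, W = I.shape
    x = np.arange(W, dtype=float)
    y = np.arange(H, dtype=float)
    I_x = np.empty((H, W))
    I_y = np.empty((H, W))
    for r in range(H):                       # parabola along row r
        a2, a1, a0 = np.polyfit(x, I[r, :], 2)
        I_x[r, :] = a2 * x**2 + a1 * x + a0
    for c in range(W):                       # parabola along column c
        a2, a1, a0 = np.polyfit(y, I[:, c], 2)
        I_y[:, c] = a2 * y**2 + a1 * y + a0
    return 0.5 * (I_x + I_y)
```

Because each row and column is fitted independently, the model is not forced to be radially symmetric, which is exactly what allows it to follow non-radial fall-off.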

The final step of the vignetting correction procedure is the image correction itself. It is performed as follows:

$$\begin{aligned} I_\mathrm{out}(x,y) = \dfrac{I_\mathrm{work}(x,y)}{I^*_v(x,y)}, \end{aligned}$$
(7)

where \(I_\mathrm{out}\) is the image after vignetting correction, \(I_\mathrm{work}\) is the image acquired by the camera and \(I^*_v\) is the vignetting function.
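The correction of Eq. (7) is a pixel-wise division; a minimal sketch (function names ours; the normalization of \(I^*_v\) so that its maximum equals 1, and the per-channel colour variant, are our assumptions consistent with the text):

```python
import numpy as np

def correct_vignetting(I_work, I_v_star):
    """Eq. (7): divide the working image pixel-wise by the approximated
    vignetting function I_v_star (assumed normalized so max = 1)."""
    return I_work / I_v_star

def correct_colour(I_work_rgb, I_v_star):
    """Apply Eq. (7) independently to each colour channel of an
    H x W x 3 image, as described for colour images."""
    return I_work_rgb / I_v_star[..., np.newaxis]
```

A quick consistency check: multiplying a flat image by a vignetting pattern and then correcting it should recover the flat image exactly.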

Fig. 2
figure 2

The luminance non-uniformity a \(d/0^{\circ }\), b \(0^{\circ }/45^{\circ }\) and c \(45^{\circ }/0^{\circ }\)

The determined vignetting function is specific to particular lens settings. If the lens settings change, the approximation of the vignetting function has to be repeated. Formula (7) can be applied to colour images by applying the correction independently to each colour channel of the image.

4 The experiment

The aim of the experiment was to determine the quality of the proposed vignetting correction procedure. For this purpose, the best methods of fitting a vignetting function to the image I were selected on the basis of the literature. Their quality of vignetting correction and their dependence on the luminance compensation process were determined experimentally. The last part of the experiments was an examination of the vignetting correction results for natural images. To achieve these objectives, the laboratory equipment described below was used.

The experimental set-up consists of the following major elements: the camera with the lens, the light sources, the reference chart and the measurement equipment (Table 1). In the experiments, a Basler aca1300-gc digital camera equipped with a Sony ICX445 1/3" progressive-scan image sensor with a Bayer colour filter array (BG pattern) was used. The optical system consists of a NET New Electronic Technology SV-1214H lens with a fixed focal length of 12 mm, a manual iris aperture (f-stop) ranging from 1.4 to 16 and manual, lockable focus. The lens has a C-mount and supports cameras with sensor sizes up to 2/3". During the experiments, the lens aperture was set to f/1.4. The camera parameters were set within an application created in Microsoft Visual Studio using the Pylon and FreeImage libraries. Image processing and visualization of the results were carried out in MATLAB. Image demosaicing was realised with the bilinear method on RAW images. A Canon EOS 650D digital still camera with a Canon EF-S 18-55 mm f/3.5-5.6 IS II lens was used only in a preliminary comparative experiment. Before performing the vignetting correction procedure, it is necessary to check the linearity of the camera response function [9]. Therefore, the linearity of both tested cameras was checked, and there was no need to improve it.

The Basler camera worked under different light conditions: fluorescent light (CRI \(R_a=86\)) from a JustNormlicht Proof light (Setup 1), D55 light (CRI \(R_a=96\)) from a JustNormlicht LED Color Viewing Light (Setup 2) and fluorescent light (CRI \(R_a=83\)) from a Bowens BW-3200 (Setup 3). The measurement viewing conditions were \(d/0^{\circ }\), \(0^{\circ }/45^{\circ }\) and \(45^{\circ }/0^{\circ }\) (Fig. 2) [8]. The non-uniformity of the lights did not exceed 1 % (\(d/0^{\circ }\)), 3 % (\(0^{\circ }/45^{\circ }\)) and 10 % (\(45^{\circ }/0^{\circ }\)) of the average value. An image of a Sekonic Test Card II Photo grey card was used as the reference image.

A Konica Minolta CA-2000 colorimeter was used to measure the light non-uniformity. Measurements of angles and distances were made with a Leica DISTO D810 laser distance meter. The laboratory was equipped with a darkroom, which cut off the influence of external light, and air conditioning, which maintained a fixed temperature.

In the experiments, different light conditions were used to check the usefulness of the luminance measurement in the luminance compensation process. The quality of the compensation process was tested for the different setups. The difference between images was determined with the following measures:

  • MAE (mean absolute error):

    $$\begin{aligned} \hbox {MAE}(I_{\alpha },I_{\beta }) = \dfrac{1}{MN}\sum _{x=1}^M\sum _{y=1}^N\Big |I_{\alpha }(x,y)-I_{\beta }(x,y)\Big |, \end{aligned}$$
    (8)
  • RMSE (root mean square error):

    $$\begin{aligned} \hbox {RMSE}(I_{\alpha },I_{\beta }) = \sqrt{\dfrac{1}{MN}\sum _{x=1}^M\sum _{y=1}^N\big (I_{\alpha }(x,y)-I_{\beta }(x,y)\big )^2}, \end{aligned}$$
    (9)

where \(M \times N\) is the image resolution, and \(I_{\alpha }\) and \(I_{\beta }\) are the compared images.
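Both measures, Eqs. (8) and (9), are one-liners in NumPy (function names ours):

```python
import numpy as np

def mae(I_a, I_b):
    """Eq. (8): mean absolute error between two images of equal size."""
    return np.abs(I_a - I_b).mean()

def rmse(I_a, I_b):
    """Eq. (9): root mean square error between two images of equal size."""
    return np.sqrt(((I_a - I_b) ** 2).mean())
```

Note that RMSE is never smaller than MAE, and it penalizes large localized deviations (such as residual corner fall-off) more strongly, which is why both are reported.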

Most of the images presented in the article were made using Setup 1. The grey-level values of the images were scaled to an 8-bit representation (0–255). Each image used in the calculations was an average of 10 images. Most of the presented figures of vignetting functions were quantized to 10 intensity levels.

5 Experimental results and analysis

The characteristics of vignetting can differ between camera lens systems. These differences are not limited to the scale of the light fall-off, but also concern the shape of its characteristics. In the case of the Basler camera with the NET lens, we can see that the light fall-off is not radial (Fig. 3).

Fig. 3
figure 3

The normalized (left side) and quantized (right side) light fall-off on: a Basler-NET system and b Canon system

Table 2 The quality of approximation of vignetting functions evaluated by \(\hbox {MAE}(I,I^*_v)\) and \(\hbox {RMSE}(I,I^*_v)\)

The proposed local parabolic model of the vignetting function was compared with other models existing in the literature: the second-order polynomial model [19], the second-order exponential polynomial model [19], the hyperbolic cosine model [22], the Gaussian function [18] and the third-order radial polynomial model [4]. The methods described in the literature use global optimization, in contrast to the local parabolic model. In the case of the Basler-NET system, the local parabolic model fits the data most accurately (Table 2). The obtained vignetting functions differ in the strength of the light fall-off in the corners of the image, from 18 to 22 %. Depending on the method, the coordinates of the image centre were slightly shifted. In the case of the Canon system, the radial model fits the data best. However, when the shape of the vignetting changes from radial (Canon system) to non-radial (Basler-NET system), the accuracy of the models changes as well; the radial model is not able to adapt to the non-radial shape of the vignetting (Fig. 4). For the tested camera lens systems, the local parabolic model adapted to changes in the shape of the vignetting better than the other methods: it achieved the best quality of data fitting for the Basler-NET system and good quality for the Canon system. This results from fitting the model to the data rather than relying on knowledge of a specific vignetting characteristic. This shows the need to check the shape of the vignetting characteristic before choosing a vignetting function model (Fig. SM1, Supplementary Material). The performed tests also showed that the proposed method is faster than the other considered methods (see SM).

Fig. 4
figure 4

Vignetting approximated with the use of various functions: a–f images with quantized values of approximated vignetting functions, g isolines and centres for tested vignetting functions for 6 % light fall-off. a Parabolic, b polynomial, c exponential polynomial, d hyperbolic cosine, e Gaussian function, f radial polynomial

In order to verify the quality of vignetting correction, the following two measures were calculated (Fig. 5):

$$\begin{aligned} |\Delta \hbox {MAE}| = \big |\hbox {MAE}(I'_\mathrm{out},\bar{I'}_\mathrm{out})-\hbox {MAE}(I_\mathrm{out},\bar{I}_\mathrm{out})\big |, \end{aligned}$$
(10)
$$\begin{aligned} |\Delta \hbox {RMSE}| = \big |\hbox {RMSE}(I'_\mathrm{out},\bar{I'}_\mathrm{out})-\hbox {RMSE}(I_\mathrm{out},\bar{I}_\mathrm{out})\big |, \end{aligned}$$
(11)

where \(I_\mathrm{out}\) and \(I'_\mathrm{out}\) are the images after the vignetting correction procedure with the luminance compensation process active and inactive, respectively, and \(\bar{I}_\mathrm{out}\) and \(\bar{I'}_\mathrm{out}\) are the mean values of the images \(I_\mathrm{out}\) and \(I'_\mathrm{out}\).

The reference image from Setup 1 was used in the calculations, because it presents a uniformly illuminated flat white surface, and the luminance distortions of this image are caused mostly by vignetting and camera noise. Therefore, each value of the measures \(|\Delta \hbox {MAE}|\) and \(|\Delta \hbox {RMSE}|\) shows the difference between a flat image and the image corrected with the vignetting function approximated for each method and set-up. In the case of Setup 1 with uniform light, the luminance compensation process gives slightly worse vignetting correction results. This is due to the multiplication of the noise of the two superimposed images. For Setup 2, luminance compensation has a small positive influence on the vignetting correction results, which is caused by the small non-uniformity of luminance (\({<}3\,\%\) of the mean value). However, in the case of Setup 3, in which the non-uniformity exceeds 10 % of the mean value, the luminance compensation process becomes necessary to estimate the vignetting function. The influence of the luminance compensation process on vignetting correction depends on the vignetting model used. In the case of the radial polynomial function and the Gaussian function, luminance compensation has a small influence on the vignetting correction results. This is caused by the difficulty of fitting these functions to the non-radial nature of the Basler-NET system vignetting; these functions have a large fitting error regardless of the presence of the compensation process in the vignetting correction procedure. In the other tested models, the application of the luminance compensation process in Setups 2 and 3 significantly improved the correction results.

Fig. 5
figure 5

The impact of luminance compensation on the values of quality measures of vignetting correction

Fig. 6
figure 6

Natural images and their magnified parts a Trees and b Building before (top half) and after (bottom half) vignetting correction

The quality of vignetting correction was tested on natural images (Fig. 6 and Figs. SM1–SM8). In this experiment, the vignetting function was approximated by the local parabolic model. The differences between the corrected and uncorrected images were significant; in the case of image:

  • Trees: \(\hbox {MAE}(I_\mathrm{work},I_\mathrm{out}) = 12.79\) and \(\hbox {RMSE}(I_\mathrm{work},I_\mathrm{out}) = 16.75\),

  • Building: \(\hbox {MAE}(I_\mathrm{work},I_\mathrm{out}) = 8.11\) and \(\hbox {RMSE}(I_\mathrm{work},I_\mathrm{out}) = 11.40\).

The differences between both images were large, and the lack of vignetting correction could cause difficulties in further use of the images.

6 Conclusions

Each type of lens and camera can have additional distortions connected with its specific design. If a lens setting changes, the characteristic of the light fall-off in the image changes as well. The radial polynomial model, the one most commonly described in the literature, is not always the best choice for vignetting correction, and in some cases it should be replaced by the local parabolic model. Our research shows that vignetting correction should be preceded by checking the characteristic of the vignetting. Additionally, the performed experiments demonstrate that compensating for the luminance non-uniformity of the reference target allows obtaining good vignetting correction results. The shape of the vignetting function depends on the lens and the camera, but the quality of the correction depends largely on the method of approximation and the quality of the reference image. In the case of the Basler-NET system, which is a good example of a lens with a non-radial vignetting function, the proposed method provides an effective vignetting correction procedure and the best correction results among the tested methods.