1 Introduction

Photography has been popular for a long time. In both analogue and digital photography the most important factor is the optimum exposure setting. Exposure can be understood as an optimal combination of the aperture value f, the shutter speed t and the ISO sensitivity [16]. Setting the correct exposure is not an easy task [14], and it becomes especially difficult when the photographed scene is highly diverse in terms of incident light. In professional photography it is much easier to deal with this problem because different kinds of photographic filters can be used. The situation is much worse for smartphones and small compact cameras, where such solutions cannot be applied. Another problem is that smartphones and compact cameras usually do not offer manual exposure setting, only an automatic mode, which is motivated by the possibility of fast photo shooting. Unfortunately, automatic algorithms do not perform well in many situations, resulting in incorrectly exposed, usually overexposed, photographs [4, 14]. What is more, in [15] it is stated that it is normal during photographing that some pixels in the image will be under- or overexposed. Therefore, there is a need for algorithms that reduce overexposed areas in images without underexposing the photographs. In this paper we propose algorithms for automatic exposure with particular emphasis on the shutter speed. These algorithms use the weighted average and exponential smoothing methods. In the weighted average method we propose to assign weights to the indications of the lightmeters and then calculate the exposure. The methods using exponential smoothing concentrate mainly on reducing outlier values.

1.1 Contribution

Although the problem of generating exposure is not new, we show that it is possible to propose new algorithms that decrease the level of overexposed areas in images. We conduct experiments in which we test the proposed algorithms by analyzing the photos’ histograms, and we provide a statistical analysis of the obtained results. The results of the statistical analysis clearly confirm that it is possible to reduce overexposed areas while not strongly underexposing the image.

2 Background

2.1 Brief literature review

The details of generating exposure are generally not disclosed by manufacturers, but a brief description of exposure generation in Nikon cameras in Programmed Auto (P) mode is given in [26]. In this approach the key point is the lower limit of the shutter speed, which must be set so that the picture is not blurred when the camera is hand-held. It is proposed that this threshold should be no longer than \(\frac{1}{40}\) second. The P mode keeps the aperture wide open until the shutter speed exceeds \(\frac{1}{40}\). If the shutter speed becomes very short, the aperture is stopped down slightly. If the wide-open aperture is not enough to reach the shutter speed threshold, the ISO parameter is increased.

Generating exposure can also be realized by dividing the frame into a number of blocks and calculating the luminance (BV) in each of them [18]. This approach is used by many brands (e.g., Nikon [26] or Canon [2]). It is proposed to find the maximum (\(\text {BV}_{\max \limits }\)) and minimum (\(\text {BV}_{\min \limits }\)) block luminance and to calculate the mean luminance in the lower blocks (BVlower), the upper blocks (BVupper) and all blocks (BVmean). Then, the scene can be described with the following equation:

$$ BV = aBV_{max} + bBV_{min} + cBV_{mean} + dBV_{upper} + eBV_{lower} $$
(1)

After setting coefficients a,b,c,d,e the shutter speed and the aperture can be determined. The coefficients a,b,c,d,e should be set by camera’s firmware.
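The combination in (1) can be sketched as a simple weighted sum. The coefficient values used in the check below are illustrative placeholders, since the real values are tuned in camera firmware and are not published:

```java
// Sketch of the scene-brightness combination from Eq. (1).
public class BvScene {
    // a..e are illustrative placeholder coefficients; actual values are
    // set by the camera's firmware and are not publicly documented.
    public static double sceneBv(double bvMax, double bvMin, double bvMean,
                                 double bvUpper, double bvLower,
                                 double a, double b, double c, double d, double e) {
        return a * bvMax + b * bvMin + c * bvMean + d * bvUpper + e * bvLower;
    }
}
```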

In [22] and [17] an auto-exposure algorithm using the false-position method is proposed and implemented. The algorithm is dedicated to industrial use: its main goal is to correctly expose leather samples in order to detect their defects. The parameter considered is the shutter speed. The proposed algorithm captures leather samples with a proper exposure according to the lighting conditions.

Work [27] presents an algorithm for automatic exposure with lighting condition detection. The proposed approach detects high-contrast lighting conditions and improves the dynamic range of images. The algorithm estimates the lighting conditions by calculating the difference between the mean and the median of the brightness levels of all pixels in the captured image. If this difference is not significant, the lighting conditions are considered normal; if it exceeds an assumed threshold, the lighting conditions are considered high-contrast.

A similar approach, using differences between lighting conditions to detect high contrast ([27]), is presented in [19]. The authors propose to divide the frame (called the main object region) into small areas and to calculate the brightness level as a combination of the average brightness of each area and its surrounding area with a corresponding weight. The authors conducted experiments which showed that the method successfully detects high-contrast scenes with a small exposure error.

2.2 Preliminary definitions

Resolution

is the level of detail in a digital image, measured by the number of pixels (e.g., 1280×800) or dots per inch (e.g., 300 dpi) [8].

Aperture f

is the unit specifying the diameter of the “hole” of the lens. The smaller the value of f, the larger the diameter and the more light reaches the camera’s sensor. The “distance” between adjacent values of f (e.g., 4 and 5.6) is called an f-stop and is equivalent to one Exposure Value (EV) [4]. In the literature the Exposure Value is also commonly called the brightness level [19].

Exposure

is the amount of light incident on a photosensitive material which is required for correct exposure.

Definition 1

(Exposure equation) [23, 24] Exposure equation is defined as:

$$ \frac{N^{2}}{t} = \frac{L\cdot S}{K} $$
(2)

where:

  • N is the aperture value (f-number);

  • t is the shutter speed (or exposure time);

  • S stands for ISO;

  • K is the lightmeter calibration constant, according to [23], K = 12.5;

  • L is the luminance, given in \([\frac {cd}{m^{2}}]\).
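Solving (2) for the shutter speed gives t = K·N²/(L·S). A minimal sketch of this rearrangement follows; the sunlight luminance L ≈ 4096 cd/m² used in the check is an assumption corresponding to EV100 = 15 with K = 12.5:

```java
// Shutter speed derived from the exposure equation (2): N^2/t = L*S/K.
public class ExposureEq {
    static final double K = 12.5; // lightmeter calibration constant [23]

    // Rearranging Eq. (2): t = K * N^2 / (L * S)
    public static double shutterSpeed(double n, double luminanceCdPerM2, double iso) {
        return K * n * n / (luminanceCdPerM2 * iso);
    }
}
```

For example, at ISO 100 and f/16 with L = 4096 cd/m² (full sunlight), the formula yields t = 1/128 s, close to the “Sunny 16” rule of thumb of 1/125 s.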

Definition 2

(Exposure value – EV) A number obtained by determining the optimum combination of the aperture value f, the shutter speed t (in seconds) and the sensitivity (the ISO parameter). Formally, the exposure value can be defined as follows [20, 21, 23]:

$$ \text{EV}_{\text{ISO}} = \log_{2}{\frac{L\cdot S}{K}} = \log_{2}{\frac{N^{2}}{t}} $$
(3)

EV 0 means that the aperture is set to f/1.0 and the shutter speed is 1 second [23]. Combinations of aperture and shutter speed can be represented in EV notation. For example, aperture 5.6 with shutter speed \(\frac {1}{30}\) gives EV100 = 10; aperture 8 with shutter speed \(\frac {1}{500}\) gives EV100 = 15 [5, 28]. The higher the EV value, the shorter the shutter speed [5]. Tables of exposure values for various lighting conditions can also be found in the literature. For example, a typical scene in full sunlight at ISO 100 should be exposed at EV100 = 15, and a typical overcast scene at EV100 = 12 [12].
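The quoted EV values can be verified directly from (3); a minimal sketch:

```java
// Exposure value from Eq. (3): EV = log2(N^2 / t).
public class ExposureValue {
    public static double ev(double aperture, double shutterSeconds) {
        return Math.log(aperture * aperture / shutterSeconds) / Math.log(2);
    }
}
```

Here ev(5.6, 1.0/30) ≈ 9.88 and ev(8, 1.0/500) ≈ 14.97, which round to the EV100 values 10 and 15 quoted above.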

Changing the exposure by 1 EV via the aperture is equivalent to changing it by 1 EV via the shutter speed or the ISO. For example, if we increase f from 4 to 5.6, the exposure decreases by 1 EV; to compensate, we should lengthen the exposure time or raise the ISO by 1 EV.

Automatic exposure

is a mode in which all exposure parameters are selected automatically by the camera [4].

Lightmeter

is a sensor for measuring light intensity. In the context of photography, a lightmeter (sometimes called a photometer) is essential for selecting the exposure.

A lightmeter often operates as follows: for a given aperture and ISO, it determines the appropriate shutter speed, depending on the luminance (brightness) of the analyzed object [6].

Histogram

is a map of the color distribution shown as a diagram in which the horizontal axis represents the grey scale from darkest to lightest and the vertical axis represents the number of pixels in each tonal range [4, 8]. Tones (horizontal axis) are represented in decimal notation, from 0 (black) to 255 (white).

According to [10], if some pixels in the histogram are near the minimum/maximum level, this can be called a “permanently degraded region” of the image. Thus, if a picture has many very bright pixels (also called “whites”), the image is overexposed; similarly, if there is a large number of dark pixels (“blacks” [1]), the picture is underexposed.

It is often assumed in the literature that the diagram of an ideal histogram should resemble a normal distribution [4, 13]. This means that it should not contain bars at its leftmost (underexposed) or rightmost (overexposed) side [7]. An example of such a histogram is presented in Fig. 1. In [10] and [3] it is pointed out that the histogram area can be divided into three regions: bright, dark and other grey tones. Analogously, exposure can be classified into three categories: underexposure (an excess of dark pixels), overexposure (an excess of bright pixels) and proper exposure [22]. Of course, it should be emphasized that a histogram is not always an ideal indicator of image quality.

Fig. 1

Example of a histogram (Source: [13])

A number of works present the essence of the histogram, such as [4, 8, 13, 16] and [7]. A comprehensive description of histograms is given in [3].

3 Algorithms for automatic exposure

3.1 Preliminaries and problem formulation

We assume that the lightmeter works as described in Section 2.2, i.e., for a given aperture value f and ISO parameter it returns the shutter speed t.

Definition 3

(Preliminary notions) Let \(\mathcal {T} = \{t_{1}, \ldots , t_{n}\}\) be the set of indications of n lightmeters. Each lightmeter measures the brightness in the frame; ti is the shutter speed indicated by the i-th lightmeter, and \(\mathcal {W} = \{w_{1}, \ldots , w_{n}\}\) is a set of weights corresponding to the indications in \(\mathcal {T}\), where wi ∈ (0,1). Suppose also that we use the standard f-stop (1 EV) scale with a step of 1/3 EV.

We will also use the formula for weighted average:

$$ av = \frac{{\sum}_{i=1}^{n} w_{i}t_{i}}{{\sum}_{i=1}^{n} w_{i}} $$
(4)

where ti denotes lightmeter indication and wi is a corresponding weight.

Definition 4

(Exposure shutter speed generation) Function \(f: \mathcal {T} \rightarrow \overline {t}\) is an exposure generating function that for given set of lightmeters’ indications \(\mathcal {T}\) returns an exposure shutter speed value \(\overline {t}\).

Thus, we can formulate the problem as follows (Problem 1):

Problem 1

For a given set of n lightmeters’ indications \(\mathcal {T} = \{t_{1}, \ldots , t_{n}\}\) calculate \(\overline {t}\) which minimizes the under- and overexposed areas in the picture.

3.2 Algorithm with weighted average

Suppose we are given n lightmeter indications (shutter speeds) t1,⋯ ,tn, and that \(\overline {t}\) is the final exposure to be determined. We propose to use the weighted average method and assign weights to the lightmeter indications. The input indications are divided into three groups: bright, intermediate and dark, and the weights differ between these groups. The method for calculating the exposure is described as Algorithm WAv.

Approaches using thresholds or weights are often proposed in the literature [19, 27], and we use this concept in our approach as well. The parameters δ are threshold values for shutter speeds and divide the indications into the bright, intermediate and dark groups. The parameters λ define the weights. Both the δ and λ parameters should be set by an “expert”. The value γ can be considered a threshold for hand-held photographing. For example, if \(\gamma = \frac {1}{15}\) and \(\overline {t} > \gamma \), it is necessary to increase the ISO parameter. Obviously, the higher the γ parameter (and the lower the ISO), the better the image quality: it is well known that increasing the ISO degrades image quality because digital noise increases [25]. In this algorithm it is also assumed that the aperture is fully wide open.

Algorithm WAv (pseudocode figure)

By applying the floor function to av we mean rounding down to the nearest 1/3 EV step.

Example 1

Suppose that we have measured the light in 11 points of the frame and the lightmeter showed the following shutter speeds: \(\frac {1}{1600}, \frac {1}{1000}, \frac {1}{1250}, \frac {1}{200}, \frac {1}{100}, \frac {1}{125}, \frac {1}{50}\), \(\frac {1}{160}, \frac {1}{80}, \frac {1}{160}, \frac {1}{160}\). Suppose also that weights were assigned as: 0.9 for shutter speed \(\leq \frac {1}{800}\), 0.5 for shutter speed \(\frac {1}{800} < t \leq \frac {1}{200}\) and 0.1 for shutter speed \(> \frac {1}{200}\).

The sum of weights is:

$$ \sum\limits_{i=1}^{11} w_{i} = 3\cdot 0.9 + 0.5 + 7\cdot 0.1 = 3.9 $$

The weighted average is:

$$ av = \frac{ \frac{0.9}{1600}+\frac{0.9}{1000}+\ldots+\frac{0.1}{160} }{3.9} \approx \frac{1}{336} $$

After rounding down to the nearest 1/3 EV:

$$ \lfloor av \rfloor = \frac{1}{400} $$

Therefore, the image should be exposed with a shutter speed of \(\frac {1}{400}\) second.
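A short sketch of Algorithm WAv on the indications of Example 1 follows. The 1/3-EV floor is implemented here in exact powers of two (a convention we assume), so the floored value ≈ 1/406 s corresponds to the nominal marked speed 1/400:

```java
// Sketch of Algorithm WAv: weighted average of lightmeter indications,
// Eq. (4), followed by rounding down to the nearest 1/3 EV.
public class WavExposure {
    // Weights from Section 4.2: lambda1 = 0.9 for t <= 1/800,
    // lambda2 = 0.5 for 1/800 < t <= 1/200, lambda3 = 0.1 otherwise.
    public static double weight(double t) {
        if (t <= 1.0 / 800) return 0.9;
        if (t <= 1.0 / 200) return 0.5;
        return 0.1;
    }

    // Weighted average av from Eq. (4).
    public static double weightedAverage(double[] times) {
        double num = 0, den = 0;
        for (double t : times) {
            double w = weight(t);
            num += w * t;
            den += w;
        }
        return num / den;
    }

    // Round down to the nearest 1/3 EV step (toward a shorter time),
    // expressed as an exact power of two.
    public static double floorThirdEv(double t) {
        double thirds = Math.floor(3 * Math.log(t) / Math.log(2));
        return Math.pow(2, thirds / 3.0);
    }
}
```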

3.3 Algorithms using exponential smoothing

We consider methods of time series analysis, in particular the exponential smoothing model proposed by Brown and Holt. The goal of this model is to reduce outliers and the variance of the input time series [11]. Exponential smoothing can be defined recursively as:

$$ \left\{\begin{array}{l} y_{0}^{*} = y_{0}\\ y_{t}^{*} = \alpha y_{t-1} + (1-\alpha) y_{t-1}^{*} \end{array}\right. $$
(5)

In (5), yt denotes the values of the input time series and \(y_{t}^{*}\) the smoothed values. The parameter α ∈ (0,1) is called the smoothing parameter; usually α = 0.5 [11].

Lightmeter indications can be interpreted as a time series, and applying the exponential smoothing method to them reduces outliers. Thus, one can propose a method that takes the input series of lightmeter indications, smooths it, and finally calculates the arithmetic mean of the smoothed series. The result of the arithmetic mean is proposed as the shutter speed. This approach is presented as Algorithm ESAM.

This algorithm takes the n lightmeter indications and puts them into a time series. Next, they are smoothed with Brown’s method (5) described above. Then the arithmetic mean of the smoothed values is calculated and proposed as the shutter speed. As in the weighted average method (Section 3.2), the γ parameter models the threshold for hand-held photographing.
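A minimal sketch of Algorithm ESAM, using the smoothing recursion (5); the sample series in the usage note is synthetic and only illustrates how a single outlier is damped:

```java
// Sketch of Algorithm ESAM: exponential smoothing (5) followed by the
// arithmetic mean of the smoothed series.
public class EsamExposure {
    // Brown's smoothing as in Eq. (5): s[0] = y[0],
    // s[t] = alpha*y[t-1] + (1-alpha)*s[t-1].
    public static double[] smooth(double[] y, double alpha) {
        double[] s = new double[y.length];
        s[0] = y[0];
        for (int i = 1; i < y.length; i++) {
            s[i] = alpha * y[i - 1] + (1 - alpha) * s[i - 1];
        }
        return s;
    }

    // ESAM: arithmetic mean of the smoothed indications.
    public static double shutterSpeed(double[] indications, double alpha) {
        double sum = 0;
        for (double s : smooth(indications, alpha)) sum += s;
        return sum / indications.length;
    }
}
```

On the synthetic series {2, 2, 10, 2} with α = 0.5, the smoothed series is {2, 2, 2, 6} and its mean is 3, whereas the raw mean is 4: the outlier's influence is reduced.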

Algorithm ESAM (pseudocode figure)

Another proposed method is quite similar: it smooths the input time series and chooses the minimum value from it as the exposure shutter speed (Algorithm ESMV).

Algorithm ESMV (pseudocode figure)

As in the previous algorithms, this algorithm retrieves the n lightmeter indications, smooths them with Brown’s method and returns the minimum value of the smoothed series as the shutter speed. We propose to increase the obtained value by 1/3 EV.
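Algorithm ESMV can be sketched analogously. We assume here that “increase by 1/3 EV” means lengthening the selected time by a factor of 2^(1/3):

```java
// Sketch of Algorithm ESMV: exponential smoothing, then the minimum of
// the smoothed series, increased by 1/3 EV (assumed to mean multiplying
// the time by 2^(1/3), i.e., letting in 1/3 stop more light).
public class EsmvExposure {
    // Same smoothing recursion as in Eq. (5).
    public static double[] smooth(double[] y, double alpha) {
        double[] s = new double[y.length];
        s[0] = y[0];
        for (int i = 1; i < y.length; i++) {
            s[i] = alpha * y[i - 1] + (1 - alpha) * s[i - 1];
        }
        return s;
    }

    public static double shutterSpeed(double[] indications, double alpha) {
        double min = Double.POSITIVE_INFINITY;
        for (double s : smooth(indications, alpha)) min = Math.min(min, s);
        return min * Math.pow(2, 1.0 / 3); // open up by 1/3 EV
    }
}
```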

4 Experimental verification

4.1 Devices and testing environment

In the experiments the following cameras and smartphones were used: Nikon D3100 (DSLR), Canon SX160 IS, Kodak P850 and two smartphones: Acer Liquid Jade S (Acer S56) and Samsung S III mini. Information about the sensors is given in Table 1 (the sensor size of the Acer S56 is unknown).

Table 1 Devices used in experiments

The proposed algorithms were implemented in Java for the Android platform. The application collects the indications of the 11 lightmeters and the weights for the weighted average algorithm. It then calculates the shutter speed according to the algorithms described in the previous section, and the obtained exposure value is used in the camera’s manual mode for taking the photo.

4.2 Experiment setup

The proposed algorithms were tested in an experiment in which the indications of the built-in lightmeter of the Nikon D3100 were read at 11 points of the frame (Fig. 2). At each point the camera’s lightmeter “proposed” a shutter speed. The aperture was set to different values (among others: 4, 4.5, 5.6, 8, 11) and the ISO parameter was always 100. A sample measurement of shutter speeds is presented in Fig. 2.

Fig. 2

Frame and sample shutter speeds proposed by camera’s lightmeter

Next, the proposed algorithms were run on these measurements. For the WAv algorithm we used the following thresholds: \(\delta _{1} = \frac {1}{800}\) and \(\delta _{2} = \frac {1}{200}\). According to [4], it is easier to overexpose than to underexpose an image. We use this observation in the WAv algorithm and assign weights to the lightmeter indications so as to give priority to shorter shutter speeds, since very bright areas of the picture are the most likely to be overexposed. Thus we propose the following weights for the shutter speeds indicated by the lightmeter:

  1. For shutter speeds \(t \leq \frac {1}{800}\), the weight λ1 = 0.9;

  2. For shutter speeds \(\frac {1}{800} < t \leq \frac {1}{200}\), λ2 = 0.5;

  3. For shutter speeds \(t > \frac {1}{200}\), the weight λ3 = 0.1.

The λ values were chosen experimentally and affect the brightness of the image. For example, decreasing the λ1 value results in brighter images, while increasing the λ3 value results in darker images. The proposed values gave the best image quality in our tests. The parameter α for exponential smoothing equals 0.5, a value commonly assumed to work well [11]. Sample experimental scenarios are described in Appendix A. Examples of how the algorithms’ parameters affect the produced images can be seen in Appendix B (Figs. 14 and 15).

4.3 Statistical comparison

A total of 496 pictures were taken: 62 pictures with the exposure proposed by the WAv algorithm, 62 with the ESAM algorithm, 62 with the ESMV algorithm, and 62 in the AE mode of each device listed in Table 1. All pictures were taken in JPEG format; in all cameras the AE metering mode was set to multi-segment. We analyzed the number of black and “nearly-black” pixels, represented in the histogram as levels 0-5. White and “nearly-white” pixels were counted as levels 254-255.

Histograms of the pictures were analyzed with the well-known GIMP software, which can determine the statistical distribution of colors in an image and thus indicate how often a particular level occurs [9]. We used this information to determine the number of white (overexposed) and black (underexposed) pixels [10].
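The histogram-based counting described above can be sketched as follows, using the cut-offs from Section 4.3 (levels 0-5 as “blacks”, 254-255 as “whites”):

```java
// Fractions of under-/overexposed pixels from a 256-bin histogram,
// using the paper's cut-offs: "blacks" are levels 0-5, "whites" 254-255.
public class HistogramCheck {
    public static double fraction(long[] hist, int from, int to) {
        long inRange = 0, total = 0;
        for (int i = 0; i < hist.length; i++) {
            total += hist[i];
            if (i >= from && i <= to) inRange += hist[i];
        }
        return (double) inRange / total;
    }

    public static double underexposedFraction(long[] hist) {
        return fraction(hist, 0, 5);
    }

    public static double overexposedFraction(long[] hist) {
        return fraction(hist, 254, 255);
    }
}
```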

4.4 Analysis of overexposed images

A statistical analysis was performed on the average overexposed areas produced by the WAv, ESAM and ESMV algorithms and by the AE modes of the Nikon D3100, Canon SX160 IS, Kodak P850, Acer S56 and Samsung S III mini. The objective was to check whether the average overexposed areas of the proposed algorithms differ significantly from each other and from the cameras’ automatic exposure modes. The significance level was α = 0.05. The analysis scripts were implemented in MATLAB.

First of all, normality tests were performed. The hypotheses are defined as follows:

  • \({\mathcal{H}}_{0}\): The sample comes from a normal distribution;

  • \({\mathcal{H}}_{1}\): The sample does not come from a normal distribution.

The Shapiro-Wilk test of normality showed that none of the analyzed data came from a normal distribution: in all cases the p-value was < 0.0001, so the hypothesis of normality was rejected. Thus, non-parametric tests were used for further analysis, namely the Kruskal-Wallis ANOVA. The hypotheses concern the equality of the average ranks of the samples (often simplified to medians):

  • \({\mathcal{H}}_{0}\): 𝜃1 = 𝜃2 = ⋯ = 𝜃k;

  • \({\mathcal{H}}_{1}\): Not all 𝜃j are equal (j = 1,2,⋯ ,k).

The obtained p-value (reported as 0) means that a statistically significant difference exists among the tested data, i.e., the amount of overexposed area differs significantly between the tested methods.

The next step was a POST-HOC analysis to determine which specific samples differ. The mean ranks are presented in Table 2.

Table 2 Results of multcompare POST-HOC analysis of overexposed areas

The statistically smallest overexposed areas were obtained with the ESAM and ESMV algorithms (mean ranks 152.44 and 152.71, respectively). More overexposed area was generated by the WAv algorithm (mean rank 205.25). Nearly all AE modes, except the Canon camera, generated the highest levels of overexposed areas. The worst results were obtained with the Nikon AE (mean rank 330.31) and the smartphones: Acer (mean rank 309.2) and Samsung (mean rank 344.768). In general, the proposed algorithms produced statistically less overexposed area in the images than the AE modes. This is presented graphically in Fig. 3.

Fig. 3

POST-HOC analysis of differences of overexposed areas. X-axis represents values of mean ranks for particular algorithms (Y -axis)

Figure 4 presents the total number of pictures that had any overexposed areas. The results are also given in Table 3.

Fig. 4

Number of images with overexposed areas

Table 3 Total number of pictures that had any overexposed areas (62 images per each algorithm/camera)

The greatest number of images with overexposed areas was produced by the Nikon, Acer and Samsung AEs (more than 70%). The ESAM and ESMV algorithms gave the fewest images with overexposed areas (16.13% each), followed by WAv (33.87%). A relatively small number of overexposed images was also obtained with the Canon camera.

Table 4 represents average percentage of overexposed areas in the pictures.

Table 4 Average percentage of overexposed areas in the pictures

The smallest average overexposed area was obtained with the ESMV algorithm (0.9%), followed by ESAM (1.68%) and the WAv algorithm (3.03%). The highest levels were obtained with the Nikon AE (9.81%) and the Samsung AE (7.03%). The main result is that all proposed algorithms generated less overexposed area than the cameras’ automatic programs; the smartphones’ built-in AE modes in particular produced the most overexposed images.

4.5 Analysis of underexposed images

As in the case of overexposed images, a statistical analysis was performed on the average underexposed areas produced by the WAv, ESAM and ESMV algorithms and by the AE modes of the Nikon D3100, Canon SX160 IS, Kodak P850, Acer S56 and Samsung S III mini. The objective was to check whether the average underexposed areas of the proposed algorithms and the cameras’ automatic programs differ statistically from each other.

None of the above data came from a normal distribution (in all cases the SW test returned p < 0.0001), so the hypothesis of normality was rejected. Thus, as in the case of overexposed images, the non-parametric Kruskal-Wallis ANOVA was used.

The result p < 0.0001 of the ANOVA analysis shows that there is a statistically significant difference in the underexposed areas.

Table 5 presents the results of the POST-HOC analysis. The smallest underexposed areas were obtained with the Nikon AE, with a mean rank of 178.17. Slightly more underexposed area was generated by the AE of the Canon camera (mean rank 181.21). The WAv and ESAM algorithms had mean ranks of 221.98 and 241.71, respectively, and the ESMV algorithm a mean rank of 276. The highest level of underexposed areas was generated by the Acer smartphone, with a mean rank of 360.69; thus the Acer AE gave a statistically greater amount of underexposed area than all other methods. The WAv algorithm gave statistically the same amount of underexposed area as the Nikon, Canon and Samsung AEs and less than the Kodak and Acer AEs. ESAM produced statistically the same amount of underexposed area as nearly all AEs except the Acer. Only ESMV generated more underexposed area than the Nikon and Canon AEs. This situation is illustrated in Fig. 5.

Table 5 Results of multcompare POST-HOC analysis of underexposed areas
Fig. 5

POST-HOC analysis of differences of underexposed areas. X-axis represents values of mean ranks for particular algorithms (Y -axis)

Figure 6 and Table 6 show the total number of pictures that had any underexposed areas.

Fig. 6

Number of images with underexposed areas

Table 6 Total number of pictures that had any underexposed areas (62 images per algorithm/camera)

The Acer and Kodak AE modes generated the largest number of images with underexposed areas: 83.87% and 45.16%, respectively. Among the proposed algorithms, the greatest number of pictures with underexposed areas was generated by the ESMV algorithm (40.32%); fewer were obtained with WAv (20.97%) and ESAM (27.42%). The Nikon and Canon AEs produced the fewest: 4.84% and 6.45%, respectively.

Table 7 presents average percentage of underexposed areas in the pictures.

Table 7 Average percentage of underexposed areas in the pictures

All methods obtained a similar level of underexposed areas; the Kodak AE mode gave the largest.

The main result is that all of the proposed algorithms gave a similar level of underexposed areas; only the Acer AE resulted in significantly more. Note, however, that because the average underexposed area was small in nearly all cases, the visual quality of the images is not seriously affected.

4.6 Summary

To conclude, the proposed algorithms generated statistically less overexposed area in the images than most AE modes of the tested devices, especially compared with the smartphones. The best results were obtained with the ESAM method, for which both under- and overexposed areas are quite small. The WAv and ESMV algorithms resulted in fewer overexposed areas than most AEs.

All presented algorithms generated almost the same level of underexposed areas in the images. WAv and ESAM generated a statistically similar level of underexposed areas compared with most AEs. Only the ESMV method resulted in a statistically higher level of underexposed areas compared with the Nikon and Canon AEs, but this has no relevant impact on image quality: the average level of underexposed pixels obtained with this algorithm was about 1%.

5 Conclusion and future works

In this paper, algorithms for calculating photographic exposure were presented. The main objective was to decrease the number of overexposed photographs. The presented algorithms use the weighted average and exponential smoothing methods. In the weighted average method, lightmeter indications are assigned weights, with special priority given to the indications that represent the brightest parts of the frame. The exponential smoothing methods are in turn used to reduce outliers in the lightmeter indications. Experimental verification showed that the proposed methods work well, which was also confirmed by the statistical analysis: the proposed algorithms generated less overexposed area in the images than the automatic exposure modes of the tested cameras and smartphones, while the level of underexposed areas was similar. For verification, histogram analysis and the percentage distribution of “blacks” and “whites” were used.

As future work, the exponential smoothing algorithms should be improved, in particular the ESMV algorithm, where the number of underexposed areas should be decreased.

Moreover, we plan to extend the experiments so that the presented results are more representative. More combinations of exposure parameters also need to be tested, for example experiments at different values of the ISO parameter.