
1 Introduction

Maize is one of the world's major crops, and its production has a great influence on food security. Maize yield formation comes mainly from the accumulation of photosynthetic assimilates in the canopy, and this accumulation is a process of exchanging substances and energy with the external environment through a series of physiological and biochemical reactions in the canopy. The strength of the physiological function of the canopy is limited mainly by its internal structure [1]. Canopy structure is a visual indicator of community appearance, directly reflecting crop growth, cultivation conditions, and water and fertilizer management. Among the many canopy structure parameters, the leaf area index (LAI) expresses how much leaf area is present per unit ground area, while the mean leaf angle (MLA) expresses the spatial orientation of canopy leaves; together they play a decisive role in describing canopy structure.

Methods for acquiring canopy structure parameters fall into two categories, direct and indirect. Direct methods measure the quantities involved in the structural parameters themselves, for example, measuring the leaf areas of plants in a sample zone and summing them to obtain LAI [2, 3]. Direct methods require extensive destructive sampling, consume a large amount of manpower, and depend strongly on the operator's judgment. Indirect methods based on growth models, radiation models, and canopy porosity have therefore been developed extensively [4,5,6,7]. Among them, hemispherical photography, which records canopy porosity while also capturing crop growth and appearance information such as morphology, density, and growth stage, has been studied and applied by a great number of scholars.

After geometric correction of hemisphere images, Huanhua et al. [8] established a parameter layer, overlaid it on the classified vegetation canopy layer, and performed mathematical operations to extract vegetation canopy structure parameters such as canopy width, canopy area, and canopy circumference. After comparing the hemisphere-image LAI calculation method with other indirect methods, Tong et al. [9] pointed out that manually setting the image segmentation threshold is a major source of error, and that accuracy is also affected by sampling time and location, since incorrect sampling time and location introduce errors into the LAI calculation. Demarez et al. [10] applied hemispherical photography to estimate the leaf area index of three crops (wheat, maize, and sunflower) and introduced the clumping index into the Poisson model of the hemisphere image, bringing the calculated LAI closer to the actual value. Gonsamo et al. [11] developed CIMES, a multi-platform, command-line software package for canopy structure parameters from hemisphere images, which implements a variety of LAI calculation methods such as those of Miller, Lang, and Campbell. Baret et al. [12] used images acquired at 57.5° to reproduce the information of the corresponding zenith angle in hemisphere images, proved the feasibility of this approach on virtual 3D crop models, and used the calculated results as input parameters of a leaf area calculation model. Confalonieri et al. [13] developed a handheld image acquisition device for canopy structure parameters, using the angle sensors of the handheld device to acquire canopy images at 57.5°. Verification showed that its measurements are highly consistent with those of LAI-2000, AccuPAR, and other canopy analysis equipment, while its low price and portability extend its range of application.

In related studies, whether the hemisphere images are acquired with commercial equipment (e.g., HemiView, LAI-2000, CI-110) or with devices developed by the researchers themselves, the requirements on light conditions are very strict: generally within 1 h after sunrise or 1 h before sunset (little direct light, mostly scattered light). The acquisition time is limited mainly for two reasons. On the one hand, imaging equipment adopts different exposure strategies under different light conditions, which causes great differences in the colors and brightness of images obtained at different times, so a unified segmentation threshold is difficult to choose. On the other hand, plant leaves partially transmit sunlight: under strong direct light, the leaves at the top of the canopy often appear overexposed, while the leaves at the bottom, shaded by those above, often appear underexposed.

In summary, the hemisphere image method is an important indirect method for calculating canopy structure parameters. Because of changing field light conditions, the method can only be used under certain illumination, and the resulting restrictions on acquisition time greatly limit its practical application. In this paper, we adopt a fusion mapping method based on multi-exposure images that removes the brightness variations, highlights, and shadows caused by changing field light, greatly extending the applicable scope of the hemisphere image technique.

2 Materials and Methods

2.1 Hemisphere Image Acquisition Device and Method

The lens was a prime fisheye lens, the SIGMA 8 mm F3.5 EX DG FISHEYE, which provides a 360° horizontal and 180° vertical field of view, mounted on a Canon full-frame camera (Canon EOS 5D Mark III). The camera was placed at the bottom of the canopy, pointing vertically upward toward the sky. With the aperture fixed, the exposure time was switched rapidly to capture a multi-exposure hemispherical image sequence of the canopy. A sample image is shown in Fig. 1.

Fig. 1. Hemispherical photography sample of maize canopy

The image processing program was developed in Visual Studio 2010 using the open-source image processing library OpenCV 2.3, and was run on a PC with a 3.4 GHz CPU and 4 GB of memory.

2.2 Hemisphere Image Processing Process

The image processing pipeline in this paper consists of six steps, as shown in Fig. 2.

Fig. 2. Flowchart of image processing

2.3 Multiple-Exposure Image Fusion

The accuracy of segmenting plant pixels from non-plant pixels in hemisphere images has a great influence on the calculation of canopy porosity and leaf area index. The brightness of a canopy hemisphere image varies with acquisition time: pixels at the top of the canopy are bright while pixels at the bottom are dark, which has two major effects on later image processing: (1) no fixed threshold can be chosen for image segmentation; (2) leaf pixels in overly bright or overly dark areas are easily misclassified as background pixels. Jeon and Lati [14, 15] pointed out that under field illumination conditions, specular reflections and shadows on plants are a great challenge for image segmentation. Liu and Panneton [16, 17] tried optimized color spaces (e.g., RGB, HSV, Lab) for plant image segmentation, but the segmentation thresholds for different color spaces need to be selected and adjusted manually. Woebbecke and Burgos [18, 19], exploiting the fact that plant leaves contain chlorophyll, proposed the excess green (ExG) method, which assumes that plant and background pixels can be projected onto different planes after a linear transformation and then separated by predetermined thresholds; however, the threshold selection in this method is affected by light intensity. To cope with illumination changes, Ruiz and Zheng [20, 21] used machine learning classifiers, taking multiple color features as input vectors, to distinguish plant pixels from background pixels. Experiments show that this approach has a higher degree of automation, but images with highlights and shadows still suffer a high misclassification rate.
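
For concreteness, the sketch below shows ExG-style thresholding of an 8-bit BGR image with OpenCV; the chromaticity normalisation and the threshold value of 0.05 are illustrative assumptions on our part, not values from [18] or from this paper.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Minimal ExG (excess green) segmentation sketch: ExG = 2g - r - b on
// chromaticity-normalised channels, followed by a fixed threshold.
// The threshold value is illustrative only.
cv::Mat exgSegment(const cv::Mat& bgr, double thresh = 0.05) {
    cv::Mat f;
    bgr.convertTo(f, CV_32FC3, 1.0 / 255.0);
    std::vector<cv::Mat> ch;
    cv::split(f, ch);                              // ch[0]=B, ch[1]=G, ch[2]=R
    cv::Mat total = ch[0] + ch[1] + ch[2] + 1e-6f; // avoid division by zero
    cv::Mat num = 2 * ch[1] - ch[2] - ch[0];
    cv::Mat exg = num / total;                     // per-element division
    cv::Mat mask;
    cv::threshold(exg, mask, thresh, 255, cv::THRESH_BINARY);
    mask.convertTo(mask, CV_8U);
    return mask;                                   // 255 = plant, 0 = background
}
```

As the studies above note, any fixed threshold of this kind degrades under changing illumination, which is exactly what the fusion mapping of the following sections is designed to mitigate.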

With multi-exposure image fusion, images with relatively uniform brightness can be obtained under varied illumination conditions [22]. By recovering the light radiation intensity of the scene and using it to correct and normalize the image brightness, the effect of illumination changes on image segmentation can be overcome. The mathematical relationship between image brightness and scene light radiation intensity is shown below.

$$ F(I_{ij} ) = \ln E_{i} + \ln \Delta t_{j} $$
(1)

In the formula, E is the light radiation intensity at a point in the scene (dimensionless); Δt is the exposure time of an image (s); i indexes a sampling point on the image, with value range [1, n]; j indexes an image in the multi-exposure sequence, with value range [1, m]; F(I) is the response function of the camera to the scene light radiation intensity. Formula (1) means that for a point with light radiation intensity Ei in the scene, captured with exposure time Δtj, the brightness value recorded at that image point is F(Iij). In a digital image, pixel brightness values range from 0 to 255, so formula (1) can be discretized and solved as a least squares problem, as shown in formula (2):

$$ D = \sum\nolimits_{i = 1}^{n} {\sum\nolimits_{j = 1}^{m} {\left[ {F(I_{ij} ) - \ln E_{i} - \ln \Delta t_{j} } \right]^{2} } } $$
(2)

After the camera response function has been calculated, the light radiation intensity of each scene point can be computed from the pixel gray value and the image exposure time according to formula (3), completing the fusion of the multi-exposure image sequence.

$$ \ln E_{i} = \frac{1}{m}\sum\nolimits_{j = 1}^{m} {\left[ {F(I_{ij} ) - \ln \Delta t_{j} } \right]} $$
(3)
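
As a concrete illustration of formulas (1)–(3), the sketch below recovers the camera response and fuses a bracketed sequence using OpenCV's HDR module, which implements the same Debevec–Malik least-squares recovery; note this module appeared in OpenCV 3.0, so the paper's OpenCV 2.3 program would have solved formula (2) directly. The file names and exposure times are hypothetical.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp>
#include <vector>

int main() {
    // Hypothetical bracketed hemispherical images and their exposure times (s).
    std::vector<cv::Mat> images = {
        cv::imread("canopy_1_500.jpg"),  // 1/500 s
        cv::imread("canopy_1_125.jpg"),  // 1/125 s
        cv::imread("canopy_1_30.jpg"),   // 1/30 s
    };
    std::vector<float> times = {1.0f / 500, 1.0f / 125, 1.0f / 30};

    // Recover the camera response function F(I) as the least-squares
    // solution of formula (2).
    cv::Mat response;
    cv::Ptr<cv::CalibrateDebevec> calibrate = cv::createCalibrateDebevec();
    calibrate->process(images, response, times);

    // Merge the sequence into a radiance map per formula (3); the result is
    // a 32-bit float image whose values span several orders of magnitude.
    cv::Mat radiance;
    cv::Ptr<cv::MergeDebevec> merge = cv::createMergeDebevec();
    merge->process(images, radiance, times, response);

    cv::imwrite("radiance.hdr", radiance);  // Radiance format keeps the range
    return 0;
}
```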

Figure 3 shows an image sequence with different exposure times, and Fig. 4 shows the pseudo-color image of the fused light radiation intensity (values increase from blue to red). As Fig. 3 shows, limited by the 0–255 pixel value range, the tonal detail in dark shadows and highlights is poor: under direct sunlight the top leaves are rendered very bright and the bottom leaves very dark. Fusing the pixels of the multi-exposure images into light radiation intensity space extends the value range. As shown in Fig. 4, P1 and P2 differ by three orders of magnitude in the light radiation intensity image. Compared with the original images, the light radiation intensity image has a much wider value range and renders more detail in the highlight and shadow areas of the originals.

Fig. 3. Maize canopy hemispherical images with different exposure times

Fig. 4. Fusion image of irradiance values

2.4 Light Radiation Intensity Image Mapping

The light radiation intensity map expands the value range of the original RGB image and retains the information of the specular reflection and shadow regions across the differently exposed sequence. To make the brightness distributions of light radiation intensity images acquired at different times more consistent and uniform, the images need to be compressed by a mapping. In this paper, drawing on the idea of histogram equalization, the light radiation intensity image is compressed while the brightness differences among images shot at different times of day are corrected. The specific method is as follows: within a value interval of the light radiation intensity image, a value V0 is found that minimizes formula (4), where h(x) is the histogram function:

$$ \frac{{\left( {V_{0} - 0.5(L_{\max } + L_{\min } )} \right)^{2} }}{{L^{2} }} + \frac{{\alpha \left( {\sum\nolimits_{x = 0}^{{V_{0} }} {h(x)} - 0.5N} \right)^{2} }}{{N^{2} }} $$
(4)

In the formula, L is the width of the value interval of the light radiation intensity map, and Lmax and Lmin are its maximum and minimum; α is a weighting parameter with range [0, ∞) that controls the mapping: the closer α is to 0, the closer the mapping is to a linear transformation, and the larger α is, the closer the mapping is to histogram equalization (in this paper α is set to 1.0, so both effects are reflected); N is the number of pixels in the interval. The value V0 calculated from formula (4) divides L into two intervals, to each of which formula (4) is applied again to obtain V10 and V11, dividing the whole interval into four. This operation is repeated until the entire range is divided into 256 intervals, establishing a mapping between the light radiation intensity image and a 256-level image (a code sketch of this recursion is given below Fig. 5). The method diagram is shown in Fig. 5.

Fig. 5. Image mapping based on the histogram equalisation method
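
A compact sketch of this recursive bisection follows, under the stated assumptions (α = 1.0, a precomputed histogram h of the radiance values, 256 output levels); the structure and helper names are ours, not the paper's.

```cpp
#include <cmath>
#include <vector>

// Recursive histogram bisection per formula (4): each interval [lo, hi) is
// split at the V0 that balances the interval midpoint against the histogram
// median; 8 levels of recursion yield the 256 output intervals in the text.
struct HistMapper {
    std::vector<double> h;  // histogram of radiance values, one bin each
    double alpha = 1.0;     // weighting parameter of formula (4)

    // Find the split value V0 in [lo, hi) minimising formula (4).
    int bestSplit(int lo, int hi) const {
        double N = 0;
        for (int x = lo; x < hi; ++x) N += h[x];
        if (N <= 0) return lo + 1;
        double L = hi - lo, mid = 0.5 * (lo + hi - 1);
        int best = lo + 1;
        double bestCost = 1e300, cum = 0;
        for (int v = lo; v < hi - 1; ++v) {
            cum += h[v];
            double cost = std::pow((v - mid) / L, 2)                 // midpoint term
                        + alpha * std::pow((cum - 0.5 * N) / N, 2);  // median term
            if (cost < bestCost) { bestCost = cost; best = v + 1; }
        }
        return best;
    }

    // Collect the 2^depth interval boundaries; call with depth = 8 to build
    // the mapping table from radiance bins to 256 gray levels.
    void split(int lo, int hi, int depth, std::vector<int>& bounds) const {
        if (depth == 0 || hi - lo < 2) { bounds.push_back(hi); return; }
        int v0 = bestSplit(lo, hi);
        split(lo, v0, depth - 1, bounds);
        split(v0, hi, depth - 1, bounds);
    }
};
```

Each radiance pixel is then assigned the index of the interval containing it, which both compresses the range and equalises brightness across acquisition times.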

2.5 Calculating Leaf Area Index and Mean Leaf Inclination Angle from Hemisphere Images

Monsi and Saeki [23] introduced the Beer-Lambert law of light propagation in a homogeneous medium into the light propagation model of the crop canopy. The specific formula is as follows:

$$ I/I_{0} = e^{ - k \cdot LAI} $$
(5)

In the formula, I is the radiation intensity at the bottom of the canopy (lux); I0 is the radiation intensity above the canopy (lux); LAI is the leaf area index (dimensionless). When only direct light is considered, I/I0 can be expressed as the canopy porosity T, and the extinction coefficient k is related to the incidence angle of the light and to the leaf inclination angle distribution of the canopy itself [24]. It is calculated as follows:

$$ k = G(\theta ,\alpha )/\cos \theta $$
(6)

In the formula, θ is the incidence angle of the light, namely the zenith angle; α is the mean leaf inclination angle; G(θ, α) is the projection function, that is, the area that a unit leaf area with leaf inclination angle α projects in the direction θ. Accordingly, the extinction coefficient depends on the light incidence angle and on leaf orientation (expressed through the leaf inclination angle). From formulas (5) and (6), the general formula for calculating the leaf area index of a crop canopy from hemisphere images is derived:

$$ T(\theta ) = e^{ - G(\theta ,\alpha ) \cdot LAI/\cos \theta } $$
(7)

In the formula, T(θ) is the canopy porosity at zenith angle θ. The leaf area index is then calculated as in formula (8):

$$ LAI = - \ln T(\theta ) \cdot \cos \theta /G(\theta ,\alpha ) $$
(8)

The projection function G(θ, α) has two useful properties: (1) when the view zenith angle is 57°, G(θ, α) = 0.5, a constant [25]; (2) when 25° < θ < 65°, G(θ, α) can be regarded as a linear function of θ whose slope can be calculated from the leaf inclination angle [26]. Formula (8) can therefore be simplified as:

$$ LAI = - \ln T(57^\circ ) \cdot \cos (57^\circ )/0.5 $$
(9)

In the formula, T(57°) is the canopy porosity obtained from the hemisphere image at a zenith angle of 57°. The canopy porosity is calculated as:

$$ T(\theta ) = P_{L} (\theta )/P_{S} (\theta ) $$
(10)

In the formula, PL(θ) is the number of gap (sky) pixels in the annulus of the hemisphere image corresponding to zenith angle θ, and PS(θ) is the total number of pixels in that annulus; a sketch of this computation is given below.
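
The sketch below counts gap pixels inside the annulus for one zenith angle, assuming an equidistant fisheye projection (image radius proportional to zenith angle) and a binary segmentation with 255 = plant; the centre, radius, and annulus half-width are illustrative assumptions, not the paper's calibration.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Gap fraction T(theta) per formula (10): gap pixels / total pixels inside
// the annulus corresponding to zenith angle thetaDeg. radius90 is the image
// radius of the 90-degree zenith circle; halfWidthDeg is illustrative.
double gapFraction(const cv::Mat& plantMask, cv::Point2d centre,
                   double radius90, double thetaDeg, double halfWidthDeg = 2.0) {
    double rIn  = radius90 * (thetaDeg - halfWidthDeg) / 90.0;
    double rOut = radius90 * (thetaDeg + halfWidthDeg) / 90.0;
    long total = 0, sky = 0;
    for (int y = 0; y < plantMask.rows; ++y)
        for (int x = 0; x < plantMask.cols; ++x) {
            double r = std::hypot(x - centre.x, y - centre.y);
            if (r < rIn || r >= rOut) continue;          // outside the annulus
            ++total;
            if (plantMask.at<uchar>(y, x) == 0) ++sky;   // gap (sky) pixel
        }
    return total > 0 ? static_cast<double>(sky) / total : 0.0;
}
```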

The leaf area index of a canopy can be calculated with formula (9), and the mean leaf inclination angle can then be obtained indirectly from the LAI. The porosities T(25°) to T(65°) for zenith angles 25° to 65° are extracted from the hemisphere images, and the series G(25°) to G(65°) is calculated with formula (8). The θ and G(θ) series satisfy a linear relationship, so the slope D of the fitted straight line can be obtained, and the mean leaf inclination angle MLA is then calculated from the polynomial in D:

$$ \begin{aligned} {\text{MLA}} = {} & 56.63 + 2.521 \times 10^{3} D - 141.471 \times 10^{ - 3} D^{2} \\ & - 15.59 \times 10^{ - 6} D^{3} + 4.18 \times 10^{ - 9} D^{4} + 442.83 \times 10^{ - 9} D^{5} \end{aligned} $$
(11)
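
As a worked illustration of the whole procedure, the sketch below computes LAI from T(57°) per formula (9) and MLA from the fitted slope D per formula (11); the least-squares fit and function signatures are our assumptions, and the polynomial coefficients are used exactly as printed above.

```cpp
#include <cmath>
#include <vector>

const double kPi = 3.141592653589793;

// LAI from the 57-degree gap fraction (formula (9)).
double lai57(double t57) {
    return -std::log(t57) * std::cos(57.0 * kPi / 180.0) / 0.5;
}

// MLA from gap fractions at zenith angles 25-65 degrees: invert formula (8)
// to get G(theta), fit the slope D by least squares, then apply formula (11).
double meanLeafAngle(const std::vector<double>& thetaDeg,
                     const std::vector<double>& gap, double lai) {
    double sx = 0, sy = 0, sxy = 0, sxx = 0;
    const int n = static_cast<int>(thetaDeg.size());
    for (int k = 0; k < n; ++k) {
        double th = thetaDeg[k] * kPi / 180.0;
        double g = -std::log(gap[k]) * std::cos(th) / lai;  // G from formula (8)
        sx += thetaDeg[k]; sy += g;
        sxy += thetaDeg[k] * g; sxx += thetaDeg[k] * thetaDeg[k];
    }
    double D = (n * sxy - sx * sy) / (n * sxx - sx * sx);   // fitted slope
    // Polynomial of formula (11), coefficients as printed.
    return 56.63 + 2.521e3 * D - 141.471e-3 * D * D
         - 15.59e-6 * std::pow(D, 3) + 4.18e-9 * std::pow(D, 4)
         + 442.83e-9 * std::pow(D, 5);
}
```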

3 Results and Analysis

LAI and MLA of the test materials were calculated using the method described in this article. The test material was the maize variety Xianyu 335, sown on June 5, 2013, at a planting density of 60,000 plants/ha under normal water management. The canopy hemisphere images acquired on August 6, August 13, August 19, August 22, August 26, and September 12 were processed and analyzed. The LAI and MLA results are shown in Fig. 6.

Fig. 6. Experimental results of canopy structure

Around August 20 was 20 days after silking, when plant growth reached its maximum. In the measurement results, the LAI of 3.71 measured on August 19 was the peak of the whole measurement sequence, consistent with the field observations. The measured MLA values follow the same trend as the LAI values: as growth approached its maximum, the upper leaves became more upright, increasing the transmittance of solar radiation into the upper canopy. As the leaves senesced, their shapes gradually relaxed, and the MLA trended downward in the later stage.

To further validate the precision and accuracy of the method described in this article, the maize leaf area was obtained by the direct method and the true LAI values were calculated from it. The leaf area measurement method is shown in Fig. 7: the leaves were detached and laid flat one by one to be photographed, and the number of green leaf pixels in the images was converted into leaf area. Limited by the length of this paper, that image processing algorithm is not described here (Figs. 7 and 8).

Fig. 7. The direct method of LAI measurement

Fig. 8. Linear fit between the proposed method and the direct method

A straight line was fitted between the values measured by the method described in this article and those from the direct method: the closer the slope of the linear equation is to 1, the better the accuracy of the method, and the closer the value R is to 1, the stronger the correlation between the two. From the results, the slope of the equation is 1.463, which indicates that the values measured by the method in this article are generally lower than those from the direct method; the value R is 0.940, indicating a high correlation and a high consistency in trend with the direct method. The accuracy of the method can later be improved by introducing a correction coefficient.

4 Discussion

The precision and stability of canopy structure parameters obtained by the hemisphere image method are mainly affected by the light conditions at imaging time. In previous research, changing the aperture and exposure time was often adopted to cope with light changes, but in practice this relies heavily on experience, and it is difficult to acquire hemisphere images with uniform brightness. The literature [27] pointed out that the canopy porosities of hemisphere images with different exposure times may differ by 20%. Another way to eliminate the influence of changing light is to acquire images near sunset, when scattered light dominates and differences in light conditions are relatively small, reducing the light spots and shadows that transmitted direct light forms on the leaves. The lighting conditions required at acquisition time limit the application scope of the hemisphere image method and cause great inconvenience in practice, especially for continuous time-series monitoring, where lighting changes are inevitable.

The multi-exposure fusion mapping method described in this article eliminates, to a certain extent, the image brightness differences caused by light changes, allowing the subsequent image processing program to choose the same stable threshold for image segmentation. Fusing the multi-exposure images increases the range of image brightness values and the order-of-magnitude differences between pixels of different brightness. The mapping algorithm then transforms the fused images into ordinary images, compressing their brightness range. In the resulting images, bright areas of the originals are suppressed and dark areas are lifted, achieving the goal of reducing the effect of light changes on image brightness.

5 Conclusion

In this article, the hemisphere image method was used to acquire maize canopy structure parameters indirectly. To address the problem that the hemisphere image method is easily affected by light changes, a fusion algorithm based on multi-exposure images was proposed, which makes the brightness of images obtained at different times relatively uniform and stable while eliminating specular reflections and shadows on the leaves. The experiments show that, compared with direct measurements, the method described in this article has high consistency, providing a new means and a new research idea for acquiring canopy structure parameters from hemisphere images.