Segmentation of Tomatoes in Open Field Images with Shape and Temporal Constraints
Abstract
With the aim of estimating the growth of tomatoes during the agricultural season, we propose to segment tomatoes in images acquired in open field, and to derive their size from the segmentation results obtained in pairs of images acquired each day. To cope with difficult conditions such as occlusion, poor contrast and movement of tomatoes and leaves, we propose to base the segmentation of an image on the result obtained on the image of the previous day, guaranteeing temporal consistency, and to incorporate a shape constraint in the segmentation procedure, assuming that the image of a tomato is approximately an ellipse, guaranteeing spatial consistency. This is achieved with a parametric deformable model with shape constraint. Results obtained over three agricultural seasons are very good for images with limited occlusion, with an average relative distance between the automatic and manual segmentations of 6.46 % (expressed as a percentage of the tomato size).
Keywords
Image segmentation · Parametric active contours · Shape constraint · Precision agriculture
1 Introduction
Optimal harvesting date and predicted yield are valuable information when farming open field tomatoes, making harvest planning and work at the processing plant much easier. Monitoring tomatoes during their early stages of growth is also interesting to assess plant stress or abnormal development. Satellite data and crop growth modeling are generally used for estimating the yield of a large region [10, 13]. However, satellite data are affected by adverse climatic conditions (clouds, etc.), resulting in inaccurate predictions [10]. Crop growth modeling, which integrates information regarding the cultivated plant, soil and weather conditions, considers the ideal case with no infected plants. Recent studies have concentrated on combining these two approaches [19]. Nevertheless, these methods depend on the quality of the different parameters involved (vegetation indices, soil and weather information) and they are not accurate enough to detect abnormal development.
In this work, we present a different approach where we intend to monitor the growth of tomatoes and measure their size in an open field. For this purpose, two cameras are installed in the field and two images are captured at regular intervals. In order to avoid a complete 3D reconstruction, we assume that a tomato can be approximated by a sphere in the 3D space, which projects into an ellipse in the image plane. Hence, the first part of our system aims at detecting and segmenting the tomatoes in both images, using elliptic approximations. Then, the second part aims at estimating the sphere radius, using the camera parameters. An estimate of the yield is obtained from this information. In this paper, we focus on the segmentation procedure only.
Computer vision algorithms have been applied in the agricultural domain in order to replace human operators with an automated system. They have been used to grade and sort agricultural products [4, 7, 11], to detect weeds in a field [2, 9, 18], and to model the growth of fruits and then predict the yield [1, 14]. In [1], the yield of an apple orchard is estimated using only the density of flowers. In [14], only 5 images captured at different stages of the apple maturation are studied in order to predict the yield. These methods [1, 14] are limited to a controlled environment (apple orchards) where complex scenarios such as occlusions are not considered. Moreover, in [1] the observed scene is modified by placing a black cloth behind the tree in order to simplify the image processing tasks. However, to the best of our knowledge, there has not been any related work where the growth of a fruit or vegetable cultivated in open fields is studied based on the images captured during the entire agricultural season.
In this work, the segmentation should be as automatic as possible. However, we assume that an operator validates each obtained segmentation. If the result is poor, the operator rejects it. Indeed, given the difficulties, the segmentation is a very challenging task, and a manual validation is preferable. This approach enables us to use the segmentation done in the \(i^{th}\) image (if validated) as a reference for the segmentation of the same tomato in the \((i+1)^{th}\) image.
In order to segment the tomatoes, we use a parametric active contour model, which allows us to introduce a priori knowledge on the shape of the object to be segmented, thus making the segmentation more robust to noise and occlusion. Using an elliptic shape constraint is consistent with our prior assumption.
The main steps of the segmentation algorithm are as follows: first, gradient information is used in order to find the candidate contour points and propose several elliptic approximations using the RANSAC algorithm. Secondly, region information is added, enabling us to select the best ellipse for the initialization of the active contour and finding the regions of potential occlusions. Thirdly, the active contour with elliptic constraint is applied. Finally, four ellipse estimates are computed. The operator has only to select the best one as the final segmentation.
The original features of the proposed algorithm include the approximation of the tomatoes as ellipses and the restriction of the computation of the image energy to the non-occluded regions. These features allow coping with occlusions and with local loss of contours and edges.
We present the active contour model with shape constraint in Sect. 2 and the different steps of the segmentation algorithm in Sect. 3. Section 4 discusses the experimental results. A brief discussion on the second part of the system, which aims at estimating the radius of the tomatoes, is presented in Sect. 5. This paper extends our preliminary work in [15].
2 Active Contour with an Elliptical Shape Prior
The minimum of \(\mathbf E _{T}\) is obtained in two steps: first, a least square estimate of the ellipse \(z_{e}\) is computed from the initial contour \(z_{0}\). Then, the evolving contour z is computed by minimizing \(\mathbf E _{T}\) while assuming \(z_{e}\) fixed. From the evolving contour z so obtained, the parameters of the least square estimate of the ellipse \(z_{e}\) are regularly updated. This two-step iterative process is repeated, in order to obtain the minimum of \(\mathbf E _{T}\).
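The alternation above can be sketched as follows. This is only a skeleton under stated assumptions: `fit_ellipse` and `evolve` are hypothetical callables standing in for the least square ellipse fit and one descent step on \(\mathbf E _{T}\), whose exact forms are given by the energy definitions in the paper.

```python
def minimize_total_energy(z0, fit_ellipse, evolve, n_outer=10, n_inner=20):
    """Skeleton of the two-step alternating minimization: the reference
    ellipse z_e is re-estimated (least squares) from the current contour,
    then the contour z evolves for a few iterations with z_e held fixed.
    The iteration counts are assumptions, not values from the paper."""
    z = z0
    for _ in range(n_outer):
        z_e = fit_ellipse(z)      # step 1: update the reference ellipse
        for _ in range(n_inner):
            z = evolve(z, z_e)    # step 2: evolve z with z_e fixed
    return z
```

In practice `evolve` is the finite-difference update of the discretized evolution equation mentioned below.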
To find iteratively a solution of this equation, we introduce a time variable, and the resulting equation is discretized using finite differences, as in the case of the classical active contours.
3 Detailed Algorithm
In this section, we present an algorithm which allows us to follow the growth of a tomato, which has been manually segmented in the first image (\(i=1\)).
Let us denote by \(im^{i+1}\) the \((i+1)^{th}\) image of the tomato S. In the rest of this paper, an ellipse centered at [xc, yc], whose semi major and minor axes lengths are a and b, respectively, and which has a rotation angle of \(\varphi \), is represented as \(Ell= [xc,yc,a,b,\varphi ]\). The tomato approximated by an ellipse in \(im^{i}\) is represented as \(Ell^{i}= [xc^{i},yc^{i},a^{i},b^{i},\varphi ^{i}]\). In our sequential approach, the computation of the contour in the \((i+1)^{th}\) image is based on both the information in \(im^{i+1}\) and the contour of the tomato in the \(i^{th}\) image. The temporal regularization (assuming little growth and movement of the tomato during a day) and the spatial regularization (tomato modeled as a sphere in the 3D space) are used throughout the segmentation procedure.
3.1 Preprocessing
As mentioned above, the color information is not of much use. However, the edges of tomatoes are more prominent in the red component of the image, and hence only this component is considered. The original image is cropped around the position \((xc^{i},yc^{i})\), resulting in a smaller image \((imS_{c}^{i+1})\). The contrast is enhanced by a contrast stretching transformation.
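The preprocessing step can be sketched as below; the crop half-size and the min-max stretching to [0, 255] are assumptions, since the paper does not specify the crop dimensions or the exact stretching transformation.

```python
import numpy as np

def preprocess(im_rgb, xc, yc, half_size=100):
    """Extract the red channel, crop it around the previous tomato
    center (xc, yc), and enhance contrast by linear stretching.
    Returns the stretched crop and its offset in image coordinates."""
    red = im_rgb[:, :, 0].astype(np.float64)
    h, w = red.shape
    x0, x1 = max(0, xc - half_size), min(w, xc + half_size)
    y0, y1 = max(0, yc - half_size), min(h, yc + half_size)
    crop = red[y0:y1, x0:x1]
    lo, hi = crop.min(), crop.max()
    stretched = (crop - lo) / (hi - lo + 1e-12) * 255.0
    return stretched, (x0, y0)  # offset maps crop coords back to the image
```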
3.2 Updating the Tomato Position
Due to its increasing weight, the tomato tends to fall towards the ground (Fig. 2). Its position in \(imS^{i+1}_{c}\) is calculated using pattern matching. The bright areas, which may correspond to the tomato, are extracted by convolving the cropped image with a binary mask representing a white disk of radius \(\chi r^{i}\), where \(r^{i} = \frac{a^{i}+b^{i}}{2}\) and \(\chi \) is a constant determined empirically (\(\chi =1.25\) in our experiments). The local maxima \(C^{i+1}_{c}=\lbrace (x_{k},y_{k})\), \(k=1,...,k_{n} \rbrace \) are then extracted. From these \(k_{n}\) points, the one \(C_{m}=(x_{m},y_{m})\) which is the closest to \((xc^{i},yc^{i})\) is selected as the new location of the tomato center (Fig. 2(b)).
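A minimal sketch of this position update follows; the neighborhood size used to detect local maxima is an assumption, as the paper does not specify it.

```python
import numpy as np
from scipy.ndimage import convolve, maximum_filter

def update_position(im, xc_prev, yc_prev, r_prev, chi=1.25):
    """Locate the tomato center: convolve with a white disk of radius
    chi * r_prev, extract local maxima of the response, and keep the
    maximum closest to the previous center (xc_prev, yc_prev)."""
    r = int(round(chi * r_prev))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disk = (xx**2 + yy**2 <= r**2).astype(np.float64)
    resp = convolve(im.astype(np.float64), disk, mode='constant')
    # local maxima of the response map (window size is an assumption)
    is_max = (resp == maximum_filter(resp, size=2 * r + 1)) & (resp > 0)
    ys, xs = np.nonzero(is_max)
    if len(xs) == 0:
        return xc_prev, yc_prev
    d2 = (xs - xc_prev)**2 + (ys - yc_prev)**2
    k = int(np.argmin(d2))
    return int(xs[k]), int(ys[k])
```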
3.3 Elliptic Approximations
In order to obtain an initial contour for the active contour model, we first compute \(l_{n}\) points which may lie on the boundary of the tomato. From these \(l_{n}\) points, a RANSAC estimate is used to obtain several candidate ellipses. Finally, one of these ellipses is selected as the initial contour based on additional region information and size regularization.
A least square estimate of an ellipse calculated from all \(l_{n}\) points might result in a contour far away from the actual boundary because of the detection of irrelevant points. Therefore, we use a RANSAC [5] estimate based on an elliptic model in order to compute several candidate ellipses. Note that the spatial (tomato modeled as an ellipse) and temporal (parameters of the model) regularization has been used in this step to increase the robustness of the segmentation procedure.
Negative variations for a and b (Eq. 8) are possible because of the movement of the tomato with respect to the camera or because of the variation in the orientation, as tomatoes are actually not perfect spherical objects. Equation 9 restricts the apparent size of the tomato while Eq. 10 restricts the admissible values for eccentricity, thus controlling the apparent shape of the tomato.
From the N ellipses computed using the RANSAC algorithm, a total of \(N_{a}\) ellipses, with \(N_{a}<N\), are retained, corresponding to the \(N_{a}\) ellipses with the largest number of inliers (Fig. 4(b)).
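The RANSAC step can be sketched as follows, using an algebraic conic parametrization \(Ax^{2}+Bxy+Cy^{2}+Dx+Ey=1\) for simplicity. The residual threshold and iteration count are assumptions, and the temporal constraints of Eqs. 8-10 (admissible axis variation, size, eccentricity), which the paper applies to each candidate, are omitted here.

```python
import numpy as np

def fit_conic(pts):
    """Algebraic conic fit A x^2 + B xy + C y^2 + D x + E y = 1
    (exact for 5 points, least squares for more)."""
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x**2, x * y, y**2, x, y])
    coef, *_ = np.linalg.lstsq(M, np.ones(len(pts)), rcond=None)
    return coef

def ransac_ellipses(pts, n_iter=200, thresh=0.05, rng=None):
    """RANSAC sketch: repeatedly fit a conic to 5 random candidate
    boundary points and count inliers by algebraic residual.
    Returns (coef, inlier_count) pairs sorted by inlier count, so the
    N_a best candidates are the first N_a entries."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = []
    x, y = pts[:, 0], pts[:, 1]
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 5, replace=False)]
        coef = fit_conic(sample)
        resid = np.abs(x**2 * coef[0] + x * y * coef[1] + y**2 * coef[2]
                       + x * coef[3] + y * coef[4] - 1.0)
        out.append((coef, int((resid < thresh).sum())))
    out.sort(key=lambda c: -c[1])
    return out
```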
3.4 Adding Region Information
A region growing algorithm is applied in order to add region information and determine the best initialization for the active contour among the \(N_{a}\) ellipses \({Ell}^{i+1}_{u}\), where \(u=1,...,N_{a}\). Moreover, potential occlusions are also derived from this information.
Let us denote by \(a_{u}^{i+1}\) and \(b^{i+1}_{u}\) the semiaxis lengths of the candidate ellipse \(Ell_{u}^{i+1},u\in [1,N_{a}]\). We select the ellipse v (Fig. 4(d)) that minimizes \(\left[ \left( a^{i+1}_{u}-a^{i}\right) ^{2}+\left( b^{i+1}_{u}-b^{i}\right) ^{2}\right] \) under the condition \(\tau (v) \le 1.1 \;\tau _{m}\). Thus, we have obtained the initial contour by combining the results obtained using two different segmentation methods, one based on boundary information and the other based on region information. The selected ellipse \(Ell_{v}^{i+1}\) is chosen among the ones for which both results are consistent, leading to a better robustness with respect to occlusions. Moreover, another regularization condition is added, which imposes that the size and shape of the ellipse in \(imS^{i+1}\) are close to the ones in \(imS^{i}\).
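This selection rule can be sketched as below; the region-consistency score \(\tau \) comes from the region growing step and is treated here as an opaque input (its definition is in the paper, not in this sketch).

```python
def select_initial_ellipse(candidates, a_prev, b_prev):
    """Pick the initialization among the N_a candidate ellipses:
    among those whose region-consistency score tau is within 10 %
    of the best score tau_m, keep the one whose semi-axes are closest
    to the previous day's (a_prev, b_prev).
    `candidates` is a list of (a, b, tau) tuples."""
    tau_m = min(tau for _, _, tau in candidates)
    admissible = [(a, b, tau) for a, b, tau in candidates
                  if tau <= 1.1 * tau_m]
    return min(admissible,
               key=lambda c: (c[0] - a_prev)**2 + (c[1] - b_prev)**2)
```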
The next step aims at finding the regions where occlusions could disturb the behavior of the active contour. For example, the region in which the tomato is attached to the plant has a different intensity from the one of the tomato.
Let \({Ell}_{te}\) denote the ellipse which covers the convex hull of \(\omega _{t}\) and which minimizes the number of pixels inside the ellipse \({Ell}_{te}\) and not belonging to the region \(\omega _{t}\) (Fig. 4(e)). Let \(\omega _{te}\) be the region inside \({Ell}_{te}\). Then, the region of occlusion \(\omega _{oc}\) can be computed as \(\omega _{oc} =\omega _{te} \cap \omega _{t}^{c}\).
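On boolean pixel masks, \(\omega _{oc} =\omega _{te} \cap \omega _{t}^{c}\) is a direct set operation; a sketch follows, with a helper that rasterizes an ellipse in the paper's \([xc,yc,a,b,\varphi ]\) parametrization (the computation of \(Ell_{te}\) itself, i.e. the covering-ellipse optimization, is not reproduced here).

```python
import numpy as np

def ellipse_mask(shape, xc, yc, a, b, phi):
    """Boolean mask of the pixels inside the ellipse [xc, yc, a, b, phi]."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    c, s = np.cos(phi), np.sin(phi)
    u = (xx - xc) * c + (yy - yc) * s       # coords in the ellipse frame
    v = -(xx - xc) * s + (yy - yc) * c
    return (u / a)**2 + (v / b)**2 <= 1.0

def occlusion_region(omega_t, ell_te):
    """omega_oc = omega_te ∩ omega_t^c: pixels inside the covering
    ellipse Ell_te that the grown tomato region omega_t does not reach."""
    omega_te = ellipse_mask(omega_t.shape, *ell_te)
    return omega_te & ~omega_t
```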
3.5 Applying Active Contours
The active contour (Sect. 2) is applied with the following initialization \({Ell}^{i+1}_{vc}=[xc^{i+1}_{v},yc^{i+1}_{v},0.95a^{i+1}_{v},0.95b^{i+1}_{v},\varphi ^{i+1}_{v}]\). Indeed, the movement of the curve z is smoother and faster if initialized inside the tomato. For the first \(n_{start}\) iterations, the parameter \(\psi \) is set to zero, so that z moves towards the most prominent contours. Then the shape constraint is introduced for \(n_{ellipse}\) iterations (\(\psi \ne 0\)) in order to guarantee robustness with respect to occlusion. Finally, the shape constraint is relaxed (\(\psi =0\)) for a few \(n_{end}\) iterations, which guarantees reaching the boundary more accurately, as a tomato is not a perfect ellipse.
Note that the image forces are not considered in the region of occlusion \(\omega _{oc}\), in every step of this process.
As explained in Sect. 2, the reference ellipse \(z_{e}\) is regularly updated, every \(n_{shape}\) iterations. A least square estimate calculated from all the points of the curve z is not relevant, because some of them may lie on false contours (e.g. leaves). So, the following algorithm aims at selecting a subset of points that actually lie on the boundary of the tomato.
The first condition ensures that the magnitude of the gradient vector projected onto the normal of the ellipse is strong. The threshold \(\varGamma \) is determined automatically [12]. The second condition ensures that the direction of the gradient is close to the vector normal to the ellipse. The last condition (\(d_{max}= 2\) in our experiments) imposes that the considered point is a meaningful local maximum of the gradient. Finally, the parameters of the reference ellipse are updated by calculating a least square approximation from the subset of points lying on the evolving contour z selected using the above conditions (Fig. 5(a)).
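The three selection conditions can be sketched as below; the alignment threshold `cos_min` is an assumption (the paper only requires the gradient direction to be "close to" the normal), and `grad` is assumed to be a precomputed (H, W, 2) image gradient field.

```python
import numpy as np

def select_boundary_points(points, normals, grad, gamma, cos_min=0.9, d_max=2):
    """Keep contour points satisfying: (1) |grad . n| > gamma,
    (2) gradient direction close to the ellipse normal n, and
    (3) gradient magnitude is a local maximum within d_max pixels
    along the normal."""
    keep = []
    H, W = grad.shape[:2]
    for (x, y), n in zip(points, normals):
        g = grad[int(round(y)), int(round(x))]
        proj = g @ n
        if abs(proj) <= gamma:                        # condition 1
            continue
        if abs(proj) < cos_min * np.linalg.norm(g):   # condition 2
            continue
        # condition 3: local max of |grad| along the normal
        mags = []
        for d in range(-d_max, d_max + 1):
            xs, ys = int(round(x + d * n[0])), int(round(y + d * n[1]))
            if 0 <= xs < W and 0 <= ys < H:
                mags.append(np.linalg.norm(grad[ys, xs]))
        if mags and np.linalg.norm(g) >= max(mags):
            keep.append((x, y))
    return keep
```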
3.6 Refining the Results
 1. A least square approximation \({Ell}^{i+1}_{f1}= [xc^{i+1}_{f1},yc^{i+1}_{f1},a^{i+1}_{f1},b^{i+1}_{f1},\varphi ^{i+1}_{f1}]\) is computed from all the points of z.
 2. Another estimate \({Ell}^{i+1}_{f2}=[xc^{i+1}_{f2},yc^{i+1}_{f2},a^{i+1}_{f2},b^{i+1}_{f2},\varphi ^{i+1}_{f2}]\) is obtained from \(\mathbf P _{h}'\) using the RANSAC algorithm with the following conditions:
$$\begin{aligned} 0.9\,a^{i+1}_{f1}< a^{i+1}_{f2} <1.1\,a^{i+1}_{f1} \end{aligned}$$ (18)
$$\begin{aligned} 0.9\,b^{i+1}_{f1}< b^{i+1}_{f2} <1.1\,b^{i+1}_{f1} \end{aligned}$$ (19)
 3. A least square approximation \({Ell}^{i+1}_{f3}\) is obtained from the subset \(\mathbf P _{h}\).
 4. A weighted least square estimate \({Ell}^{i+1}_{f4}\) is obtained, where the points of \(\mathbf P _{h}\) are assigned a higher weight (0.75) and the other points of z a lower weight (0.25). This gives more importance to the points that are surely on the boundary of the tomato.
If the images have a good contrast, and little or no occlusion, all the four ellipses will be almost identical (Fig. 5(e)). However, in case of occlusions and poor contrast, the four ellipses may be different (Fig. 5(f)), and the user selects the best one.
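The fourth estimate, the weighted least square fit, can be sketched with the same algebraic conic parametrization \(Ax^{2}+Bxy+Cy^{2}+Dx+Ey=1\) used earlier; the parametrization itself is a simplifying assumption (the paper fits the geometric ellipse parameters directly).

```python
import numpy as np

def weighted_conic_fit(pts, weights):
    """Weighted least-squares conic fit: points believed to lie on the
    tomato boundary (subset P_h) receive weight 0.75, the remaining
    contour points 0.25, so the fit is dominated by reliable points."""
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x**2, x * y, y**2, x, y])
    w = np.sqrt(np.asarray(weights, dtype=float))[:, None]
    # solve min || W^(1/2) (M c - 1) ||^2
    coef, *_ = np.linalg.lstsq(w * M, w[:, 0], rcond=None)
    return coef
```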
4 Results
Two cameras (Pentax Optio W80) were installed in an open field of tomatoes. The same setup was used for three agricultural seasons (April–August of 2011, 2012 and 2013). We have identified 21 tomatoes, covering different sites and different seasons, thus ensuring variability (614 images in total). The tomatoes were identified manually by observing the images of the entire agricultural season. Due to the severe occlusions, only a limited number of tomatoes were visible in most of the images of a given season. Therefore, only the tomatoes which were visible in more than 10 consecutive images were studied.
As discussed earlier, the main challenges for the segmentation are the occlusion and the poor quality of the images due to poor illumination and/or shadow. Moreover, for the images acquired in the 2013 agricultural season (\(S=12,...,21\)), the size of the tomatoes was significantly smaller than the one observed during the 2011 and 2012 agricultural seasons (\(S=1,...,11\)). This is due to the variation in the external climatic conditions. Also, in some images, a shadow created by the leaves (or by the tomato itself) can be observed (\(S=8,12\)). As a result, a portion of the contour is not clear and distinct, resulting in an ambiguity on the position of the contour. Given this ambiguity, even a manual segmentation is challenging on this portion of the contour. Moreover, a blurred contour was observed in some images of some sequences (\(S=3,13,18,19,20\)), due to the presence of additional neighboring tomatoes in the background.
The image set was divided into three categories, based on the amount of occlusion P:
Category 1, containing images with an amount of occlusion P less than 30 %, for which the estimation is very robust with respect to segmentation imprecision;
Category 2, with \(30\,\%< P <50\,\%\), which is more prone to segmentation errors;
Category 3, with \(P>50\,\%\), for which it is impossible to perform a reliable segmentation.
The percentage of occlusion was determined manually by selecting the end points of the occluded elliptic arc. Note that the percentage of occlusion was computed only to evaluate the segmentation procedure, and this is not a part of the algorithm.
Due to the smaller size of the tomatoes in sequences \(S=12,...,21\), higher distances were observed in these sequences (since the distances \(D_{meanR}^{i}\) and \(D_{maxR}^{i}\) are normalized by the size \(r^{i}\) of the tomato). For example, Fig. 6(e) shows the obtained segmentation \(Ell_{f4}\) on the \(3^{rd}\) image of sequence \(S = 17\). The distances normalized by the size of the tomato are \(D_{meanR}\) = 9.89 % and \(D_{maxR}\) = 26.60 %. However, the distances expressed in pixels are significantly lower (\(D_{mean}\) = 2, \(D_{max}\) = 5.38 pixels). For most of the sequences, a low \(\mu _{D_{meanR}}\) along with a low \(\sigma _{D_{meanR}}\) demonstrates the robustness of the proposed method. However, for some sequences (\(S=13,17,18\)), higher values of \(\mu _{D_{meanR}}\) and \(\sigma _{D_{meanR}}\) were observed, mainly due to the false detection of the position of the tomato (Sect. 3.2) or to the small size of the tomato, as discussed previously. For the sequence \(S=11\), all the images suffer from poor contrast and noise due to the shadow created by leaves. As a result, even a manual segmentation is a challenging task in this sequence.
Table 1. Mean (\(\mu \)) and standard deviation (\(\sigma \)) of \(D_{meanR}\) and \(D_{maxR}\) (in %), comparing the ellipses \(Ell_{f4}\) and \(Ell_{opt}\) with the manual segmentation M. Only the images belonging to category 1 (i.e. with a low amount of occlusion) have been considered.
S | \(N_{S}^{1}\) | \(Ell_{f4}\): \(\mu _{D_{meanR}}\)  \(\sigma _{D_{meanR}}\)  \(\mu _{D_{maxR}}\)  \(\sigma _{D_{maxR}}\) | \(Ell_{opt}\): \(\mu _{D_{meanR}}\)  \(\sigma _{D_{meanR}}\)  \(\mu _{D_{maxR}}\)  \(\sigma _{D_{maxR}}\)
S = 1  26  1.72  0.77  5.06  2.76  1.34  0.68  3.76  2.21 
S = 2  4  1.85  0.46  5.45  1.91  1.57  0.40  4.43  0.88 
S = 3  21  3.4  2.24  9.79  6.88  2.87  2.05  8.40  6.58 
S = 4  14  2.73  1.92  7.81  5.71  2.20  1.77  6.28  5.34 
S = 5  5  4.81  1.3  13.05  3.56  4.54  1.14  12.44  3.46 
S = 6  0                 
S = 7  25  1.88  0.65  4.81  1.97  1.7  0.49  4.61  1.62 
S = 8  20  6.07  5.75  15.41  10.61  5.4  4.88  14.9  10.37 
S = 9  1  5.26  0  11.86  0  5.24  0.00  11.86  0.00 
S = 10  5  2.25  0.56  6.59  2.25  1.75  0.36  4.40  1.25 
S = 11  4  11.81  4.99  32.66  11.39  10.18  5.38  29.21  14.74 
S = 12  19  4.74  1.33  11.69  3.23  4.25  1.34  11.1  3.44 
S = 13  5  41.5  16.55  84.98  27.57  40.48  16.18  83.51  26.85 
S = 14  4  9.57  2.35  29.48  7.3  9.18  2.57  28.06  7.84 
S = 15  0                 
S = 16  21  4.68  1.08  10.22  2.74  4.46  1.18  10.12  2.68 
S = 17  20  11.78  2.44  26.75  4.21  11.56  2.48  26.85  4.04 
S = 18  23  14.18  20.06  35.76  33.83  13.94  20.09  35.15  33.42 
S = 19  0                 
S = 20  5  8.76  5.38  20.05  14.93  6.88  2.3  13.65  2.98 
S = 21  25  7.34  3.18  16.56  8.93  7.12  3.15  15.76  8.94 
In the results presented so far, \(Ell_{f4}\) was compared with the manual segmentation. However, \(Ell_{f4}\) is not necessarily the best ellipse, and was selected here for illustrative purposes only. Due to the variations in contrast and occlusion, there is not a single ellipse (among the four ellipse estimates) which represents a good segmentation for all the images. Let us denote by \(Ell_{opt}\) the ellipse, among the four estimates (\(Ell_{f1}\), \(Ell_{f2}\), \(Ell_{f3}\) and \(Ell_{f4}\)), for which \(\mu _{D_{meanR}}\) is minimum. Table 1 shows the distribution of \(D_{meanR}\) and \(D_{maxR}\) for the ellipse \(Ell_{opt}\). It can be observed that the values of \(D_{meanR}\) and \(D_{maxR}\) for \(Ell_{opt}\) are lower than those of \(Ell_{f4}\). The operator selects \(Ell_{opt}\) as the final segmentation.
5 Estimating the Size of the Tomato
From the obtained segmentation in both images and the camera parameters, we then estimate the size of the tomatoes. However, determining the image point pairs which correspond to the same point in the 3D space is a challenging task given the complexity of the scene. Instead we simplify the size estimation procedure by exploiting the spherical hypothesis.
The contour of the tomato is approximated by an ellipse whose parameters are calculated using the procedure presented above. Then, the sphere center in the 3D space is computed using triangulation from the centers of the ellipse calculated in both images. Next, the 3D space points situated on the contour of the tomato are computed using properties of projective geometry, independently from each image. Finally, a joint optimization procedure enables us to estimate the sphere radius.
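The triangulation of the sphere center from the two ellipse centers can be sketched as below. The paper does not specify its triangulation method; the classical midpoint method shown here (the midpoint of the shortest segment between the two viewing rays) is an assumption, and the subsequent radius optimization is not reproduced.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation of two viewing rays: c1, c2 are the
    camera centers, d1, d2 unit directions toward the ellipse centers.
    Returns the midpoint of the shortest segment between the rays,
    taken as the 3D sphere center."""
    # solve for ray parameters minimizing |c1 + t1 d1 - (c2 + t2 d2)|
    A = np.column_stack([d1, -d2])
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + t[0] * d1
    p2 = c2 + t[1] * d2
    return (p1 + p2) / 2.0
```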
In order to evaluate the size estimation procedure, the size of tomatoes observed in the laboratory was measured. Since a tomato is not a perfect sphere, two reference values were measured manually and compared with the estimated radius of the sphere. For the manually segmented tomatoes observed in the laboratory, we found that the relative percentage error between the larger of the two reference values and the estimated radius was less than 5 % in 91 % of the cases. For the tomatoes cultivated in the open field, the relative percentage error was less than 10 % in 80 % of the cases [16]. The errors are mainly caused by imperfect segmentation, due to shadowing effects and the poor quality of the images.
6 Conclusions
We presented a segmentation procedure used to monitor the growth of tomatoes from images acquired in an open field. Starting from an approximate computation of the position of the center of the tomato, segmentation algorithms based on contour and region information are proposed and combined, in order to determine a first estimate of the contour. Then, a parametric active contour with shape constraint is applied and four ellipse estimates representing the tomatoes are obtained. In all the steps of this process, a priori knowledge about the shape and the size of the tomatoes is modeled and incorporated as regularization terms, leading to a better robustness. It is supposed that the operator selects, at the end of the process for each image, the ellipse corresponding to the best elliptic estimation of the actual contour.
The segmentation of tomatoes is a challenging task due to the presence of occlusion and variations in contrast. In order to evaluate the robustness of the proposed algorithm, the entire image set was divided into three categories based on the amount of occlusion. For the images with an acceptable level of occlusion, good results were obtained: \(D_{meanR}\) was less than 10 % for most (87 %) of the images. Also, the low standard deviation of \(D_{meanR}\) indicates the robustness of the proposed algorithm. Good results with \(D_{meanR}<10\,\%\) were also obtained on 73 % of the images containing a significant amount of occlusion.
For the moment, it has been assumed that an operator manually selects one ellipse as the final segmentation. In future work, we wish to provide automatically the best representation of the tomato. Also, in some images, the position of the tomato is not detected correctly due to the presence of other tomatoes nearby. This could be improved by updating the position of the tomato globally by considering also the movement of adjacent tomatoes. One possible improvement for the active contour model is to restrict the size of the reference ellipse, as there is little growth between two consecutive images.
Acknowledgements
This work was partly supported by the MCUBE project (European Regional Development Fund (ERDF)), which aims at integrating multimedia processing capabilities in a classical Machine to Machine (M2M) framework, thus allowing the user to remotely monitor an agricultural field. The authors would like to thank Jérôme Grangier, for his participation in this project.
References
 1. Aggelopoulou, A., Bochtis, D., Fountas, S., Swain, K., Gemtos, T., Nanos, G.: Yield prediction in apple orchards based on image processing. J. Precis. Agric. 12, 448–456 (2011)
 2. Aitkenhead, M., Dalgetty, I., Mullins, C., McDonald, A., Strachan, N.: Weed and crop discrimination using image analysis and artificial intelligence methods. Comput. Electron. Agric. 39(3), 157–171 (2003)
 3. Charmi, M., Ghorbel, F., Derrode, S.: Using Fourier-based shape alignment to add geometric prior to snakes. In: ICASSP, pp. 1209–1212 (2009)
 4. Du, C., Sun, D.: Learning techniques used in computer vision for food quality evaluation: a review. J. Food Eng. 72(1), 39–55 (2006)
 5. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)
 6. Foulonneau, A., Charbonnier, P., Heitz, F.: Affine-invariant geometric shape priors for region-based active contours. IEEE Trans. Pattern Anal. Mach. Intell. 28(8), 1352–1357 (2006)
 7. Jayas, D., Paliwal, J., Visen, N.: Review paper (automation and emerging technologies): multi-layer neural networks for image analysis of agricultural products. J. Agric. Eng. Res. 77(2), 119–128 (2000)
 8. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: active contour models. Int. J. Comput. Vis. 1(4), 321–331 (1988)
 9. Lee, W.S., Slaughter, D.C., Giles, D.K.: Robotic weed control system for tomatoes. Precis. Agric. 1, 95–113 (1999)
 10. Mkhabela, M., Bullock, P., Raj, S., Wang, S., Yang, Y.: Crop yield forecasting on the Canadian prairies using MODIS NDVI data. Agric. Forest Meteorol. 151(3), 385–393 (2011)
 11. Narendra, V.G., Hareesh, K.S.: Prospects of computer vision automated grading and sorting systems in agricultural and food products for quality evaluation. Int. J. Comput. Appl. 1(4), 1–9 (2010)
 12. Otsu, N.: A threshold selection method from gray-level histograms. Automatica 11(285–296), 23–27 (1975)
 13. Prasad, A., Chai, L., Singh, R., Kafatos, M.: Crop yield estimation model for Iowa using remote sensing and surface parameters. Int. J. Appl. Earth Obs. Geoinf. 8(1), 26–33 (2006)
 14. Stajnko, D., Cmelik, Z.: Modelling of apple fruit growth by application of image analysis. Agric. Conspec. Sci. 70, 59–64 (2005)
 15. Verma, U., Rossant, F., Bloch, I., Orensanz, J., Boisgontier, D.: Shape-based segmentation of tomatoes for agriculture monitoring. In: International Conference on Pattern Recognition Applications and Methods (ICPRAM), Angers, France, pp. 402–411 (2014)
 16. Verma, U., Rossant, F., Bloch, I., Orensanz, J., Boisgontier, D.: Tomato development monitoring in an open field, using a two-camera acquisition system. In: 12th International Conference on Precision Agriculture (2014)
 17. Xu, C., Prince, J.: Snakes, shapes, and gradient vector flow. IEEE Trans. Image Process. 7(3), 359–369 (1998)
 18. Yang, C., Prasher, S., Landry, J., Ramaswamy, H., Ditommaso, A.: Application of artificial neural networks in image recognition and classification of crop and weeds. Can. Agric. Eng. 42(3), 147–152 (2000)
 19. Zhao, H., Pei, Z.: Crop growth monitoring by integration of time series remote sensing imagery and the WOFOST model. In: Second International Conference on Agro-Geoinformatics, pp. 568–571 (2013)