Image analysis-based recognition and quantification of grain number per panicle in rice
Abstract
Background
The number of grains per panicle of rice is an important phenotypic trait and a significant index for variety screening and cultivation management. Current methods for counting the number of grains per panicle are conducted manually, making them labor intensive and time consuming. Existing image-based grain counting methods have difficulty separating overlapping grains.
Results
In this study, we aimed to develop an image analysis-based method to quickly quantify the number of rice grains per panicle. We compared the counting accuracy of several methods among different image acquisition devices and multiple panicle shapes on both Indica and Japonica subspecies of rice. The linear regression model developed in this study had a grain counting accuracy greater than 96% and 97% for Japonica and Indica rice, respectively. Moreover, while the deep learning model that we used was more time consuming than the linear regression model, the average counting accuracy was greater than 99%.
Conclusions
We developed a rice grain counting method that accurately counts the number of grains on a detached panicle, and we believe this method can guide the development of high-throughput methods for counting the grain number per panicle in other crops.
Keywords
Rice, Grain number per panicle, Image processing, Model, Counting

Background
Phenomics involves the gathering of high-dimensional phenotypic data to screen mutants with unique traits and identify the corresponding genes [1]. Current methods for obtaining phenotypic data are generally manual [2], making them time-consuming, labor-intensive, and less accurate. Therefore, such approaches have been impractical for high-throughput measurements during plant growth and development.
The number of rice grains per panicle is a key trait that affects grain cultivation, management, and subsequent yield [3, 4, 5], as well as an important parameter for evaluating the potential of new rice cultivars [6]. Rapid measurement of grain number per panicle could improve the efficiency of scientific research and cultivar development.
Image analysis-based methods have been widely used in many aspects of plant phenotyping. Image analysis-based high-throughput phenotyping platforms have also been applied to measure phenotypic traits of rice, including plant height, green leaf area, and tiller number [7]. Yang et al. [8] measured the number of panicles per plant using multi-angle color images and an artificial neural network algorithm, and the same group reported a reliable, automatic, high-throughput leaf scorer (HLS) for the evaluation of leaf traits, including leaf number, size, shape, and color [9]. Feng et al. [10] developed a hyperspectral imaging system for the accurate prediction of the above-ground biomass of individual rice plants in the visible and near-infrared spectral regions. Image analysis techniques have also been used to assess plant nitrogen and water status [11, 12]. Huang et al. [13] developed a prototype for the automatic measurement of panicle length using dual cameras, one equipped with a long-focus lens and one with a short-focus lens, to capture detailed and complete images of the rice panicle. In addition, image-based methods have been used to characterize seed morphology, including seed size, shape, color, and endosperm structure [14, 15, 16]. With the advancement of modern optical imaging and automation technology, hardware is no longer the bottleneck for phenotyping; instead, the analysis and processing of multi-disciplinary optical images have become the new bottleneck [17].
Rapid counting of grain number per panicle has been approached in several ways. Generally, the panicle is spread out on a white background and held in place with metal pins so that branches and grains do not overlap [14, 18]. Spreading out the grains after threshing is also effective [16]. However, these methods are not suitable for rice panicles with severely adhering grains, such as those grown in the Yangtze River Basin. Currently, there are two primary methods for determining grain number per panicle. The first is to count the grains manually after threshing, a time-consuming and labor-intensive process. When processing threshed grains, the large number of awns and the overlaid, clustered grains make it very challenging for traditional algorithms to identify individual rice kernels that are touching [19, 20]. Husking the grains would make them smoother and easier to separate, but husking also produces broken kernels and complicates the counting procedure.
The second and most common method is on-panicle counting, in which the grains are counted spikelet by spikelet on the panicle. Collecting an image of an entire panicle is also problematic because of overlaid and clustered grains. Three-dimensional image acquisition may partly solve the problem of touching grains, but the necessary equipment is expensive and complicated to use.
In this study, we propose a new counting method that uses image processing and a deep learning algorithm to detect rice grains in images of primary branches acquired with a digital scanner. Our method solves the grain overlap and clustering problems, is cost-effective and user-friendly, and facilitates high-throughput counting of grain number per panicle in rice.
Methods
Field experiment
Basic information of experimental materials
| Density (10^{4} plant ha^{−1}) | Fertilizer (kg ha^{−1}) | Yangliangyou No. 6 (Indica) | Fengyou xiangzhan (Indica) | Wuyunjing No. 27 (Japonica) | Nanjing No. 9108 (Japonica) | Total |
| --- | --- | --- | --- | --- | --- | --- |
| 150 | 150 | 15 | 15 | 15 | 15 | 60 |
| 150 | 225 | 15 | 15 | 15 | 15 | 60 |
| 150 | 300 | 15 | 15 | 15 | 15 | 60 |
| 225 | 150 | 15 | 15 | 15 | 15 | 60 |
| 225 | 225 | 15 | 15 | 15 | 15 | 60 |
| 225 | 300 | 15 | 15 | 15 | 15 | 60 |
| 300 | 150 | 15 | 15 | 15 | 15 | 60 |
| 300 | 225 | 15 | 15 | 15 | 15 | 60 |
| 300 | 300 | 15 | 15 | 15 | 15 | 60 |
| Total | | 135 | 135 | 135 | 135 | 540 |
Image acquisition
Basic information of image dataset
| Image acquisition method | Panicle shape | Yangliangyou No. 6 (Indica) | Fengyou xiangzhan (Indica) | Wuyunjing No. 27 (Japonica) | Nanjing No. 9108 (Japonica) | Total |
| --- | --- | --- | --- | --- | --- | --- |
| **Original image data** | | | | | | |
| Camera | A | 45 | 45 | 45 | 45 | 180 |
| Camera | B | 45 | 45 | 45 | 45 | 180 |
| Camera | C | 45 | 45 | 45 | 45 | 180 |
| Scanner | A | 45 | 45 | 45 | 45 | 180 |
| Scanner | B | 45 | 45 | 45 | 45 | 180 |
| Scanner | C | 45 | 45 | 45 | 45 | 180 |
| **Linear regression training data** | | | | | | |
| Camera | B | 40 | 40 | 40 | 40 | 160 |
| Camera | C | 45 | 45 | 45 | 45 | 180 |
| Scanner | B | 40 | 40 | 40 | 40 | 160 |
| Scanner | C | 45 | 45 | 45 | 45 | 180 |
| **Linear regression validation data** | | | | | | |
| Camera | B | 25 | 25 | 25 | 25 | 100 |
| Camera | C | 25 | 25 | 25 | 25 | 100 |
| Scanner | B | 25 | 25 | 25 | 25 | 100 |
| Scanner | C | 25 | 25 | 25 | 25 | 100 |
| **Deep learning training and validation data** | | | | | | |
| Camera | B | 5 | 5 | 5 | 5 | 20 |
| Camera | C | 5 | 5 | 5 | 5 | 20 |
| Scanner | B | 10 | 10 | 10 | 10 | 40 |
| Scanner | C | 10 | 10 | 10 | 10 | 40 |
| **Deep learning testing data** | | | | | | |
| Camera | B | 25 | 25 | 25 | 25 | 100 |
| Camera | C | 25 | 25 | 25 | 25 | 100 |
| Scanner | B | 25 | 25 | 25 | 25 | 100 |
| Scanner | C | 25 | 25 | 25 | 25 | 100 |
Image pre-processing
Algorithm for the calculation of grain number per panicle
For non-touching grains, the general counting method is to count the connected regions in a binary image. When grains touch, several methods can be used to split the clustered kernels, including dilation and erosion operations, the watershed method, corner detection, and feature matching. However, each of these methods had limitations in our study, which are discussed in detail later in this paper. We therefore designed the following two methods.
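For illustration, counting connected regions in a binary image can be sketched as a flood fill over a 0/1 mask. This is a minimal pure-Python sketch, not the paper's implementation; the mask, the function name, and the choice of 4-connectivity are our own assumptions. It also shows why touching grains are the hard case: grains that touch merge into one region.

```python
from collections import deque

def count_connected_regions(mask):
    """Count 4-connected foreground regions in a binary mask.

    Each connected region of 1s is treated as one grain, which is
    exactly why touching grains are undercounted by this approach.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                regions += 1
                queue = deque([(r, c)])  # flood-fill this region
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return regions

# Two separate blobs count as 2; if they touched they would merge into 1.
mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]
print(count_connected_regions(mask))  # → 2
```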
Linear regression algorithm
Since the actual measurement is an integer and the model-predicted number is not always an integer, we rounded predicted numbers to integers for comparison. Table 2 lists the images used for regression model training and validation. To increase the sample size, the 45 shape A panicles were processed into 20 shape B panicles and 30 shape C panicles and imaged again. The constructed models were evaluated using R^{2} and RMSE.
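The fit-and-evaluate loop described above (ordinary least squares on three image features, rounding predictions to whole grains, then scoring with R^{2} and RMSE) might be sketched as follows. The feature values and grain counts below are invented for illustration only; they are not data from this study.

```python
import numpy as np

# Hypothetical training data: each row holds (CD', Sk', Co') for one
# panicle image, paired with the manually counted grain number (GN).
X = np.array([[0.20, 0.9, 1.1],
              [0.35, 1.0, 1.0],
              [0.50, 1.1, 0.9],
              [0.65, 1.0, 1.2],
              [0.80, 0.8, 1.0]])
y = np.array([75.0, 130.0, 185.0, 240.0, 295.0])

# Fit GN = a*CD' + b*Sk' + c*Co' + d by ordinary least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = np.rint(A @ coef)  # round predictions to whole grains
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
r2 = 1.0 - np.sum((pred - y) ** 2) / np.sum((y - y.mean()) ** 2)
print(rmse, r2)
```

On real data the model would of course be fit on the training images in Table 2 and scored on the held-out validation images.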
Deep learning algorithm
The hardware, software, and hyperparameters configurations for the deep learning model
| Project | Content |
| --- | --- |
| CPU | Intel Xeon E5-2682 v4 |
| RAM | 16 GB |
| GPU | NVIDIA Tesla P4 |
| Operating system | Ubuntu 16.04 LTS |
| CUDA | CUDA 8.0 with cuDNN v6 |
| Data processing | Python 2.7, OpenCV, LabelImg, etc. |
| Deep learning framework | TensorFlow |
| Deep learning algorithm | Faster R-CNN with ResNet-101 |
| Number of classes | 2 (Japonica rice grain and Indica rice grain) |
| Batch size | 1 |
| Initial learning rate | 0.0003 |
| Iteration steps | 30,000 |
| Minimum confidence | 0.9 |
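Given the minimum confidence of 0.9 in the configuration table above, per-image grain counting reduces to thresholding the detector's per-box scores and counting the survivors. The sketch below is hypothetical: the function name and score values are our own, standing in for the scores an object detector such as the Faster R-CNN model would return.

```python
def count_grains(detection_scores, min_confidence=0.9):
    """Count detected grains, keeping only boxes at or above the
    confidence threshold (0.9 in this study's configuration)."""
    return sum(1 for s in detection_scores if s >= min_confidence)

# Hypothetical scores for one panicle image: five confident grain
# boxes, plus two low-confidence boxes that get filtered out.
scores = [0.99, 0.97, 0.95, 0.93, 0.91, 0.62, 0.31]
print(count_grains(scores))  # → 5
```

Raising the threshold trades false detections for missed detections; the 0.9 setting reflects that the study reports a false detection rate of zero.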
Results and analysis
Comparison of manual counting on images
The accuracy of image manual counting for different groups
| Image acquisition method | Rice subspecies | Panicle shape | Number of images measured | Accuracy (%) |
| --- | --- | --- | --- | --- |
| Scanner | Japonica rice | A | 100 | 75.83 |
| Scanner | Japonica rice | B | 100 | 98.33 |
| Scanner | Japonica rice | C | 100 | 98.39 |
| Scanner | Indica rice | A | 100 | 68.46 |
| Scanner | Indica rice | B | 100 | 95.34 |
| Scanner | Indica rice | C | 100 | 97.51 |
| Camera | Japonica rice | A | 100 | 68.08 |
| Camera | Japonica rice | B | 100 | 93.35 |
| Camera | Japonica rice | C | 100 | 95.26 |
| Camera | Indica rice | A | 100 | 66.31 |
| Camera | Indica rice | B | 100 | 89.01 |
| Camera | Indica rice | C | 100 | 93.93 |
Linear model analysis
Training and validation of optimal multiple linear regression model
| Rice subspecies | Combination method | Model | Training R^{2} | Training RMSE | Validation R^{2} | Validation RMSE |
| --- | --- | --- | --- | --- | --- | --- |
| Indica | Scanner + Shape B | GN = 364.93 × CDʹ + 0.70 × Skʹ − 3.90 × Coʹ + 2.801 | 0.990 | 4.6732 | 0.980 | 6.3254 |
| Indica | Scanner + Shape C | GN = 363.72 × CDʹ + 10.50 × Skʹ − 13.48 × Coʹ + 5.348 | 0.990 | 4.6345 | 0.980 | 6.3574 |
| Indica | Camera + Shape B | GN = 396.82 × CDʹ − 21.70 × Skʹ − 7.32 × Coʹ + 11.823 | 0.974 | 7.6989 | 0.965 | 8.3016 |
| Indica | Camera + Shape C | GN = 395.60 × CDʹ − 11.90 × Skʹ − 16.90 × Coʹ + 14.369 | 0.975 | 7.5595 | 0.964 | 8.3956 |
| Japonica | Scanner + Shape B | GN = 481.49 × CDʹ + 178.22 × Skʹ − 164.85 × Coʹ − 18.485 | 0.979 | 6.0957 | 0.975 | 6.4714 |
| Japonica | Scanner + Shape C | GN = 482.28 × CDʹ + 178.63 × Skʹ − 164.00 × Coʹ − 17.031 | 0.980 | 5.9838 | 0.976 | 6.4587 |
| Japonica | Camera + Shape B | GN = 500.64 × CDʹ + 188.62 × Skʹ − 205.06 × Coʹ − 5.477 | 0.954 | 9.1121 | 0.953 | 8.5389 |
| Japonica | Camera + Shape C | GN = 501.43 × CDʹ + 189.03 × Skʹ − 204.22 × Coʹ − 4.023 | 0.954 | 9.0961 | 0.953 | 8.5910 |
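Applying a trained model from the table above is a single weighted sum plus rounding. As an example, the Indica Scanner + Shape B equation can be evaluated as below; the coefficients come from the table, but the feature values are invented for illustration, since CDʹ, Skʹ, and Coʹ would come from image processing.

```python
def predict_gn_indica_scanner_b(cd, sk, co):
    """Indica, Scanner + Shape B regression equation from the table,
    rounded because grain counts are reported as integers."""
    gn = 364.93 * cd + 0.70 * sk - 3.90 * co + 2.801
    return round(gn)

# Hypothetical feature values for one panicle image.
print(predict_gn_indica_scanner_b(0.3, 1.0, 1.0))  # → 109
```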
Deep learning model
Grain counting accuracy of the deep learning model
| Image acquisition device | Panicle shape | Miss detection rate (%) | False detection rate (%) | Accuracy (%) |
| --- | --- | --- | --- | --- |
| Scanner | Shape B | 0.79 | 0 | 99.21 |
| Scanner | Shape C | 0.62 | 0 | 99.38 |
| Camera | Shape B | 1.40 | 0 | 98.60 |
| Camera | Shape C | 1.02 | 0 | 98.98 |
Discussion
The effect of stems on the counting accuracy of different models
| Model | Rice subspecies | Stems | Accuracy (%) |
| --- | --- | --- | --- |
| Linear regression model | Indica rice | Yes | 96.95 |
| Linear regression model | Indica rice | No | 97.84 |
| Linear regression model | Japonica rice | Yes | 95.48 |
| Linear regression model | Japonica rice | No | 96.43 |
| Linear regression model | Indica + Japonica rice | Yes | 84.86 |
| Linear regression model | Indica + Japonica rice | No | 87.56 |
| Deep learning model | Indica rice | Yes | 98.84 |
| Deep learning model | Indica rice | No | 99.06 |
| Deep learning model | Japonica rice | Yes | 99.36 |
| Deep learning model | Japonica rice | No | 99.52 |
| Deep learning model | Indica + Japonica rice | Yes | 99.16 |
| Deep learning model | Indica + Japonica rice | No | 99.38 |
Time needed for each task
| Image acquisition device | Panicle shape | Time used |
| --- | --- | --- |
| Scanner | Shape A | 2 min 30 s |
| Scanner | Shape B | 5 min 40 s |
| Scanner | Shape C | 4 min 46 s |
| Camera | Shape A | 40 s |
| Camera | Shape B | 3 min 20 s |
| Camera | Shape C | 2 min 26 s |
| Counting method | Time used |
| --- | --- |
| Manual counting | 21 min 40 s |
| Linear regression model | 3 s |
| Deep learning model | 2 min 20 s |
The time needed to acquire images with the digital camera was about half of that with the scanner, and Shape C required less time than Shape B. For Shape B, the intertwined branches must be separated carefully, because rough handling causes individual grains to fall off and affects subsequent processing; Shape C only requires cutting the branches from the stem with a knife, so Shape B takes more time than Shape C. When running the linear regression model, most of the time was spent on image processing and feature extraction, whereas for the deep learning model most of the time was spent loading parameters. The linear regression model required significantly less time than the deep learning model because the latter must load millions of parameters and perform a large amount of computation. The Scanner + Shape B + deep learning pipeline takes 8 min, only about one-third of the manual counting time, and this comparison does not account for the mental effort that manual counting demands. Using multiple high-performance graphics processing units (GPUs) could significantly accelerate the computation [31]. In addition, model compression or a simpler deep learning architecture could also reduce the model running time [32].
Conclusion
In summary, we established two models to count the grain number per panicle, a linear regression model and a deep learning model, with counting accuracies greater than 96% and 99%, respectively. However, the deep learning model required more time than the linear regression model. If time cost is a priority, the linear regression model is recommended for counting the rice grain number per panicle; otherwise, the deep learning model is best for maximizing accuracy. We believe our high-throughput, rapid method for counting the number of rice grains per panicle is a useful tool for rice phenomics research.
Notes
Acknowledgements
We thank LetPub (http://www.letpub.com) for linguistic assistance during manuscript preparation.
Authors’ contributions
WW and WG performed all experiments and analyzed the data. TL, PZ and TY performed some experiments and analyzed the data. CL and XZ contributed the conceptual design and provided supervision. WW, CS and SL wrote the main manuscript text and prepared all figures. All authors were involved in preparing and revising the manuscript. All authors read and approved the final manuscript.
Funding
This research was mainly supported by the National Key Research and Development Program of China (2018YFD0300802, 2018YFD0300805), the National Natural Science Foundation of China (31872852, 31701355, 31671615), the Independent Innovation Project of Jiangsu Province (CX(18)1002), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX18_2371) and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
Ethics approval and consent to participate
Not applicable.
Consent for publication
All authors have seen the manuscript and approved its submission and publication.
Competing interests
The authors declare that they have no competing interests.
References
1. Dhondt S, Wuyts N, Inze D. Cell to whole-plant phenotyping: the best is yet to come. Trends Plant Sci. 2013;18(8):433–44.
2. Liu T, Wu W, Chen W, Sun CM, Chen C, Wang R, Zhu XK, Guo WS. A shadow-based method to calculate the percentage of filled rice grains. Biosyst Eng. 2016;150:79–88.
3. Garcia GA, Serrago RA, Dreccer MF, Miralles DJ. Post-anthesis warm nights reduce grain weight in field-grown wheat and barley. Field Crop Res. 2016;195:50–9.
4. Li JM, Thomson M, McCouch SR. Fine mapping of a grain-weight quantitative trait locus in the pericentromeric region of rice chromosome 3. Genetics. 2004;168(4):2187–95.
5. Slafer GA, Savin R, Sadras VO. Coarse and fine regulation of wheat yield components in response to genotype and environment. Field Crop Res. 2014;157:71–83.
6. Ferrante A, Cartelle J, Savin R, Slafer GA. Yield determination, interplay between major components and yield stability in a traditional and a contemporary wheat across a wide range of environments. Field Crop Res. 2017;203:114–27.
7. Duan LF, Huang CL, Chen GX, Xiong LZ, Liu Q, Yang WN. Determination of rice panicle numbers during heading by multi-angle imaging. Crop J. 2015;3(3):211–9.
8. Yang WN, Guo ZL, Huang CL, Duan LF, Chen GX, Jiang N, Fang W, Feng H, Xie WB, Lian XM, Wang GW, Luo QM, Zhang QF, Liu Q, Xiong LZ. Combining high-throughput phenotyping and genome-wide association studies to reveal natural genetic variation in rice. Nat Commun. 2014;5:5087.
9. Yang WN, Guo ZL, Huang CL, Wang K, Jiang N, Feng H, Chen GX, Liu Q, Xiong LZ. Genome-wide association study of rice (Oryza sativa L.) leaf traits with a high-throughput leaf scorer. J Exp Bot. 2015;66(18):5605–15.
10. Feng H, Jiang N, Huang CL, Fang W, Yang WN, Chen GX, Xiong LZ, Liu Q. A hyperspectral imaging system for an accurate prediction of the above-ground biomass of individual rice plants. Rev Sci Instrum. 2013;84(9):095107.
11. Tavakoli H, Gebbers R. Assessing nitrogen and water status of winter wheat using a digital camera. Comput Electron Agric. 2019;157:558–67.
12. Zhou CY, Le J, Hua DX, He TY, Mao JD. Imaging analysis of chlorophyll fluorescence induction for monitoring plant water and nitrogen treatments. Measurement. 2019;136:478–86.
13. Huang CL, Yang WN, Duan LF, Jiang N, Chen GX, Xiong LZ, Liu Q. Rice panicle length measuring system based on dual-camera imaging. Comput Electron Agric. 2013;98:158–65.
14. AL-Tam F, Adam H, dos Anjos A, Lorieux M, Larmande P, Ghesquiere A, Jouannic S, Shahbazkia HR. P-TRAP: a panicle trait phenotyping tool. BMC Plant Biol. 2013;13:122.
15. Tanabata T, Shibaya T, Hori K, Ebana K, Yano M. SmartGrain: high-throughput phenotyping software for measuring seed shape through image analysis. Plant Physiol. 2012;160(4):1871–80.
16. Whan AP, Smith AB, Cavanagh CR, Ral JPF, Shaw LM, Howitt CA, Bischof L. GrainScan: a low cost, fast method for grain size and colour measurements. Plant Methods. 2014;10:23.
17. Houle D, Govindaraju DR, Omholt S. Phenomics: the next challenge. Nat Rev Genet. 2010;11(12):855–66.
18. Crowell S, Falcao AX, Shah A, Wilson Z, Greenberg AJ, McCouch SR. High-resolution inflorescence phenotyping using a novel image-analysis pipeline, PANorama. Plant Physiol. 2014;165(2):479–95.
19. Bleau A, Leon LJ. Watershed-based segmentation and region merging. Comput Vis Image Underst. 2000;77(3):317–70.
20. Lin P, Chen YM, He Y, Hu GW. A novel matching algorithm for splitting touching rice kernels based on contour curvature analysis. Comput Electron Agric. 2014;109:124–33.
21. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern. 1979;9(1):62–6.
22. Liu T, Yang TL, Li CY, Li R, Wu W, Zhong XC, Sun CM, Guo WS. A method to calculate the number of wheat seedlings in the 1st to the 3rd leaf growth stages. Plant Methods. 2018;14:101.
23. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: IEEE conference on computer vision and pattern recognition; 2016.
24. Ren SQ, He KM, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell. 2017;39(6):1137–49.
25. Shatadal P, Jayas DS, Bulley NR. Digital image analysis for software separation and classification of touching grains. I. Disconnect algorithm. Trans ASAE. 1995;38(2):645–9.
26. Osma-Ruiz V, Godino-Llorente JI, Saenz-Lechon N, Gomez-Vilda P. An improved watershed algorithm based on efficient computation of shortest paths. Pattern Recogn. 2007;40(3):1078–90.
27. Liu T, Chen W, Wang YF, Wu W, Sun CM, Ding JF, Guo WS. Rice and wheat grain counting method and software development based on Android system. Comput Electron Agric. 2017;141:302–9.
28. Yao Y, Wu W, Yang TL, Liu T, Chen W, Chen C, Li R, Zhou T, Sun CM, Zhou Y, Li XL. Head rice rate measurement based on concave point matching. Sci Rep. 2017;7:41353.
29. Charytanowicz M, Kulczycki P, Kowalski PA, Lukasik S, Czabak-Garbacz R. An evaluation of utilizing geometric features for wheat grain classification using X-ray images. Comput Electron Agric. 2018;144:260–8.
30. Pierzchala M, Giguere P, Astrup R. Mapping forests using an unmanned ground vehicle with 3D LiDAR and graph-SLAM. Comput Electron Agric. 2018;145:217–25.
31. Qin CZ, Zhan LJ. Parallelizing flow-accumulation calculations on graphics processing units: from iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm. Comput Geosci. 2012;43:7–16.
32. Cheng Y, Wang D, Zhou P, Zhang T. Model compression and acceleration for deep neural networks: the principles, progress, and challenges. IEEE Signal Process Mag. 2018;35(1):126–36.
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.