Background

With a growing global population and increasingly challenging environmental conditions, it is critical for plant scientists to think ‘outside the box’ to identify and better understand novel plant traits that could be used to improve crop yield. This could be through the adoption of new technologies, through investigation of phenotypic traits throughout plant development or through improvement of crops that have been bred less intensively in the past. Recent advances in image capture and computing technologies have allowed us to accurately phenotype the 3D architecture of crop plants such as wheat, barley and rice. However, structurally complex crop species, such as chickpea, remain difficult to reconstruct due to their small leaves, high levels of branching and indeterminate nature. Here we assess the potential of photogrammetry to virtually reconstruct the 3D structure of these “complex” plants and to measure their canopy architectural traits across development.

One of the main problems with conventional, direct measurements of plant structural properties is that they are laborious and often destructive. This is particularly evident when working with larger plants and plant species with many small leaves. Imaging intact plants can bypass the need to destructively harvest plants, allowing for the measurement of structural traits across plant development. Two-dimensional imaging techniques have long been used for the quantitative measurement of plant structural traits, including plant surface area, number of leaves, leaf shape and leaf colour (a database of such approaches is presented in [1], and is continually updated). Using new software tools, such as PlantCV [2], quantitative traits can even be extracted automatically from images, reducing user error and analysis time. However, most 2D imaging techniques were developed to only work for small plants with a simple structure, such as the two-dimensional rosettes of the model plant Arabidopsis thaliana [3], or to only extract relatively basic information, such as plant height [4]. For more complex or larger plants, 2D imaging techniques can result in inaccuracies due to overlapping features in captured images (i.e. occlusion of stems by leaves, leaves by leaves, etc.). 3D imaging addresses this issue: because the plant is captured from many viewpoints, features hidden in any single view can still be recovered, allowing the full detail of a plant’s structure to be measured.

There are several methods available to phenotype the 3D structure of plants (for detailed reviews see [5, 6]). Laser scanning (LiDAR) can provide very detailed reconstructions of plants, but there is often a trade-off between the cost of instrumentation and the detail of the resulting 3D models. Commercial instruments can cost upwards of US $10,000 but can generate detailed models of plants with > 2 million points. Newly developed DIY instruments can cost as little as US $400 but only generate models with approx. 40,000 points [7]. LiDAR can also be inflexible, both in terms of sample size (e.g. one system may provide good resolution for small plants but not for large plants, and vice versa) and downstream data analyses (e.g. analysis may be limited to certain commercial programs). Photogrammetry, on the other hand, can be highly cost-effective and versatile. Photographs of the plant are taken from multiple angles using a standard camera and subsequent computer analyses are used to reconstruct a scaled 3D model. This 3D reconstruction can then be used for trait measurements, such as plant dimensions, plant surface area and leaf area index, and for modelling simulations, such as ray tracing of the canopy light environment [8]. Data quality can be comparable to more expensive LiDAR systems and the approach can be used for subjects of wide-ranging sizes. Generally speaking, the more photos of the subject, the better the reconstruction will be with regard to precision and accuracy [9], albeit with longer capture and processing times. Many photogrammetry software packages are open-source (including Colmap [10], Meshroom [11] and VisualSFM [12]), meaning that they are freely available and can be modified at the code level to give users a highly customised and powerful experience. Photogrammetry has been used effectively for the 3D reconstruction of a number of monocot crop species, including wheat [13] and rice [14], and for species with larger leaves, such as sunflower [15] and soybean [16]. However, few studies have assessed whether it could be used to accurately reconstruct 3D models of plant species with many small leaves, such as chickpea. Here we demonstrate that several important changes to existing photogrammetric reconstruction methods allow species with small leaves and highly branching architecture to be reconstructed. These changes ensure that smaller plant elements are captured accurately during imaging and reconstruction. Increasing the number of capture angles around the plant reduces the chance of small leaves and branches being occluded from view. Capturing higher-quality, higher-resolution images further assists in the inclusion of small plant features during reconstruction. Refinements to the photogrammetry workflow that increase the density of 3D point clouds, such as preventing downsizing of images during feature matching, increasing the number of pixel colours used to compute the photometric consistency score and reducing the photometric consistency threshold, also improve the detail and accuracy of the resultant 3D reconstructions [17].

Chickpea (Cicer arietinum L.) has long been an important annual crop for resource-poor farmers across the globe, and demand is growing elsewhere due to changing diets and a push for protein-rich alternatives to meat [18]. Chickpea is often considered more sustainable than non-legume grain crops, such as wheat or rice, due to its ability to form symbiotic relationships with nitrogen-fixing bacteria, reducing reliance on nitrogen fertiliser [19]. It can also be used effectively in rotation with cereal crops to break the life cycle of diseases and improve soil health [20]. Chickpea can therefore be a lucrative option for many growers, particularly considering the economic benefits, with returns to Australian growers of roughly AU $300 t−1 compared to around AU $100 t−1 for wheat between 2012 and 2014 [21]. Yet, whilst chickpea has an estimated yield potential of 6 t ha−1 under optimal growing conditions, annual productivity of chickpea worldwide currently sits at less than 1 t ha−1 [18]. This yield gap is the result of a lack of genetic diversity in breeding programs that has left cultivars susceptible to biotic and abiotic stresses. Phenotyping for natural variation in traits of interest across diverse germplasm could be used to minimise this yield gap and to improve grain yield potential. Chickpea is an indeterminate crop in which vegetative growth continues after flowering begins; this poses management challenges for growers [22] and can result in yield losses. Genes for determinacy have been found in other species [23,24,25] and could be explored in chickpea by phenotyping diverse populations across their development. Chickpea also has a highly branching structure, requiring more resources to be allocated to structural tissue, which may reduce remobilisation of nutrients to pods during reproductive growth [26]. Modification of plant architecture through targeted plant breeding has led to huge successes in other crop species, most notably the introduction of dwarfing genes into elite varieties of wheat, which increased seed yields, reduced yield losses due to lodging and was integral to the green revolution of the 1960s and 1970s [27]. By assessing canopy architecture traits across chickpea genotypes, we can improve our understanding of the underlying genetics controlling these traits and of how they influence plant productivity, and then use this information to make informed breeding decisions.

The main aim of this work was to develop and validate a low-cost, open-source photogrammetric method for detailed 3D reconstruction of chickpea plants. The imaging setup consisted of three DSLR cameras, LED lighting and a motorised turntable, controlled by a user-programmable Arduino microcontroller (Fig. 1). 3D reconstruction and analyses of 3D models were performed using open-source software on a Windows PC (Fig. 2). The system was tested with six chickpea genotypes (three commercial cultivars and three pre-breeding lines) and measurements were validated against conventional, destructive measurement techniques. We also assessed whether differences in plant architecture or growth rates could be observed across chickpea genotypes.

Fig. 1

Diagram showing the 3D scanner set-up in the laboratory. The coloured circles highlight the three cameras angled to face the plant. Note that no cables are shown in the diagram for the purpose of clarity. Exact spacing of the set-up is shown in Additional file 11: Figure S1

Fig. 2

Visual summary of the open-source data processing pipeline. (1) Captured images are used to generate a sparse point cloud, (2) which is then used to generate a dense point cloud. (3) Dense point clouds are manually cleaned and scaled, and then used to generate either (4) a convex hull or (5) a meshed model. (6) The scaled point cloud and the meshed model can then be used for further analyses. Note that all but step (3) can be automated in Windows using batch files or, in the case of (6), using R scripts

Results

Reconstruction validation

The 3D reconstructions provided very reliable estimates of plant height and total surface area (Fig. 3), both with an R2 > 0.99 and a Spearman rank correlation coefficient (ρ) > 0.99 when compared to validation measurements. Height was slightly underestimated, with measurements from 3D reconstructions approximately 4% lower than validation measurements, yet there was little variation in this relationship (R2 = 0.999, RMSE = 5.45 mm, MAPE = 4.4%, ρ = 0.992, p < 0.001) and it was consistent across all studied genotypes (p > 0.05; Additional file 17: Table S1). Plant surface area was estimated to within 0.5% of validation measurements on average (R2 = 0.990, RMSE = 26.85 cm2, MAPE = 9.1%, ρ = 0.992, p < 0.001), although individual estimates varied more widely and the validation relationship differed slightly across genotypes (p < 0.05; Additional file 17: Table S1). Specifically, the surface area of the breeding lines grown outdoors was slightly over-estimated when compared to ground truthing measurements. This was likely caused by smaller, more tightly curled leaves that were not correctly assessed by ground truthing measurements, which assume all leaves can be laid flat on a two-dimensional plane (for an example see Additional file 11: Figure S1). The MAPE in surface area estimates for commercial cultivars (excluding breeding lines) was 7.2%, whilst for the breeding lines the MAPE was 12.3%.
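For reference, the error metrics reported throughout are the standard definitions, with ŷi the estimate from the 3D reconstruction and yi the corresponding validation measurement for plant i of n:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_{i} - y_{i}\right)^{2}}, \qquad \mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{\hat{y}_{i} - y_{i}}{y_{i}}\right|.$$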

Fig. 3

Validation of 3D scanner estimates of phenotypic traits against conventional measurements. a Plant height and b total plant surface area. The black lines are linear regressions; the grey shaded regions show the standard error. Each point represents an individual plant. Colours represent the six different genotypes used for validation: Genesis Kalkee (yellow), PBA Hattrick (light blue), PBA Slasher (pink), ICC5878 (dark blue), SonSla (orange) and PUSA76 (green). See Additional file 17: Table S1 for details of genotype-specific regression models

Representative growth data

The 3D scanner allowed us to accurately assess a variety of canopy traits as the plants grew (Fig. 4). Whilst there was some variation across individual plants and chickpea genotypes, general trends in growth were clear and easily recovered from 3D reconstructions. Height increased rapidly to a median of 101 mm in the first week after germination and then increased more gradually to 191 mm 5 weeks post-germination (Fig. 4a). Projected plant area, total surface area and canopy volume all showed characteristic exponential growth curves (Fig. 4b–d). Projected plant area increased from a median of 17.0 cm2 1 week after germination to a median of 220.9 cm2 5 weeks post-germination, total surface area rose from 37.8 to 415.9 cm2 in the same period, and canopy volume from 233 to 14,575 cm3. Plant area index did not vary greatly during the growth of the plants, with a median of 1.91 m2 m−2 1 week after germination and a median of 1.87 m2 m−2 5 weeks after germination (Fig. 4e). Week-to-week RGRs were greatest between weeks 1 and 2, with leaf area increasing by an average of 84.1% ± 4.4% during this period, dropping to 56.9% ± 4.4%, 51.4% ± 4.0% and 64.1% ± 7.0% between weeks 2 and 3, weeks 3 and 4, and weeks 4 and 5, respectively (Fig. 4f) (for reference, corresponding daily RGRs were 8.1%, 7.3% and 9.2%, respectively). Whilst there was some variation in these growth-related traits across individual plants, we found no statistically significant differences across genotypes (p > 0.05). Overall variation increased as the plants grew, with some apparent divergence across genotypes in the latter weeks of the experimental period. For example, standard error represented only 10.6% of the mean total surface area in week 1, whilst it represented 15.5% in week 5, with similar trends for the other traits.

Fig. 4

Representative growth data from the 3D scanner. a Plant height, b projected plant area, c total plant surface area, d convex hull canopy volume, e plant area index and f week-to-week area-based relative growth rate (RGR) across 5 weeks. In panels a–e, solid lines and shaded regions represent genotype means ± SE (n = 5), whilst dashed lines represent individual plants. The main graphs of b–d are presented on a logarithmic scale, with non-log data shown in the inset graphs. In f, the violins represent the range of RGR for each genotype each week; points are individual plants. Colours represent the three commercial genotypes Genesis Kalkee (yellow), PBA Hattrick (blue) and PBA Slasher (pink)

Vertical distribution of plant surface area

Further analyses in R enabled us to retrieve detailed data on the distribution of plant surface area as a function of plant height in an automated and repeatable fashion. The visual summaries presented in each panel of Fig. 5 are output directly from R. These visual summaries provide a fast, semi-quantitative method of assessing how individual plants partition surface area (and, by proxy, their biomass). For example, in the representative data shown in Fig. 5, the Genesis Kalkee, PBA Hattrick, ICC5878 and PUSA76 plants (Fig. 5a, b, d, f respectively) assign most of their plant area to the lower canopy; the PBA Slasher plant (Fig. 5c) has a relatively sparse canopy and the SonSla plant (Fig. 5e), albeit much smaller than the others, appears to have two discrete canopy layers.

Fig. 5

Representative plant surface area distributions for selected individual plants of each genotype. In each panel is (left) a graphical summary of the leaf area per mm of height, where each bar represents the sum area of all mesh triangles with centres lying inside each 1 mm z-axis cross section, and (right) a 2D visual representation of the meshed model of the plant. Note that the scales of both the graph and the models differ across panels a–f due to variation in the size of individual plants. All plants shown were imaged 5 weeks post-germination

To make statistical comparisons of relative area distribution data across genotypes, individual plant data was normalised by plant height and total surface area (Fig. 6). Genotypes differed significantly in their relative vertical distribution of leaf area (p < 0.001), with particularly clear differences found between the breeding lines and commercial cultivars. The commercial cultivars were much denser in the lower half of the canopy, whilst the breeding lines, and in particular line SonSla, were denser in the mid- to upper-canopy.

Fig. 6

Comparisons of leaf area distributions can be made across chickpea genotypes by normalising data. All plants presented were imaged at 5 weeks post-germination. a Normalised cumulative surface area from the base to the top of the plant plotted against normalised height. Thick lines represent genotype means (n = 10), thin lines represent individual plants. b Surface area versus normalised height, with each bar representing the sum area of all mesh triangles with centres lying inside a 1% z-axis cross section. Values shown by bars are genotype means (n = 10), whilst solid and dashed lines represent the genotype mean and mean ± SE, respectively, fitted with a LOESS function in R

Discussion

We have successfully built and validated a low-cost, open-source 3D scanner and data processing pipeline to assess the architecture and growth trends of individual chickpea plants. Chickpea has leaves that are considerably smaller than those of most species studied previously using photogrammetry. In our initial attempts to use the 3D reconstruction workflow developed for wheat by Burgess et al. [13], we found that there was not enough detail in the 3D reconstructions for accurate measurement of structural traits (Additional file 12: Figure S2). However, by modifying key parameters in the reconstruction workflow, we were able to produce reconstructions that provided consistently high-quality data. Validations of height and area measurements from reconstructions against ground truthing measurements highlight the reliability of the system (height, R2 > 0.99, MAPE = 4.4%; area, R2 = 0.99, MAPE = 9.1%). The accuracy of our leaf area estimates is comparable to photogrammetric estimates reported in the literature for larger-leaved plant species (Brassica napus: R2 = 0.98, MAPE = 3.7% [28]; maize, sunflower and sugar beet: R2 = 0.99, MAPE = 3.9% [29]; selected houseplant species: R2 = 0.99, MAPE = 4.1% [37]; tomato: R2 = 0.99, MAPE = 2.3% [30]). We noted a difference in validation accuracy for plant surface area across genotypes; however, we attributed this to the 2D ground truthing measurements underestimating the area of the curled-up leaves of the outdoor-grown breeding lines, rather than to an overestimation of surface area from the 3D reconstructions. This underestimation would also explain the greater overall MAPE for area estimates in our study versus other previously studied crops. A similar discrepancy was reported by Bernotas et al. [31] for Arabidopsis thaliana, where top-down 2D images consistently underestimated rosette area relative to 3D models that accounted for leaf curvature. In this sense, our 3D reconstructions provide a better estimate of plant surface area than conventional, labour-intensive and destructive measurement techniques for chickpea plants.

The results we present here show that photogrammetry could be used as an effective tool to assess diversity in plant architecture and growth-related traits across chickpea lines and help to identify novel plant breeding targets. Although we did not find statistically significant differences in architecture traits or growth trends across the three commercial genotypes included in our study, we feel that screening more diverse chickpea lines and continuing to monitor growth for a longer period of time would help to elucidate trends across genotypes. The narrow genetic base of chickpea has hindered improvements in breeding programs in recent years [18]. Together with next generation sequencing technologies, the development of new breeding lines selected specifically for the investigation of traits of interest could help to address this [32]. Even more diversity might be found if we were to investigate traits in wild relatives of cultivated chickpea [33]. As the main aim of this study was to evaluate whether photogrammetry could be used to accurately reconstruct chickpea plants, we only monitored the growth of the plants for 5 weeks post-germination. We did notice there was more variation in architectural traits, both across and within genotypes, as the plants grew larger and future work should seek to assess these traits to plant maturity. The ability to comprehensively assess growth rates of plants to maturity could provide an opportunity to screen for highly desirable developmental traits, such as determinacy.

Whilst we have found photogrammetry capable of producing highly accurate reconstructions of individual chickpea plants, it remains labour- and time-intensive and may act as a bottleneck for wide-scale phenotypic screening of whole populations of plants. To overcome this, future improvements could focus on further automation and acceleration of image capture and reconstruction. Lifting plants on and off the turntable is currently the major limitation to automation of image capture in our method; however, this could be overcome by the adoption of photogrammetry in conveyor-belt phenotyping platforms. Speeding up image capture will rely upon reducing the amount of time the plant must remain stationary between rotational imaging steps. This could possibly be achieved using a smoother motor with high-intensity lighting or synchronised flash photography, allowing for continuous capture of the plant without the need for stopping. With respect to the reconstruction process, automation and faster processing times may be achieved through use of high performance computing infrastructure or cloud computing resources, both of which are increasingly available to the research community.

Unlike monocot grain crops such as wheat and barley, chickpea does not have discrete canopy layers, with fruits developing across the whole plant. As such, the optimum light environment for productivity of chickpea canopies will be quite different to that of wheat. The indeterminate nature of chickpea likely shifts this optimum further still, as leaves lower in the canopy will remain photosynthetically active for longer. Modelling could allow us to determine the theoretical optimum light environment; by then running ray tracing simulations with our 3D reconstructions, we could determine how close current chickpea architecture is to this optimum. A number of recent studies have used such approaches to simulate the canopy light environment of other crop species, often coupling this to a photosynthetic model to estimate potential plant productivity (intercropped millet and groundnut [14]; sugarcane [34]; wheat [35]). We hope that our validated method and open dataset will enable future studies to model the light environment of chickpea.

The method we present here provides very reliable estimates of overall plant surface area and other plant traits from whole chickpea plants. We were able to dissect each reconstruction into its component mesh triangles and investigate how plant surface area is distributed relative to plant height. However, what we have so far been unable to do is systematically distinguish between leaf, stem or other plant tissue types in the reconstructions. Segmentation of the models in this way would allow us to retrieve more detailed phenotypic information, including the ability to assess partitioning of biomass across plant tissues, accurately assess other phenotypic traits (such as leaf angles and leaf numbers) and even aid in yield prediction [36]. Automatic segmentation of 3D models has been achieved in other plant species with larger leaves using several approaches. Itakura and Hosoi [37] were able to segment individual leaves of a number of broad-leaved houseplant species using a combined attribute-expanding and simple projection segmentation technique. While they retrieved very accurate estimates of leaf area (R2 = 0.99, MAPE = 4.1%) using this method, we feel that it would be highly unlikely to work with comparatively tiny chickpea leaves. Another approach would be to use a machine learning algorithm to segment different plant tissues based on pre-trained models. Ziamtsov and Navlakha [38] recently developed an open-source software package called P3D for this explicit purpose. In their work, they showed P3D to segment leaves and stems in point clouds of tomato and tobacco with > 97% accuracy. We attempted to use P3D to segment our chickpea models with limited success (data not shown), although this was likely due to the use of the default P3D training datasets developed with larger leaved species. We hope that in the future, with more relevant annotated training datasets, this segmentation technique could also work for chickpea. We provide the full complement of our processed point clouds and meshed models to aid in the development of these training datasets.

The data processing pipeline we have presented here, whilst entirely open-source, does rely on a relatively powerful computer. Specifically, reliable reconstruction of a dense point cloud using PMVS takes a very long time if computer resources (CPU processing power and memory) are limiting. The smaller leaves of chickpea necessitated higher-resolution photogrammetry than was needed for the reconstructions of wheat by Burgess et al. [13]. For our reconstructions on a desktop computer with a 16-core/32-thread 3.5 GHz CPU (Ryzen Threadripper 2950X; AMD Inc., Santa Clara, CA, USA) and 128 GB of 3200 MHz RAM (HyperX Fury; Kingston Technology Corp., Fountain Valley, CA, USA), the generation of a dense point cloud took roughly two hours per plant. We also found that running the reconstruction process from image data stored on a solid-state drive was considerably faster than running from images stored on a traditional hard disk drive. In the past, such computing resources would have been prohibitively expensive for most researchers; however, this is no longer the case, largely thanks to advances driven by computer gaming technology. Multicore computing is now the norm, even in portable laptop computers, and high-capacity memory and fast solid-state storage are now reasonably priced.

On the topic of cost, our imaging set-up cost roughly AU $1300, considerably less than commercially available alternatives that offer similar data quality. Panjvani et al. [7] recently developed a comparably priced (US $400) DIY LiDAR system for 3D scanning of individual plants; however, the quality of its leaf area estimates was considerably lower than ours (R2 < 0.6 against ground truthing data, MAPE = 31.5%). By far the most expensive part of our set-up was the cameras. In the method presented here, we used three DSLR cameras; however, the method can also be adapted to work with just one camera, substantially reducing cost. In our early testing, we used just one camera and rotated the plant three times, with the camera manually repositioned from one mounting point of the camera bracket to the next between each rotation. Whilst this took longer for capturing the image sets, we did not notice any reduction in data quality. It may also be possible to use cheaper cameras. Martinez-Guanter et al. [29] used a regular point-and-shoot camera for the 3D reconstruction of maize, sunflower and sugar beet plants, with an R2 > 0.99 for both height and leaf area estimates compared against ground truthing measurements. Paturkar et al. [39] showed that even a mobile phone can be used for image capture, with 3D reconstructions of chilli plants giving an R2 > 0.98 for estimates of both height and leaf area. These technological advances and reductions in cost mean that photogrammetric techniques are more accessible than ever before to the plant phenotyping community. The increased availability of these technologies will allow for the adoption of data-driven approaches in plant science research where this was not possible before.

Conclusions

Our work has shown that it is possible to use low-cost photogrammetry techniques to accurately phenotype architectural traits and growth habits of individual chickpea plants. We hope that our use of open-source software and hardware will allow others to easily reproduce our method and to develop it further. In particular, there is a need to test whether photogrammetric reconstructions of chickpea could be used for simulations of the canopy light environment and whether they could be automatically segmented into different plant organs using deep learning algorithms. With increasing demand for high-quality pulse protein worldwide, there is a need for higher-yielding, environmentally friendly and stress-tolerant chickpea varieties. The use of novel measurement techniques and associated data analytics should assist us in identifying traits of interest and allow us to explore diversity in these traits so that breeders can make informed breeding decisions.

Methods

Plant material

Three commercial Australian chickpea (Cicer arietinum L.) cultivars (PBA Slasher, PBA Hattrick and Genesis Kalkee) were grown from seed in a controlled glasshouse in August 2019. These genotypes were selected as their architecture is known to differ in the field (Additional file 17: Table S2) and are referred to collectively herein as “commercial cultivars”. Seeds were planted in potting mix containing slow-release fertiliser (Osmocote Premium; Evergreen Garden Care Australia, Bella Vista, NSW, Australia) in 7 L square pots and watered to field capacity once daily. The daytime temperature in the glasshouse was controlled to 25 °C and the nighttime temperature to 18 °C. The relative humidity was set to 60%. Supplemental lighting was provided by LED growth lights whenever ambient light fell below a photosynthetic photon flux density (PPFD) of 400 µmol m−2 s−1, which effectively maintained a PPFD of > 400 µmol m−2 s−1 at plant level at all times during the day. Fifteen plants (five of each genotype) were transferred from the glasshouse to the laboratory for imaging once per week and were returned to the glasshouse after measurement. Additionally, each week 15 plants (five of each genotype) were imaged and then destructively harvested for validation of 3D scanner measurements.

Three chickpea genotypes (ICC5878, SonSla and PUSA76) were selected from local and international sources based on contrasting canopy architecture and growth-related traits (Additional file 17: Table S1) and are referred to collectively herein as “breeding lines”. ICC5878 is from the ICRISAT Chickpea Reference Set (http://www.icrisat.org/what-we-do/crops/ChickPea/Chickpea_Reference1.htm). SonSla is a fixed (F7-derived) line resulting from a cross between the Australian cultivars Sonali and PBA Slasher. PUSA76 is an older accession released by IARI, India, and imported via the Australian Grains Genebank. These plants were grown outside in February–April 2020. Seeds were planted in potting mix containing slow-release fertiliser (Complete Vegetable and Seedling Mix; Australian Native Landscapes Pty Ltd, North Ryde, NSW, Australia) in 7 L square pots and watered every 3 days to field capacity. Twelve plants of each genotype were imaged at 5 weeks post-germination and destructively harvested for validation of 3D scanner measurements.

Semi-automated 3D imaging platform

Plants were imaged using a turntable and camera photogrammetry setup (schematic in Fig. 7). The turntable is constructed from acrylic (Suntuf 1010493; Palram Australia, Derrimut, Victoria). It consists of a circular top plate on which the potted plant is placed and a base which houses a stepper motor (42BYG; Makeblock Co., Ltd, Shenzhen, China). A lazy susan bearing plate (Adoored 0080820; Bunnings Warehouse, Hawthorn East, Victoria, Australia) connects the top plate to the base to provide smoother movement and reduce strain on the motor during imaging. The turntable is connected to and controlled by a user-programmable Arduino microcontroller (Uno R3; Arduino LLC, Somerville, MA, USA) and a number of Arduino breakout boards. The stepper motor is driven via a stepper driver board (DRV8825; Pololu, Las Vegas, NV, USA), which provides precise control of turntable rotation, allowing individual rotational microsteps as small as 0.06°. A copper heatsink (FIT0367; DFRobot, Shanghai, China) and a 5 V fan (ADA3368; Adafruit Industries LLC, New York, NY, USA) are installed on the stepper driver to prevent overheating. The microcontroller triggers the cameras via a relay breakout board (Grove; Seeed Studio, Shenzhen, China) and a custom-made remote shutter cable. An LCD screen with integrated keypad (DFR0009; DFRobot) is used to operate the turntable and provide basic information during the capture process. A 5 V buzzer (AB3462; Jaycar, Sydney, NSW, Australia) audibly alerts the user when a full rotation is complete. Power is provided by a 12 V DC 5 A mains power supply (MP3243; Jaycar). The motor is powered directly with 12 V DC, whilst a step-down voltage regulator (XC4514; Jaycar) provides 5 V DC to the microcontroller and associated boards. A wiring diagram is provided in Fig. 7b. The turntable was set on a white table against a white backdrop (Fig. 1).

Fig. 7

Diagram of the 3D scanner turntable and microcontroller. a Exploded view of the 3D scanner showing components required for assembly. b Wiring diagram for the 3D scanner. Note that in b, 5 V wires are represented by solid lines and 12 V wires by dashed lines

The microcontroller is programmed using the open-source Arduino IDE software (Version 1.8.10; Arduino LLC). The automated capture program was designed such that it will turn the plant a set number of degrees (determined by the user), pause briefly for the plant to stop moving (with a delay programmed by the user) and then trigger the camera(s) to capture an image. This process is repeated until a full rotation of the plant has been captured. The microcontroller also offers the user some control of the turntable via the buttons on the LCD shield (to increase/decrease the number of images captured per rotation, to manually turn the plant clockwise/anticlockwise and to start/pause/stop the automated capture sequence). Further control of the capture sequence can be achieved through modification of the code. The Arduino program is provided in Additional file 1.

Lighting is provided by two large LED floodlights (generic units purchased on eBay) held in a vertical orientation with custom stands made from aluminium extrusion (Fig. 1a). A sheet of white acrylic (Suntuf 1010493; Palram Australia) is placed over the front of each light as a diffuser. Large cooling fans (MEC0381V3; Sunon, Kaohsiung City, Taiwan) are installed on the rear of the lights. In our imaging setup, the lights were set 80 cm away from the plant on either side of the tripod and angled to face the plant directly (Additional file 13: Figure S3).

A tripod (190XPRO; Manfrotto, Cassola, Italy) was used as a base for a custom-made camera mounting bracket (schematic in Additional file 14: Figure S4). The top of the tripod was set level with the table on which the turntable sat. The mounting bracket was constructed from a 110 cm length of square hollow aluminium extrusion with three quick-release mounting points (323 Quick Change Plate Adapter; Manfrotto) for a camera, positioned at 10 cm, 55 cm and 100 cm vertically from the base and angled towards the plant. A steel angle bracket (SAZ15; Carinya, Melbourne, Australia) was bolted to the bottom of the aluminium extrusion for secure attachment to the tripod.

Camera setups

Three digital SLRs (D3300; Nikon Corporation, Tokyo, Japan) were used for imaging, each with a 50 mm prime lens (YN50; Yongnuo, Shenzhen, China). The cameras were affixed to the custom mounting bracket such that images were captured in a horizontal orientation. Exposure was set to 1/100 s, aperture to F8 and ISO to 400. Each camera was manually focussed on the first plant imaged each day and focus remained fixed for the remaining plants. Images were captured in JPEG format at 24.2-megapixel resolution, with saturation boosted in-camera. Each camera was powered by an AC adaptor (EP-5A; Nikon). The cameras were connected via USB cables to a Windows computer running the open-source digiCamControl software (Version 2.1.2; Istvan, 2014) for live offload of captured images into the structured folders required for downstream data processing. Images were also backed up onto SD cards installed in each camera. A total of 120 images was captured of each plant (40 with each camera); initial testing (data not shown) revealed this number to provide the best balance between reconstruction quality and reconstruction processing time.

Semi-automated image processing and 3D reconstruction

Image processing and 3D model reconstruction were conducted using open-source software on a Windows PC (as summarised in Fig. 2). A dense 3D point cloud was first generated from captured images using VisualSFM (Version 0.5.26 CUDA; [12]) and CMVS + PMVS2 [17], using a modified method of Burgess et al. [13]. Processing parameters were adjusted (in the nv.ini configuration file of the VisualSFM working folder; provided in Additional file 2) from the default settings to optimise the reconstruction of chickpea plants. The settings we used were modified from those used successfully for wheat plants by Burgess et al. [13], as these were unsuitable for reconstructing the finer details of chickpea plants and underestimated plant surface area (as shown in Additional file 15: Figure S5). Briefly, compared to the settings used for wheat, the CMVS max_images parameter was increased from 40 to 120, allowing the whole image dataset to be analysed concurrently during reconstruction rather than being split into batches. This was possible due to the large memory capacity of the computer we used for processing (128 GB) and reduced the likelihood of multiple point clouds being produced for each plant. The PMVS2 min_images parameter was increased from 3 to 4, meaning that each 3D point in the reconstruction must be visible in at least four images. Functionally, this reduces noise and improves the accuracy of the point cloud. The PMVS2 csize parameter was reduced from 2 to 1 to create a denser point cloud. The PMVS2 wsize parameter was increased from 7 to 12 to provide more stable reconstructions by including more colour information when computing the photometric consistency score. Finally, the PMVS2 threshold parameter was reduced from 0.7 to 0.45. The threshold refers to the photometric consistency measure above which a patch reconstruction is deemed a success and kept in the point cloud; reducing it allowed us to retain more of the less consistent points. Note that more detailed descriptions of these parameters can be found in the CMVS + PMVS2 documentation. Point cloud generation was automated using a Windows batch file (provided in Additional file 3).

Dense point clouds were scaled (using the width of the pot as a reference), denoised based on colour (removing all but the green/brown points), reoriented (such that the ground was parallel to the X–Y plane) and any remaining non-plant points removed manually in Meshlab (Version 2020.06; [40]). Statistically outlying points were then removed using the statistical outlier removal (SOR) feature of CloudCompare (Version 2.11.0; GPL software). The remaining points were sub-sampled using Poisson disk sampling (Explicit Radius = 0.5, Montecarlo oversampling = 20; [41]). A meshed model was created from the sub-sampled point cloud using a ball pivoting algorithm (default settings; [42]) and any large holes in the meshed model filled with the close holes feature (max size to be closed = 50). All but the scaling, manual removal of non-plant points and outlier removal were run in a consistent and automated fashion using Meshlab scripts and a Windows batch file (provided in Additional files 4, 5, 6, 7, 8 and 9). Meshed models consisted of n triangles with 3D coordinates of the ith triangle given by a vector (xi1, yi1, zi1, xi2, yi2, zi2, xi3, yi3, zi3), where x and y correspond to coordinates parallel to the ground and z corresponds to height above the ground.

Analyses of geometric features (height, maximum width, etc.) and plant surface area were performed using the base functions in Meshlab. The surface area from Meshlab was divided by 2 to provide a “one-sided” area, which is referred to herein as total surface area. Canopy volume was measured in Meshlab after fitting a convex hull to the meshed model. A top-down orthographic projection of the model was exported as an image file and processed in ImageJ (Fiji 1.52p; [43]) to estimate projected plant area. Plant area index (PAI) was calculated as total surface area/projected plant area. Week-to-week relative growth rates (RGR) for total surface area were derived for each plant as per Pérez-Harguindeguy et al. [44], using Eq. 1:

$$\mathrm{RGR} = \frac{1}{t} \cdot \ln\left(\frac{A_{2}}{A_{1}}\right),$$
(1)

where t is the time (in days) between measurements of plant surface areas A1 and A2.
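Computationally, Eq. 1 and the PAI calculation reduce to a few lines of R. The following is a minimal sketch with made-up example values; the object and column names are illustrative rather than those used in our analysis scripts:

```r
# Minimal sketch: plant area index (PAI) and Eq. 1 relative growth rate
# from weekly trait tables. Values and column names are illustrative only.
library(dplyr)

traits <- data.frame(
  plant     = rep(c("p1", "p2"), each = 2),
  week      = rep(1:2, times = 2),
  total_sa  = c(37.8, 68.5, 40.1, 75.2),  # total ("one-sided") surface area, cm2
  proj_area = c(17.0, 30.9, 18.2, 33.8)   # projected plant area, cm2
)

traits <- traits %>%
  mutate(pai = total_sa / proj_area) %>%  # PAI is a dimensionless ratio
  group_by(plant) %>%
  arrange(week, .by_group = TRUE) %>%
  # Eq. 1 with t in days (7 d between weekly scans); lag() supplies A1
  mutate(rgr = log(total_sa / lag(total_sa)) / (7 * (week - lag(week)))) %>%
  ungroup()

print(traits)
```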

An R script was written to calculate the area of each individual triangle making up the surface of the meshed model and then to calculate plant surface area as a function of height. The script uses the png (version 0.1.7; [45]), rgl (version 0.100.54; [46]), Rvcg (version 0.19.1; [47]) and tidyverse (version 1.3.0; [48]) R packages. Briefly, the lengths of the ith triangle’s edges (Ai, Bi and Ci) are first calculated from the XYZ coordinates of its three vertices (xi1, yi1, zi1; xi2, yi2, zi2; and xi3, yi3, zi3), using Eqs. 2–4:

$$A_{i} = \sqrt {\left( {x_{i1} - x_{i2} } \right)^{2} + \left( {y_{i1} - y_{i2} } \right)^{2} + \left( {z_{i1} - z_{i2} } \right)^{2} } ,$$
(2)
$$B_{i} = \sqrt {\left( {x_{i1} - x_{i3} } \right)^{2} + \left( {y_{i1} - y_{i3} } \right)^{2} + \left( {z_{i1} - z_{i3} } \right)^{2} } ,$$
(3)
$$C_{i} = \sqrt {\left( {x_{i2} - x_{i3} } \right)^{2} + \left( {y_{i2} - y_{i3} } \right)^{2} + \left( {z_{i2} - z_{i3} } \right)^{2} } .$$
(4)

The one-sided area of the ith triangle (Si) is then calculated from the edge lengths Ai, Bi and Ci via Heron’s formula, halving the result to match the one-sided area convention described above, where si is the triangle’s semi-perimeter (Eq. 5):

$$S_{i} = \frac{1}{2}\sqrt{s_{i}\left(s_{i} - A_{i}\right)\left(s_{i} - B_{i}\right)\left(s_{i} - C_{i}\right)}, \qquad s_{i} = \frac{A_{i} + B_{i} + C_{i}}{2}.$$
(5)

The script outputs a visual summary of plant surface area as a function of height, as well as a comprehensive .CSV file that contains the extracted parameters from each reconstruction (XYZ coordinates of each triangle’s vertices, XYZ coordinates of each triangle’s centre, the area of each triangle, etc.). This R script is provided in Additional file 10. For comparisons across genotypes, each plant’s height data were normalised to its overall plant height and its area data to its total surface area.
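The core of this per-triangle calculation can be sketched as follows. This is a simplified illustration of Eqs. 2–5, not the full script from Additional file 10, and the input file name is hypothetical:

```r
# Simplified sketch of the per-triangle area analysis (Eqs. 2-5); the full
# script, including the .CSV and plot outputs, is in Additional file 10.
library(Rvcg)  # vcgPlyRead() imports the meshed model as a mesh3d object

mesh <- vcgPlyRead("plant_mesh.ply", updateNormals = FALSE)  # hypothetical file

v   <- mesh$vb[1:3, ]  # 3 x n_vertices matrix of XYZ coordinates
tri <- mesh$it         # 3 x n_triangles matrix of vertex indices
p1 <- v[, tri[1, ]]; p2 <- v[, tri[2, ]]; p3 <- v[, tri[3, ]]

# Edge lengths (Eqs. 2-4)
A <- sqrt(colSums((p1 - p2)^2))
B <- sqrt(colSums((p1 - p3)^2))
C <- sqrt(colSums((p2 - p3)^2))

# Heron's formula, halved to the one-sided area convention (Eq. 5)
s  <- (A + B + C) / 2
Si <- 0.5 * sqrt(s * (s - A) * (s - B) * (s - C))

# Sum triangle areas within 1 mm height slices (z of each triangle centre)
z_centre <- (p1[3, ] + p2[3, ] + p3[3, ]) / 3
slices   <- cut(z_centre, breaks = seq(0, ceiling(max(z_centre)) + 1, by = 1))
profile  <- tapply(Si, slices, sum, default = 0)

# Normalisation used for the cross-genotype comparisons (Fig. 6)
norm_height <- z_centre / max(z_centre)
norm_area   <- Si / sum(Si)
```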

Validation measurements

The height of each plant was measured using a ruler, from the base of the stem to the highest point of the canopy. Plants were then destructively harvested, the harvested plant material laid flat on a large sheet of white paper and an image taken from above using a digital camera (Canon EOS R; Canon Inc., Tokyo, Japan) mounted on a tripod for validation of total surface area (representative images used for ground truthing are presented in Additional file 15: Figure S5). A ruler was included in the image for scaling. Lens corrections were first performed on the captured images in Adobe Photoshop (Adobe Inc., San Jose, CA, USA) to remove distortion, and the images were then analysed using ImageJ (Fiji 1.52p; [43]) to obtain measurements of total plant surface area.

To test the assumption that 2D image analysis techniques would not accurately assess area-related traits due to overlapping plant elements, we analysed the side projected green area in two images of each chickpea plant from the week 5 image set used to reconstruct the 3D models. The two images chosen for each pair were separated by a 90° rotation but were both taken from the same height. Using a modified method of Atieno et al. [4], each image was scaled and an HSV colour-thresholding mask was used to compute the area of green plant material in ImageJ [43]. The mean variation in side projected area between the two images was 8.4%, whilst the maximum variation for an image pair was 25.1% (Additional file 17: Table S3), highlighting the need for 3D phenotyping techniques.
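In outline, the masking logic is equivalent to the following R sketch. Our analysis used ImageJ’s HSV thresholding tool; the thresholds, file name and pixel scale below are illustrative assumptions only:

```r
# Outline R analogue of the ImageJ HSV green-area mask. Thresholds, file
# name and mm-per-pixel scale are assumptions for demonstration only.
library(png)  # readPNG(), as used elsewhere in our R pipeline

img <- readPNG("side_view.png")           # hypothetical image; values in [0, 1]
rgb <- apply(img[, , 1:3], 3, as.vector)  # (n_pixels x 3) matrix of R, G, B
hsv <- grDevices::rgb2hsv(t(rgb), maxColorValue = 1)  # rows h, s, v in [0, 1]

# Keep "green" pixels: hue roughly 60-180 degrees, with minimum
# saturation and value to exclude the white background
green <- hsv["h", ] > 60/360 & hsv["h", ] < 180/360 &
  hsv["s", ] > 0.15 & hsv["v", ] > 0.10

mm_per_pixel <- 0.5                       # set from the ruler in frame
area_mm2 <- sum(green) * mm_per_pixel^2   # side projected green area
```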

We were concerned that movement of chickpea leaves during the measurement period (09:00–15:00) could influence estimates of surface area from the 3D scanner, as chickpea leaves move considerably during the day and this diurnal rhythm may affect the results. To address this concern, we scanned the same plant several times across the measurement time window and found minimal variation (< 2.3% deviation from the mean) in area estimates over time (Additional file 16: Figure S6).

Statistical analyses

Statistical analyses were performed in R [49]. For validation data, linear regression models were plotted to visually compare conventional and 3D scanner measurements. Root mean squared error (RMSE) and mean absolute percentage error (MAPE) were calculated using base R and the MLmetrics package (version 1.1.1; [50]), respectively. Spearman rank correlation coefficients (ρ) were used to statistically assess the regressions. Analysis of variance (ANOVA) was used to determine whether regression models differed statistically across genotypes. For representative growth data, statistical comparisons across genotypes were made using a repeated measures ANOVA with post-hoc Tukey’s HSD test using the emmeans package in R (version 1.4.7; [51]). Normalised area distribution data were analysed statistically using a non-parametric ANCOVA from the sm package (version 2.2–5.6; [52]). All regressions and representative data were visualised using ggplot2 in R (version 3.3.2; [53]).
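By way of illustration, the core of the validation statistics might look as follows in R. This is a sketch with simulated data; the object and column names are ours, not those of our analysis scripts:

```r
# Minimal sketch of the validation statistics using simulated data.
library(MLmetrics)  # MAPE(); RMSE is computed in base R below

set.seed(1)
validation_df <- data.frame(
  genotype = rep(c("Kalkee", "Hattrick", "Slasher"), each = 4),
  manual   = rep(c(120, 160, 200, 240), times = 3)  # conventional measurements
)
# Simulate scanner estimates ~4% below the manual values (cf. Results)
validation_df$scanner <- validation_df$manual * 0.96 + rnorm(12, sd = 3)

with(validation_df, {
  rmse <- sqrt(mean((scanner - manual)^2))         # RMSE, base R
  mape <- MAPE(y_pred = scanner, y_true = manual)  # MAPE, MLmetrics
  rho  <- cor.test(scanner, manual, method = "spearman")
  print(c(RMSE = rmse, MAPE = mape, rho = unname(rho$estimate)))
})

# Do validation regressions differ across genotypes? A significant
# scanner:genotype interaction term indicates genotype-specific slopes.
anova(lm(manual ~ scanner * genotype, data = validation_df))
```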