Structure from Motion Photogrammetry in Forestry: a Review

  • Jakob Iglhaut
  • Carlos Cabo
  • Stefano Puliti
  • Livia Piermattei
  • James O’Connor
  • Jacqueline Rosette
Open Access
Remote Sensing (J Suarez, Section Editor)


Purpose of Review

The adoption of Structure from Motion photogrammetry (SfM) is transforming the acquisition of three-dimensional (3D) remote sensing (RS) data in forestry. SfM photogrammetry enables surveys at low cost and with little technical expertise. We present the theoretical principles and practical considerations of this technology and show the opportunities that SfM photogrammetry offers for forest practitioners and researchers.

Recent Findings

Our examples of key research indicate the successful application of SfM photogrammetry in forestry, in an operational context and in research, delivering results that are comparable to LiDAR surveys. Reviewed studies have identified possibilities for the extraction of biophysical forest parameters from airborne and terrestrial SfM point clouds and derived 2D data in area-based approaches (ABA) and individual tree approaches. Additionally, increases in the spatial and spectral resolution of sensors available for SfM photogrammetry enable forest health assessment and monitoring. The presented research reveals that coherent 3D data and spectral information, as provided by the SfM workflow, promote opportunities to derive both structural and physiological attributes at the individual tree crown (ITC) as well as stand levels.


Summary

We highlight the potential of using unmanned aerial vehicles (UAVs) and consumer-grade cameras for terrestrial SfM-based surveys in forestry. Offering several spatial products from a single sensor, the SfM workflow enables foresters to collect their own fit-for-purpose RS data. With the broad availability of non-expert SfM software, we provide important practical considerations for the collection of quality input image data to enable successful photogrammetric surveys.


Keywords: SfM · Point cloud · UAV · Close-range photogrammetry (CRP) · Forest inventory · Forest health


Introduction

The use of remotely sensed (RS) data in forestry is motivated by efforts to increase the cost efficiency, precision and timeliness of forest information [1]. In contrast to traditional field-based sampling, the availability of full-coverage RS data enables the production of maps of key forestry variables, which are useful for forest management purposes. First examples of aerial imagery usage for forestry purposes date back to the beginning of the 1920s [2, 3]. Over the past century, there has been tremendous growth in the number of RS data sources available for the assessment and monitoring of forests. Three-dimensional (3D) RS data, which can describe tree or canopy height, have shown great potential for forest inventory [4]. In the past 20 years, airborne laser scanning (ALS) has been widely used for forest inventory purposes and has become the standard data source for operational forest inventories in many countries around the world [5, 6, 7]. Nevertheless, the acquisition of ALS data requires a degree of planning and investment, making these data sources cost-effective only on a relatively large scale [8]. Up to the beginning of 2010, there were no cost-effective means of acquiring high-resolution 3D RS data for smaller areas, such as single forest properties or forest stands. Furthermore, in those cases where ALS-based forest management is implemented, surveys are carried out infrequently, e.g. at intervals of 10–20 years [5]. Hence, for some forest stands, the information may be too unreliable for decision-making. Timeliness is a key requirement to enable the adoption of precision forestry practices. This is especially true when the forest structure is changing rapidly, as is the case in fast-growing regeneration forests, or when growth is hindered by biotic or abiotic disturbances.

Photogrammetric approaches to obtain 3D information on forest structure have become popular, offering substantial cost savings in the case of aerial photogrammetry compared with ALS [9, 10]. Photogrammetry is limited to the reconstruction of surfaces visible in the image data, providing ground information only where large vegetation gaps exist. However, photogrammetric data can be combined with pre-existing ground data, derived for example from light detection and ranging (LiDAR) data. This data synergy has been thoroughly discussed by Goodbody et al. [11], indicating the potential for cost-efficient forest inventory updates. Similarly, Kangas et al. [6] suggest an equal value of photogrammetric and ALS data in forest management planning, given that ALS ground information is available from previous campaigns. In addition to the proven complementary use of LiDAR and photogrammetric data [9, 11, 12], recent attempts at deriving inventory-relevant forest metrics from photogrammetric data alone show potential for aerial [13•, 14] and terrestrial [15•, 16] acquisitions. Further standalone use of photogrammetry has been shown for forest health monitoring [17, 18•, 19], species classification [20] and biodiversity assessments [21, 22].

In the last decade, a photogrammetric approach offering flexible and cost-effective acquisition of combined 3D and spectral RS data has found wide application and acceptance in physical geography [23]: Structure from Motion (SfM), paired with multi-view stereo (MVS) algorithms (SfM-MVS, commonly abbreviated to just SfM). SfM is based on computer vision and facilitates photogrammetric reconstruction from images alone. Contrary to traditional stereophotogrammetry, 3D information can be computed from overlapping images without the need for prerequisite information on camera location and orientation, camera calibration and/or surveyed reference points in the scene. This allows the use of inexpensive imaging platforms for both aerial and terrestrial applications.

SfM photogrammetry has been comprehensively reviewed in the geosciences [24, 25, 26••], where it has been gaining prominence for topographical surveys. We complement these findings with a summary of SfM photogrammetry use specific to forestry. We present an overview of the theoretical principles of a SfM-MVS workflow and its applications in forestry by reviewing a representative sample of key research in this field. Challenges and technical considerations are discussed, concluding with opportunities and practical implications for operational use of SfM by forest practitioners.

Structure from Motion: Theoretical Principles

Traditional stereophotogrammetry methods are based on an analogy with binocular human vision. Depth can be perceived from two points whose relative position is known. However, depth, volumes or 3D features can also be perceived from a single observing point if either the observer or the object is moving [27, 28]. SfM is a photogrammetric technique based on both of these principles: (i) binocular vision and (ii) the changing view of an object that is moving or observed from a moving point [29]. SfM is used for estimating 3D models from sequences of overlapping 2D images. It gained popularity in recent years due to its ability to deal with sets of unordered and heterogeneous images without prior knowledge of the camera parameters [30]. SfM differs from traditional photogrammetry mainly in three aspects: (i) features can be automatically identified and matched in images at differing scales, viewing angles and orientations, which is of particular benefit when small unstable platforms are considered; (ii) the equations used in the algorithm can be solved without information on camera positions or ground control points, although both can be added and used; and (iii) camera calibration can be automatically solved or refined during the process. SfM can thus automatically deliver photogrammetric models without requiring rigorous homogeneity in overlapping images, camera poses and calibrations [31, 32, 33].

‘SfM’ photogrammetry is commonly used to define the entire reconstruction workflow, from image set to dense point cloud; however, strictly speaking, SfM only refers to a specific step in the workflow that provides camera parameters and a sparse point cloud (see Fig. 1). Although some studies use the sparse point cloud as a final product [31, 34], in most cases, dense image matching algorithms, such as MVS, are used in a subsequent step to densify the point cloud. The whole process can thus be referred to as SfM-MVS. Figure 1 contains a schematic workflow of the whole SfM-MVS process, and Fig. 2 shows a graphic diagram of the main three steps.
Fig. 1

Schematic workflow of the SfM-MVS process resulting in a dense point cloud from image sets. The point cloud is georeferenced by providing positional information for images and/or ground control points

Fig. 2

The three key stages in a SfM-MVS workflow illustrated on two hypothetical images of a Canary Island pine forest: (1) keypoint identification and matching (e.g. SIFT), (2) SfM with camera parameters and a sparse point cloud as output and (3) the densified point cloud following MVS

The SfM-MVS process starts with the automatic extraction of keypoints (i.e. points or sets of pixels with distinctive contrast or texture) in the images. The keypoints are identified in all images and then tied (matched) across images where they appear. The scale-invariant feature transform (SIFT [35]) and its variations are the most common algorithms for keypoint identification and matching in SfM [26••]. SIFT produces numerical descriptors for each point in each image. These descriptors are invariant to scale and orientation, thus suitable for identifying points or objects in pictures taken from different perspectives and under different conditions. Then, coherence of keypoint matches is checked using a coarse reconstruction of the geometry of the images and the relative position of the keypoints on them (Figs. 1 and 2).
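The matching step that follows keypoint extraction can be sketched with the nearest-neighbour distance ratio test popularised by SIFT. The sketch below is a minimal NumPy illustration on hypothetical, low-dimensional descriptors; real SfM software operates on 128-dimensional SIFT descriptors and uses approximate nearest-neighbour search for speed:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match keypoint descriptors between two images using the
    nearest-neighbour distance ratio test (Lowe's criterion).
    desc_a, desc_b: (n, d) arrays of per-keypoint descriptors."""
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance to every descriptor in the other image
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        # Accept only if the best match is clearly better than the runner-up
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Toy descriptors: keypoint 0 in image A matches keypoint 1 in image B
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 0.98], [0.99, 0.02]])
print(match_descriptors(a, b))  # [(0, 1), (1, 0)]
```

The ratio test is what makes matching robust in repetitive scenes such as forest canopies: ambiguous keypoints, whose two best candidates are similarly distant, are discarded rather than mismatched.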

Given a sufficient number of images and keypoint matches, SfM performs bundle adjustments to simultaneously compute camera poses and parameters, and a sparse 3D point cloud of the scene (consisting of the position of keypoints matched in different images). The bundle adjustment is solved using (i) initialization values obtained from sequences of randomly selected matched keypoints and, complementarily, parameters from the cameras and poses and (ii) a non-linear refinement [36]. Then, the outputs of SfM are scaled and georeferenced based on ground control points (GCPs) and/or data from navigation devices from the camera or its platform (Figs. 1 and 2).
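The quantity that the bundle adjustment minimises, the sum of squared reprojection errors, can be illustrated with a simplified pinhole camera model. This is a sketch on synthetic values only; a real solver additionally estimates lens distortion and optimises all cameras and points jointly with a non-linear method [36]:

```python
import numpy as np

def project(points3d, R, t, f):
    """Simplified pinhole projection of world points to image coordinates.
    R: 3x3 camera rotation, t: translation, f: focal length in pixels
    (principal point at the origin, no lens distortion)."""
    cam = (R @ points3d.T).T + t           # world -> camera frame
    return f * cam[:, :2] / cam[:, 2:3]    # perspective division

def reprojection_error(observed, points3d, R, t, f):
    """Sum of squared pixel residuals -- the quantity that bundle
    adjustment minimises over camera poses, parameters and 3D points."""
    residuals = observed - project(points3d, R, t, f)
    return float(np.sum(residuals ** 2))

# Synthetic scene: two points in front of an identity camera
pts = np.array([[0.0, 0.0, 5.0], [1.0, -1.0, 4.0]])
R, t, f = np.eye(3), np.zeros(3), 1000.0
obs = project(pts, R, t, f)                   # perfect observations
print(reprojection_error(obs, pts, R, t, f))  # 0.0
```

A perturbed camera pose yields a non-zero error, which the adjustment drives back down by refining poses, camera parameters and point positions simultaneously.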

The camera poses and parameters obtained from SfM are then applied to generate a densified point cloud using MVS algorithms. Prior to the MVS densification, and for computational efficiency or even viability, images are clustered based on their location [37]. In this way, the dense point cloud of each cluster (i.e. group of images) is computed separately (Figs. 1 and 2).

A dense point cloud, with colour/spectral information derived from the input images, represents the primary output of the SfM-MVS workflow. Subsequent processing steps (for aerial surveys) typically involve the derivation of a digital surface model (DSM) and an orthomosaic. A canopy height model (CHM) can be attained by height normalization (i.e. conversion from height above sea level to height above ground) with a pre-existing digital terrain model (DTM). When SfM-derived surface data are height normalized in such a way, this enables the calculation of forest metrics like those commonly derived from ALS (e.g. height, timber volume, biomass). Additionally, image metrics like radiance/reflectance values and texture may be extracted [13•, 18•, 20, 38, 39]. Finally, rasterization can offer opportunities to explore the sensed information in more depth when statistics are calculated for every cell (e.g. height percentiles, surface roughness, spectral indices) [40, 41, 42].
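Height normalization and per-cell statistics reduce to simple raster arithmetic. A minimal sketch, assuming hypothetical 2 × 2 rasters in metres above sea level:

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """Height normalization: canopy height = surface model - terrain model."""
    return dsm - dtm

# Hypothetical rasters (metres above sea level)
dsm = np.array([[105.0, 108.0], [103.0, 110.0]])  # SfM-derived surface
dtm = np.array([[100.0, 100.0], [101.0, 102.0]])  # pre-existing ground model
chm = canopy_height_model(dsm, dtm)
print(chm)  # heights above ground: [[5. 8.] [2. 8.]]

# A per-cell statistic of the kind used as a forest metric
print(float(np.percentile(chm, 95)))
```

In practice the same operation is applied to georeferenced rasters (e.g. GeoTIFFs), and percentiles, roughness or spectral indices are computed per grid cell rather than over the whole scene.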

SfM Photogrammetry in Practice

With photogrammetry being a passive technique, results are highly influenced by the input image data. SfM photogrammetry, employing an automated process to identify and match features by computer vision, is fundamentally dependent on image quality. Sensors, settings and acquisition designs should be considered with great care.

In every circumstance, the camera settings need to be considered to ensure optimal image data are acquired given a set of constraints, namely (i) those from the environment (lighting conditions), (ii) the platform (UAV, pole, tripod or handheld) and (iii) the camera and lens combination (the exposure triangle, focal length, sensor size). Acquiring high-quality image data has been discussed in O’Connor et al. [43••] and Mosbrucker et al. [44], with key rules of thumb including keeping the motion of the camera to a minimum and increasing ISO (i.e. the sensor’s sensitivity) to account for potential underexposure (Fig. 3). Capturing RAW image data is preferable, as it retains the raw pixel values acquired by the camera prior to quantization and compression [45].
Fig. 3

Image quality issues illustrated by simulated degradations on a UAV image: a adds motion blur, which has negative impacts on the quality of photogrammetric reproduction (Sieberth et al., 2014); b adds JPEG compression, which smoothes subtle contrast changes across an image; c adds noise, which can rapidly degrade image quality at high ISO values; d adds overexposure, where the image sensor was exposed for too long, and e adds underexposure, where the image sensor was exposed for too short a time

Image network geometry has an impact on the quality of reproduction, and for every survey a ‘convergent’ imaging geometry should be sought, in which the principal axes (perpendicular to the image sensor) of the images converge, so that systematic error is minimized [46, 47]. For UAV imaging, James and Robson [46] suggest achieving this with fixed-wing UAVs by surveying with gently banked turns. With rotary UAVs, a similar result can be achieved by angling the camera on the gimbal on which it is mounted. For terrestrial imaging, a convergence of images on areas of interest (AOIs) is advised, as presented in Mosbrucker et al. [44].

Within image acquisition and SfM photogrammetric workflows, users have many parameters which they can vary depending on the equipment and software used. Some allow near-full control (e.g. the ‘exposure triangle’: ISO, shutter speed and aperture), though several can only be estimated prior to performing a survey (such as the exact camera positions from which images will be acquired). Other influential factors, which cannot be manipulated (e.g. light conditions), will have to be carefully considered when planning a SfM-based survey. The success of reconstruction is ultimately dependent on factors that can be broken down into five categories, as presented in Table 1. The accuracy of the position and scale of a survey is then determined by the referencing approach (e.g. GCPs, direct georeferencing, manual scaling).
Table 1

Overview of variables influencing the results of a SfM survey

Scene characteristics
- Surface contrast: high surface contrast allows for feature-point detection
- Pattern repetition: increase overlap and increase accuracy of geotags
- Moving features: increase overlap and viewing angles

Lighting conditions
- Sun angle: high; solar noon is ideal
- Overcast provides even lighting (ambient occlusion) for structural (RGB) surveys; for spectral surveys little atmospheric influence may be required, i.e. clear skies
- Changing illumination: keep lighting as constant as possible

Camera parameters
- Focal length: wide, but not too wide, to minimize distortions; 28–35 mm is a good basis (James et al. 2012)
- Exposure: well exposed*
  - Aperture: small for maximum DOF*; f/8 an advisable default
  - Shutter speed: high for reduced motion blur*; ground speed (m/s) × exposure time (s) gives the blur distance (divide by the GSD for blur in pixels)
  - ISO: low for minimum noise*; auto-ISO an advisable default
- Pixel pitch: as high as is practical; physical pixel size positively influences dynamic range and sensitivity

Survey characteristics
- Overlap: high (> 80% forward and lateral) as a rule of thumb for forests, to increase redundancy and matchability in scenes with high pattern repetition, moving features and/or occlusions; a UAV-SfM data acquisition should be planned so that each point is visible in at least 4–5 images
- View angles: convergent for reduction of systematic errors (RGB); parallel (nadir) for spectral sensing (reflectance)
- Survey range: with increasing distance to the object/scene (increasing GSD), survey precision degrades; increased GSD requires higher overlap

Processing parameters
- Image scale: if matching is not successful at full image scale, matching at ½ or ¼ scale may promote matchability
- Keypoints: the number may be reduced for large datasets to reduce processing time
- Densification: may not always be required at full image scale/maximum point cloud density
- Secondary products: multitude of algorithms for meshing, gridding etc. (results will depend on the specific method)

*Ideal scenario; exposure will always be a compromise between aperture, shutter speed and ISO

To apply SfM photogrammetry in forestry, important aspects of a successful survey are as follows: (i) the scene is covered with overlapping images from multiple locations and angles (high overlap to increase redundancy, and multiple viewing angles of the same object to reduce occlusions and systematic errors); (ii) any feature to be reconstructed should be visible in at least three images (five or six images for dense vegetation), and the angular divergence between neighbouring images should not exceed 10–20°; (iii) the scene is sufficiently illuminated (constant lighting is preferable, e.g. overcast or cloud-free conditions) and (iv) the object of interest is static (preferably no movement of branches in wind).

The Current Status of SfM in Forestry

With the ability to produce highly detailed 3D information from a set of images alone, SfM photogrammetry puts a powerful tool in the hands of anyone looking to collect their own fit-for-purpose RS data. Owing particularly to the potential of using off-the-shelf cameras and the availability of affordable user-friendly software, the application of SfM photogrammetry in physical geography has increased rapidly [26••, 48]. With SfM photogrammetry being scale independent, images may be acquired from a multitude of platforms ranging from ground-based, handheld or pole-mounted options, to unmanned aerial vehicles (UAVs) and manned aircraft. UAVs have enabled geospatial data to be acquired in new ways. Flexibly deployed at scales from several hectares to square kilometres [49], they allow forest practitioners to collect their own aerial information. In fact, there is an increasing interest in UAV forest surveys that can arguably be attributed to SfM-based photogrammetric processing [26••].

The rapid adoption of SfM photogrammetry is indicated by a growing number of scientific publications in forestry that utilize this photogrammetric technique. We conducted a search for peer-reviewed studies indexed by the Web of Knowledge database using the keywords ‘Structure from Motion’, ‘UAV’ and ‘Forestry’ (and their most common variations). The search results were manually filtered to retain only forestry-related studies applying a SfM-based workflow. We further categorized results into research on aerial and terrestrial inventory, forest health and proof-of-concept studies. These results are presented in Table 2 and reveal a steady rise of publications on forest remote sensing with SfM photogrammetry.
Table 2

Number of publications on SfM photogrammetry for forest/tree remote sensing per year, with manually assigned sub-categories: SfM in forestry overall, subdivided into airborne inventory, terrestrial inventory, forest health and proof-of-concept studies (yearly counts not reproduced here)

Results presented are based on a manually filtered search in the scientific publications database Web of Knowledge using the search terms: TS = (‘Structure from Motion’ OR ‘Structure-from-Motion’ OR ‘SfM’ OR ‘sfm’ OR ‘structure from motion’ OR ‘structure-from-motion’ OR photogrammetry OR UAS OR SfM OR UAV OR RPAS OR drone OR CRP OR ‘unmanned aerial’ OR ‘Unmanned Aerial’) AND TS = (forest OR forestry OR tree). The date of this search was 21 February 2019

SfM photogrammetry applications aimed at forest inventory are currently the most studied (Table 2). Here, a distinction between aerial and terrestrial approaches can be made. An aerial approach typically utilizes a canopy surface model derived from SfM and/or associated spectral properties to estimate inventory relevant parameters [11]. Terrestrial acquisitions, also termed close-range photogrammetry (CRP), focus on the reconstruction of stems within sample plots or the reconstruction of individual trees [15•].

A further field of research is the assessment and monitoring of forest health condition. For SfM-based mapping of the canopy, aerial acquisition of image data, most commonly by UAV carrying multispectral sensors, prevails. Studies dealing with forest health often make use of both the 3D information and the derived 2D spectral products that SfM photogrammetry delivers [18•, 50]. The following sections describe the research on SfM-based forest inventory and health assessments to date in more detail.


Forest Inventory

Forest inventory holds a central role in forest research. Sustainable management of forests relies on knowledge of their structure, distribution and dynamics over time [51]. The collection of field data for inventory purposes is labour intensive, time-consuming and expensive, and cannot be applied to large areas, drastically limiting the number of field inventories that can be afforded [52, 53]. Efforts to improve the efficiency of inventory practices therefore drive research in this field [53]. Amongst RS technologies, SfM photogrammetry offers a low-cost and flexible approach to collect information on forest structure; naturally, there has been an increase in interest in using such data for forest inventory.

Within the context of forest inventory, the main use of SfM photogrammetry has been its application to UAV image data to produce wall-to-wall auxiliary information in a similar fashion to ALS data. As such, UAV-SfM data has been shown to be suitable for the estimation of inventory-relevant biophysical parameters such as height, density and biomass [11, 12, 34, 54, 55, 56]. Even though SfM has mostly been applied to aerial image data, in recent years there has been an increasing effort in developing terrestrial SfM applications to replace or augment field data collections. The focus of studies incorporating CRP lies in estimating diameter at breast height (DBH), tree position and stem curves. The following sub-sections elaborate further on the developments to date regarding aerial and terrestrial SfM and highlight some of the key work on using these photogrammetric data for inventory purposes.

Aerial Inventory

The use of SfM techniques applied to aerial image data for forest inventory was pioneered by Dandois and Ellis in 2010 [54]. These authors were the first to use a series of unordered but overlapping images acquired using a consumer-grade camera mounted on a kite to produce a dense 3D point cloud representing the forest canopy. A first attempt to model forest biophysical properties using UAV-SfM data was done by Dandois and Ellis in 2013 [34] and Lisein et al. in 2013 [55]. Both studies found that even though the results were not consistent in all the studied areas, there was a correlation between UAV-SfM data and variables such as dominant height (R2 = 0.07–0.91) or aboveground biomass (R2 = 0.27–0.73). A more comprehensive evaluation of the possibilities to use UAV-SfM for forest inventories came with the studies by Puliti et al. in 2015 [12] and Tuominen et al. in 2015 [56] who extended their evaluation to the range of biophysical variables commonly used in forest management. Their results in terms of RMSE% for dominant height (3.5%), Lorey’s height (13.3%–14.4%), stem density (38.6%), basal area (15.4–23.9%) and timber volume (14.9–26.1%) were found to be similar to errors associated with ALS-based forest inventories. While these two studies set an important benchmark, they were both conducted in even-aged managed boreal forests and thus provided limited information on how UAV-SfM may perform in different forest types and forest developmental stages.

Since the early days of UAV-SfM, the rapid growth in computing capabilities and the availability of UAVs and SfM software triggered increased interest in the scientific community (see Table 2). This led to a widespread evaluation of UAV-SfM technology over a variety of forest types and forest developmental stages. UAV-SfM data has consistently proven to be useful for forest inventories in a large variety of forest types, including temperate European beech forests in Italy [13•], mangrove forests in Malaysia [57], tropical forests in Guyana [58], mixed conifer-broadleaved forest in Japan [59], sparse sub-alpine coniferous forests in China [60], tropical woodlands in Malawi [41] and various plantations around the globe [61, 62, 63]. From these studies, a conclusion can be drawn that the accuracy of UAV-SfM models is consistent across many different forest types and on a similar scale to ALS models. All of the aforementioned studies dealt with mature to nearly mature forests, while little effort has been dedicated to estimating biophysical variables for forests under regeneration [64, 65, 66•]. Nevertheless, the use of UAV-SfM data for regeneration forests may outperform alternative data sources such as field assessments or the use of ALS data in terms of costs and accuracy. Goodbody et al. [64] demonstrated the possibility of discriminating coniferous and deciduous species (overall accuracy of 86–95%). Puliti et al. [66•] showed that UAV-SfM data could be used to accurately model stem density and height (RMSE% = 21.8% and 23.6%). Such results represent a substantial increase in accuracy over ALS forest inventories and field assessment. Furthermore, their study reported that data acquired using UAV-SfM techniques halved the amount of time required for the traditional field surveys commonly performed in regeneration stands. Thus, the use of UAV-SfM for regeneration forests may be particularly interesting since it allows a simultaneous increase in the precision of the inventory while reducing its costs.

Different methodological approaches have been applied to UAV-SfM data, similarly to ALS data. The methods can be categorized into area-based approaches (ABA) [67] and individual tree crown (ITC) approaches [68, 69]. In the former, the population units are grid cells with an area equal to that of the field plots; in the latter, they are polygons representing single-tree crowns. In both cases, the UAV-SfM data corresponding either to the grid cells or to the single-tree crowns are linked through models to a sample of field observations, either for field plots or for single trees. These models are then applied to all the population units, either for the estimation of stand-level parameters or for forest-level mapping. The results of ABA methods have been presented in the previous paragraph. The adoption of ITC approaches with UAV-SfM has been found to be useful for detecting single trees with 25–90% detection accuracy [63, 70, 71], classifying them according to tree species with overall accuracies up to 95% [71], and measuring their height with RMSEs in the range of 0.5–2.84 m [55, 63]. In addition to the rather large variability in the accuracy of some of these variables, the results of UAV-SfM ITC approaches vary according to forest type, since they remain limited to the detection of the dominant tree layer, while smaller and dominated trees remain mostly undetected.
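The model-linking step of the ABA can be sketched in a few lines. The plot metrics and volumes below are hypothetical values for illustration only; operational ABAs typically use many plots, several point-cloud metrics and more sophisticated model forms:

```python
import numpy as np

# Hypothetical training data: 95th height percentile (m) of the UAV-SfM
# point cloud per field plot, and field-measured timber volume (m^3/ha)
p95 = np.array([12.0, 18.0, 25.0, 30.0])
volume = np.array([80.0, 150.0, 240.0, 310.0])

# Fit a simple linear ABA model on the sampled plots
slope, intercept = np.polyfit(p95, volume, 1)

# Apply it wall-to-wall: predict volume for unsampled grid cells
cell_p95 = np.array([15.0, 28.0])
predicted = slope * cell_p95 + intercept
print(np.round(predicted, 1))
```

The same pattern applies to the ITC variant, with crown polygons replacing grid cells as the population units.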

Terrestrial Inventory

Currently, terrestrial laser scanning (TLS) is the most accurate non-contact measurement method to derive detailed forest inventory information at the plot level [15•]. The main drawbacks of this technology are the high hardware costs [53] and the time required for the multiple scans that mitigate occlusions, along with the post-processing needed to provide full coverage of a plot [72]. Mobile laser scanning systems reduce acquisition time, but high costs remain [73].

The deficiencies of traditional field data collection and the need to reduce the cost of alternative laser scanning solutions have encouraged the application of terrestrial photogrammetry for forest inventory. Efforts to utilize terrestrial photogrammetric point clouds for deriving forest parameters are motivated by the low cost of the equipment for data collection, the automated SfM-based data processing and the potentially simple and fast data acquisition [74]. Requiring only a camera, typically handheld or mounted to a pole or tripod, terrestrial SfM photogrammetry makes such a system highly mobile, reducing the risk of occlusion yet providing a level of detail comparable to TLS [75].

Studies on terrestrial SfM for forestry purposes have become more frequent in recent years (Table 2) and mainly focus on linear rather than volumetric tree metrics. Studies vary according to (i) the scale of application, i.e. plot-level and individual tree reconstruction; (ii) the measured forest parameters, like tree position, DBH, height and stem curve; (iii) the resolution of the sensor, e.g. video, mobile phone and SLR camera; (iv) the camera configuration and photographic path and (v) the equipment used to acquire the images, e.g. pole, tripod, camera rig and backpack. Based on these aspects, an overview of key work on terrestrial SfM applications, together with the obtained accuracies, acquisition methods and geo-referencing approaches, is provided.

Most of the recent studies on photogrammetric measurements of forest parameters are based on the single-camera technique, according to which overlapping images are acquired around the plot (Fig. 4).
Fig. 4

Example of a terrestrial SfM survey [51] in an open forest plot showing a the configuration of camera positions and orientations, dense point cloud and ground control points; b an example of an image and the dense point cloud from the same point of view and c the dense point cloud of a single stem without RGB colouring together with a 10-cm cross-section at 1.3 m (light blue point cloud)

Terrestrial photogrammetry has been evaluated in several studies in the past few years at plot scales [14, 15•, 16, 74, 76, 77, 78]. In these studies, DBH and tree locations were estimated in circular plots with diameters ranging from 12 to 30 m. The reported RMSE of the DBH ranged from 0.88 to 6.80 cm compared with either field or TLS DBH measurements. Tree detection ranged between 60 and 98%. Results were influenced by the complexity of the forest plot, the acquisition path and mode.

The impact of the photographic path on the accuracy of forest metrics derived from terrestrial SfM point clouds was first investigated by Liang et al. in 2014 and 2015 [74, 75], followed by Mokroš et al. in 2018 [78]. According to Mokroš et al. [78], the optimal acquisition setup consisted of portrait images, a stop-and-go shooting mode and a path leading around the plot with two diagonal paths through the plot. In contrast, Liang et al. [75] concluded that landscape images yielded optimal image matching results, together with a photographing path based on the inside and outside of an inner circle (Fig. 4). For complex forest plots, Piermattei et al. [15•] found that the optimal acquisition path was a combination of the solutions found by Liang et al. [75] and Mokroš et al. [78]: landscape images, stop-and-go mode around the plot pointing in, followed by an inner circle pointing out of the plot and two diagonals. This solution allowed reconstruction of stems with an accuracy of a few centimetres up to a few metres above ground. For low-density and medium-density forests, Liu et al. [77] proposed a system that combines two pole-mounted cameras with an RTK GNSS for continuous capturing. Compared with total station measurements, their automatic determination of tree position, DBH and height achieved RMSEs of 0.16–0.2 m, 0.92–1.13 cm and 2.41–2.51 m, respectively.
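A common way to obtain DBH from a stem cross-section such as the one in Fig. 4c is a least-squares circle fit. The sketch below uses the algebraic (Kåsa) formulation on synthetic, noise-free stem points; it is an illustration of the principle, not the specific method of any cited study:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to 2D points, e.g. a
    stem cross-section extracted at 1.3 m height. Returns (cx, cy, r)."""
    x, y = xy[:, 0], xy[:, 1]
    # Linearised circle equation: 2*cx*x + 2*cy*y + c = x^2 + y^2
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Synthetic points on a stem of 0.30 m diameter centred at (1, 2)
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.column_stack([1 + 0.15 * np.cos(theta), 2 + 0.15 * np.sin(theta)])
cx, cy, r = fit_circle(pts)
print(round(2 * r, 3))  # DBH in metres -> 0.3
```

On real, one-sided point clouds the fit is applied to a thin height slice after stem segmentation, often with outlier rejection (e.g. RANSAC) since only part of the circumference is visible.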

Most investigations of terrestrial SfM reconstructed single trees rather than entire forest plots [16, 76, 79, 80, 81, 82, 83]. In these studies, DBH was the parameter estimated most frequently and was often compared against TLS data for accuracy assessment. Although sub-centimetre accuracy was achieved in all cases, the obtained RMSEs differed according to the approaches used, forest types and survey conditions, i.e. natural forests versus controlled field settings.
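In such studies, DBH is commonly derived by fitting a circle to a thin horizontal slice of the stem point cloud at breast height. A minimal sketch on synthetic points, using an algebraic (Kåsa) least-squares fit; on real, noisy slices a robust variant (e.g. RANSAC) would typically be used:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit.
    Solves x^2 + y^2 = 2*a*x + 2*b*y + c for the centre (a, b)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return (a, b), r

# Synthetic stem slice at 1.3 m: noisy points on a 30-cm-DBH stem
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta)])
pts += rng.normal(0, 0.003, pts.shape)   # ~3 mm point noise

centre, radius = fit_circle(pts)
dbh_cm = 2 * radius * 100
print(f"estimated DBH: {dbh_cm:.1f} cm")  # ~30 cm
```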

Not all studies report the time required to collect the images. It can range from around 10 min to 2 h depending on the system used, the parameters to be estimated, plot size and survey configuration, excluding the time needed to acquire scaling measurements. The accuracy of the scaling factor, however, is crucial for forest plot and individual tree reconstruction [74]. To scale the photogrammetric point cloud, most studies used targets surveyed by total station, meaning additional equipment must be carried into the field, which increases the total acquisition time per plot and reduces the portability of the surveying system. Aside from systems requiring a GNSS solution [77], to date only Liang et al. [74] have tested natural reference objects, e.g. tree stems, for determining the correct scale. Their results showed that both natural reference objects and artificial targets worked effectively.
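Scaling itself is a simple similarity transform: once the distance between two reference points (targets or natural objects) is known from a field measurement, every model coordinate is multiplied by a single factor. A minimal sketch with hypothetical coordinates:

```python
import numpy as np

def scale_point_cloud(points, p_model, q_model, true_distance):
    """Scale an arbitrarily scaled SfM point cloud so that the model
    distance between two reference points matches a field-measured
    distance (e.g. between two targets or two stems)."""
    model_distance = np.linalg.norm(np.asarray(q_model) - np.asarray(p_model))
    s = true_distance / model_distance
    return points * s, s

# Toy cloud in arbitrary SfM units; two targets known to be 5.0 m apart
cloud = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 1.0, 0.5]])
scaled, s = scale_point_cloud(cloud, cloud[0], cloud[1], true_distance=5.0)
print(s)  # 2.5: each model unit corresponds to 2.5 m
```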

Health Assessment and Monitoring

As part of sustainable forest management, the assessment and monitoring of forest health condition play a crucial role. With threats to forests thought to be increasing globally [84], identifying declining forest health induced by biotic, abiotic and anthropogenic stress agents becomes imperative. Compared with field assessments, RS approaches offer rapid, spatially inclusive and objective ways to monitor forest health. With the aim of identifying and observing stress in plants, multi- and hyperspectral sensors capture information outside the visible spectrum, allowing estimation of biochemical plant traits such as chlorophyll, leaf pigments and canopy water content [85]. Spatially continuous spectral mapping used to be exclusive to the manned airborne surveying domain. In recent years, however, lightweight sensors with discrete narrow spectral bands suitable for UAV mounting have become commercially available, allowing researchers to collect their own aerial spectral data [17, 86, 87, 88]. Such 2D spectral imagers may be used for SfM-based photogrammetric reconstruction and orthophoto generation in the same way as RGB cameras, although they typically offer lower spatial resolution.

In contrast to precision agriculture, where SfM-based processing of spectral image data is widely applied [89], there are currently only a few examples in forestry where SfM-derived spectral map products have found application. Early UAV/SfM-based studies of forest health used off-the-shelf RGB cameras modified to capture near-infrared (NIR) images. Lehmann et al. [19] and Michez et al. [18•] used an object-based image analysis (OBIA) approach to segment and classify their scenes in order to identify declining tree health caused by biotic stress agents on alder and oak. They achieved good overall classification accuracies (79.5–84.1% and 82.5–85% at their respective study sites) across five classes. However, they pointed out the limitations of NIR-modified RGB cameras: visible and NIR spectra are not separable on the same sensor; spectral contamination occurs due to broad and overlapping bands (see also Pauly [90]); and changing light conditions cannot be corrected for, as downwelling irradiance is not captured.

Näsi et al. [86] used hyperspectral image data combined with the SfM-derived DSM for bark beetle damage detection at the individual tree level, achieving an overall classification accuracy of 76% when distinguishing between healthy, infested and dead trees. In a follow-up study, Näsi et al. [91] concluded that the individual tree-based approach, facilitated by the combination of 3D and spectral data, provides a promising and cost-efficient alternative to field-based assessments of pest infestation. Minařík and Langhammer [92] also used a UAV-SfM-based approach to map bark beetle forest disturbance and found that bands from the red edge and NIR part of the spectrum were best suited for stress detection. These findings are in line with the results of Dash et al. [17], who assessed the potential of a commercially available multispectral sensor, the Micasense RedEdge (Micasense Inc., Seattle, WA, USA), for detecting early signs of stress during a simulated disease outbreak in a pine plantation. In their random forest classification of time-series data, the normalized difference vegetation index (NDVI) proved to be the best-performing predictor variable for mapping physiological stress symptoms associated with declining tree health. Further recent examples of forest health monitoring are Baena et al. [93] and Brovkina et al. [50], who both successfully applied an OBIA approach to SfM-mapped NIR image data from a modified consumer RGB sensor to separate dead from living trees.
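The NDVI used in such studies is computed per pixel from the red and NIR bands of the SfM-derived orthomosaic. A minimal sketch on toy reflectance values (the 0.4 threshold below is illustrative only, not a value from the cited studies):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance rasters: healthy canopy (high NIR) vs stressed
nir_band = np.array([[0.50, 0.45], [0.20, 0.10]])
red_band = np.array([[0.05, 0.06], [0.10, 0.09]])
index = ndvi(nir_band, red_band)
stressed = index < 0.4      # illustrative threshold mask for declining pixels
print(np.round(index, 2))
```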


Discussion
Used complementary to existing RS data (e.g. LiDAR) or on its own, SfM photogrammetry has shown great potential for forestry. Particularly attractive is the ability to use uncalibrated cameras on unstable or handheld platforms, enabling the use of low-cost, non-expert equipment. Ground and aerial SfM surveys can be carried out with high flexibility, offering the option of more frequent RS surveys that incorporate, e.g., phenological changes into the analysis [59, 65, 71]. The implementation of SfM algorithms in modern photogrammetric software enables on-demand processing with little required user input. SfM photogrammetry thus presents a highly accessible and versatile solution for the acquisition of very high-resolution 3D data. In this regard, SfM empowers forestry practitioners to produce timely data analytics with minimal investment in hardware and software.

Additional value in an SfM-based processing chain derives from the ability to provide multiple geospatial data products (i.e. 3D models and orthomosaics) from a single sensor. Spectral information is inherently linked to the reconstructed structural data and derived products, as these are generated directly from the input imagery. Studies on forest health particularly highlight the benefits of this fusion of structural and spectral information that SfM-based processing of UAV image data provides [18•, 50].

Point clouds generated from high-resolution images can exhibit point densities greater than LiDAR, providing more detailed information on the visible surface of forests. The increased spectral variation in such high-resolution data may provide another valuable source of information, namely texture, as is the case in OBIA approaches [94]. Alongside computational analysis, high-resolution SfM-generated models appear visually realistic, providing experts with a near-true depiction of the scene. Being intuitive to interpret, SfM models thus hold an important advantage over coarser remote sensing products by enabling rapid visual assessment and/or validation.

As with all RS data, these will only ever be an approximation of the Earth's surface, and some limitations will always remain. With SfM photogrammetry being a young technology, the boundaries of these limitations have not yet been fully tested. The main current challenges with SfM photogrammetry for forest applications relate to the following:
  1)

    Replicability under varying acquisition conditions:

    With SfM photogrammetry enabling frequent surveys, variations in illumination, atmospheric and seasonal conditions are inevitable between acquisitions. Because SfM is a passive sensing technique, these variations are directly reflected in the data and thus in the replicability of analyses. To ensure that SfM data can be used on demand, allowing acquisitions at different times of the year, it is therefore crucial to develop protocols for varying conditional scenarios and models that account for variations in the data.

  2)

    Availability of accurate DTMs:

    Most airborne inventory studies presented here relied on highly accurate DTMs (e.g. ALS-based DTMs) to normalize UAV-SfM data; such DTMs are relatively rare around the globe, potentially limiting the area of application of UAV-SfM. To overcome this issue, some authors proposed the use of DTMs generated from the UAV-SfM data themselves [41] or of coarse-resolution global DTMs such as shuttle radar topography mission (SRTM) data [41]. While the former type of DTM is obtainable only in open forests, the latter source was found to be unsuitable for estimating aboveground biomass. A conceptually novel approach came with the study by Giannetti et al. [13•], who, to overcome these limitations, proposed the use of variables derived from UAV-SfM data without prior normalization (i.e. DTM-independent variables). Their results showed that models fitted to raw UAV-SfM data alone predicted stem volume with similar accuracy to ALS data, even in highly productive broadleaf forest on steep terrain. Despite such encouraging results, it remains fundamental to apply the method of Giannetti et al. [13•] to a wider variety of forest types and response variables. Furthermore, the greater complexity of DTM-independent variables compared with more traditional explanatory variables could limit the transferability of the models through space and time.
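The normalization step at issue here subtracts the terrain model from the SfM-derived surface model to obtain a canopy height model (CHM = DSM - DTM), from which tree and stand heights are read. A minimal raster sketch, assuming perfectly co-registered toy grids:

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """Normalize a surface model with a terrain model: CHM = DSM - DTM.
    Negative values (noise, pits) are clipped to zero."""
    return np.clip(dsm - dtm, 0.0, None)

# Toy 3x3 grids (metres): sloping terrain under a small canopy patch
dtm = np.array([[100.0, 101.0, 102.0],
                [100.5, 101.5, 102.5],
                [101.0, 102.0, 103.0]])
dsm = dtm + np.array([[0.0, 12.0, 15.0],
                      [0.0, 18.0, 16.0],
                      [0.0,  0.0,  0.0]])
chm = canopy_height_model(dsm, dtm)
print(chm.max())   # 18.0 m: tallest tree in the window
```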

  3)

    Lack of acquisition and processing protocols:

    The success of an SfM-based photogrammetric acquisition depends largely on the sensor used, the photographic path and viewing angles, the chosen image overlap and the composition of the scene. Adjustments to the acquisition approach to ensure quality data are currently made based on the surveyor's experience. Protocols that ensure reliable SfM outputs across forest types and phenological stages, while minimizing acquisition effort, need to be established. Eltner et al. [24] suggested a protocol for the collection of image data in geoscientific studies, which should be extended to take forestry-specific factors into account. Additional research is required on the parametrization of SfM-based photogrammetric software specifically for vegetated scenes. Processing protocols designed to deliver data adequate to the research question and to optimize processing speed are needed.

  4)

    Image matching issues:

    Forests may prove to be challenging scenes for the feature matching algorithms underlying an SfM workflow. Their fine uniform texture, repeating patterns and potential movement (e.g. branches in wind) can introduce matching uncertainty, leading to incomplete reconstruction and/or noisy point clouds. In such cases, the likelihood of identifying visual similarities in overlapping images is improved by increasing the distance to the area of interest (AOI), which increases the number of features per image and decreases perspective distortions. Coarser ground sampling distances (GSDs) and higher image overlaps were shown to positively influence image matching [95, 96]; overlap should thus be increased when the GSD is decreased (i.e. for images with finer detail). Other potential mitigation strategies for reconstruction errors, such as using high-accuracy position and orientation information to reduce matching uncertainty, have yet to be studied.
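The interplay of flying height, GSD and overlap noted above follows from simple camera geometry: GSD scales linearly with distance to the AOI, and along-track overlap depends on the ratio of the camera base (distance between exposures) to the image footprint. A sketch with hypothetical survey parameters:

```python
def gsd_m(pixel_size_um, focal_mm, altitude_m):
    """Ground sampling distance (m/pixel) for a nadir-pointing camera."""
    return (pixel_size_um * 1e-6) * altitude_m / (focal_mm * 1e-3)

def forward_overlap(footprint_m, base_m):
    """Fraction of the along-track image footprint shared by consecutive images."""
    return max(0.0, 1.0 - base_m / footprint_m)

# Hypothetical survey: 4.4-um pixels, 8.8-mm lens, 4000-px-wide sensor
alt = 100.0                      # flying height above canopy (m)
gsd = gsd_m(4.4, 8.8, alt)       # 0.05 m/px
footprint = gsd * 4000           # 200 m along-track footprint
overlap = forward_overlap(footprint, base_m=40.0)
print(overlap)                   # 0.8, i.e. 80 % forward overlap
```

Doubling the altitude doubles the footprint (and coarsens the GSD), so the same camera base yields a higher overlap, which is consistent with the mitigation strategy described above.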


To widely employ SfM photogrammetry in operational forestry, future research needs to tackle the abovementioned hurdles. It is essential to develop a consensus on acquisition protocols and on the parametrization of SfM photogrammetry software suited to specific research questions across forest types and environmental conditions. We have started to gain some understanding of how image quality, overlap, GSD and photographic path influence SfM-based reconstruction [15•, 43••, 95, 96, 97]. However, prior to processing, uncertainty remains in predicting the completeness of these photogrammetric models. More in-depth work on these influential parameters is needed, in conjunction with the development of methods that allow reliable quality estimation of SfM-based outputs. Towards the quantification of data quality, James et al. [98] presented a method for estimating the precision of each point produced within the SfM pipeline by repeatedly running bundle adjustments on a set of input images. These 'precision maps' allow practitioners to describe the spatial variability of precision within SfM-derived products and to gain insight into the limitations of a given survey (such as image quality or control-point measurements). To our knowledge, 'precision maps' have not yet been applied to forested scenes. Future forestry studies would benefit from this method to objectively describe the data quality of SfM-derived products and thereby reduce uncertainty in subsequent analyses.
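The idea behind such precision maps can be illustrated numerically: given the same tie points estimated by many perturbed bundle-adjustment runs, per-point precision is the spread of the repeated coordinate estimates. A toy Monte Carlo sketch with simulated runs (all values hypothetical; a real implementation would perturb and re-run the bundle adjustment itself):

```python
import numpy as np

def precision_map(replicates):
    """Per-point coordinate precision from repeated reconstructions.
    replicates: array of shape (n_runs, n_points, 3) holding the same
    tie points as estimated by n_runs independent, noise-perturbed
    adjustments. Returns the per-point std dev of the 3D position."""
    return np.linalg.norm(np.std(replicates, axis=0), axis=1)

# Simulate 50 runs around 4 'true' points with point-dependent noise
rng = np.random.default_rng(42)
truth = rng.uniform(0, 10, size=(4, 3))
sigma = np.array([0.01, 0.02, 0.05, 0.10])           # per-point noise (m)
runs = truth + rng.normal(size=(50, 4, 3)) * sigma[None, :, None]
prec = precision_map(runs)
print(np.round(prec, 3))   # larger values flag poorly constrained points
```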


Conclusions
A camera and a computer are the basic requirements for SfM photogrammetry. Given the examples presented here, and the valuable information that can be extracted from SfM-derived data, SfM photogrammetry shows great potential for forest practitioners and researchers. With UAVs used for the acquisition of aerial image data, the canopy of a forest can be mapped nearly in real time, responding rapidly to management needs. The temporal and spatial dimensions that SfM photogrammetry provides enable the assessment and monitoring of forests in an economical way that did not exist before.

However, constraints linked to the fundamental nature of SfM photogrammetry as a passive optical method will remain. Influencing factors such as viewing geometry, lighting and the availability of static texture have to be carefully considered prior to every survey. We suggest that, in order to enable the collection of fit-for-purpose data with predictable quality, further work is needed on acquisition and processing protocols specific to forestry. Further progress in these areas will facilitate the move away from proof-of-concept studies towards the operational application of SfM photogrammetry in forestry.



Funding Information
Jakob Iglhaut is funded by the Royal Society Research Grant RG140494. Stefano Puliti is funded by NIBIO.

Compliance with Ethical Standards

Conflict of Interest

Jakob Iglhaut, Carlos Cabo, Stefano Puliti, Livia Piermattei, James O’Connor, and Jacqueline Rosette declare that they have no conflict of interest.

Human and Animal Rights and Informed Consent

This article does not contain any studies with human or animal subjects performed by any of the authors.


References and Recommended Reading

Papers of particular interest, published recently, have been highlighted as: • Of importance •• Of major importance

  1. 1.
    Mcroberts R, Tomppo E. Remote sensing support for national forest inventories. Remote Sens Environ. 2007;110:412–9. Scholar
  2. 2.
    Wulder M. Optical remote-sensing techniques for the assessment of forest inventory and biophysical parameters. Prog Phys Geogr. 1998;22:449–76.CrossRefGoogle Scholar
  3. 3.
    Korpela I. Individual tree measurements by means of digital aerial photogrammetry. Silva Fennica Monographs 2004;3:93.Google Scholar
  4. 4.
    Lim K, Treitz P, Wulder M, St-Onge B, Flood M. LiDAR remote sensing of forest structure. Prog Phys Geogr Earth Environ. 2003;27:88–106.CrossRefGoogle Scholar
  5. 5.
    Næsset E. Airborne laser scanning as a method in operational forest inventory: status of accuracy assessments accomplished in Scandinavia. Scand J For Res. 2007;22:433–42.CrossRefGoogle Scholar
  6. 6.
    Kangas A, Gobakken T, Puliti S, Hauglin M, Næsset E. Value of airborne laser scanning and digital aerial photogrammetry data in forest decision making. Silva Fennica 2018;52:1–19.Google Scholar
  7. 7.
    Ginzler C, Hobi ML. Countrywide stereo-image matching for updating digital surface models in the framework of the swiss national forest inventory. Remote Sens. 2015;7:4343–70.CrossRefGoogle Scholar
  8. 8.
    Koch B. Remote sensing supporting national forest inventories. In Knowledge Reference for National Forest Assessments; FAO. 2013:1–18.Google Scholar
  9. 9.
    White JC, Wulder MA, Vastaranta M, Coops NC, Pitt D, Woods M. The utility of image-based point clouds for forest inventory: a comparison with airborne laser scanning. Forests. 2013;4:518–36.CrossRefGoogle Scholar
  10. 10.
    Leberl F, Irschara A, Pock T, Meixner P, Gruber M, Scholz S, et al. Point clouds: Lidar versus 3D vision. Photogramm Eng Remote Sens. 2010;76:1123–34.CrossRefGoogle Scholar
  11. 11.
    Goodbody TRH, Coops NC, White JC. Digital aerial photogrammetry for updating area-based forest inventories: a review of opportunities, challenges, and future directions. Curr For Rep. 2019;5:55–75. Scholar
  12. 12.
    Puliti S, Ørka HO, Gobakken T, Naesset E. Inventory of small forest areas using an unmanned aerial system. Remote Sens. 2015;7:9632–54.CrossRefGoogle Scholar
  13. 13.
    • Giannetti F, Chirici G, Gobakken T, Næsset E, Travaglini D, Puliti S. A new approach with DTM-independent metrics for forest growing stock prediction using UAV photogrammetric data. Remote Sens Environ. 2018;213:195–205. By omitting the prerequisite of DTM availability their approach indicates forward-looking potential for inventories based solely on SfM photogrammetry. CrossRefGoogle Scholar
  14. 14.
    Mikita T, Janata P, Surový P. Forest stand inventory based on combined aerial and terrestrial close-range photogrammetry. Forests. 2016;7:1–14.CrossRefGoogle Scholar
  15. 15.
    • Piermattei L, Karel W, Wang D, Wieser M, Mokroš M, Koreň M, et al. Terrestrial Structure from Motion photogrammetry for deriving forest inventory data. Remote Sens. 2019. This paper deals with the detection of tree characteristics from close-range SfM photogrammetry and TLS in four case studies. The two techniques with respective acquisition and post-processing phases and their source requirements are discussed thoroughly.
  16. 16.
    Mokroš M, Výbošťok J, Tomaštík J, Grznárová A, Valent P, Slavík M, et al. High precision individual tree diameter and perimeter estimation from close-range photogrammetry. Forests. 2018;9.
  17. 17.
    Dash JP, Watt MS, Pearse GD, Heaphy M, Dungey HS. Assessing very high resolution UAV imagery for monitoring forest health during a simulated disease outbreak. ISPRS J Photogramm Remote Sens. 2017;131:1–14.CrossRefGoogle Scholar
  18. 18.
    • Michez A, Piégay H, Lisein J, Claessens H, Lejeune P. Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system. Environ Monit Assess. 2016;188:146. A great example of forest health monitoring at the individual tree level based on SfM-derived structural, spectral and textural variables. CrossRefGoogle Scholar
  19. 19.
    Lehmann JRK, Nieberding F, Prinz T, Knoth C. Analysis of unmanned aerial system-based CIR images in forestry-a new perspective to monitor pest infestation levels. Forests. 2015;6:594–612. Scholar
  20. 20.
    Lisein J, Michez A, Claessens H, Lejeune P. Discrimination of deciduous tree species from time series of unmanned aerial system imagery. PLoS One. 2015;10:1–20.CrossRefGoogle Scholar
  21. 21.
    Alonzo M, Andersen H-E, Morton D, Cook B. Quantifying boreal forest structure and composition using UAV structure from motion. Forests. 2018;9:119.CrossRefGoogle Scholar
  22. 22.
    Saarinen N, Vastaranta M, Näsi R, Rosnell T, Hakala T, Honkavaara E, et al. Assessing biodiversity in boreal forests with UAV-based photogrammetric point clouds and hyperspectral imaging. Remote Sens. 2018;10.
  23. 23.
    Smith MW, Carrivick JL, Quincey DJ. Structure from motion photogrammetry in physical geography. Prog Phys Geogr. 2015;40:247–75.CrossRefGoogle Scholar
  24. 24.
    Eltner A, Kaiser A, Castillo C, Rock G, Neugirg F, Abellán A. Image-based surface reconstruction in geomorphometry-merits, limits and developments. Earth Surf Dyn. 2016;4:359–89. Scholar
  25. 25.
    Bemis SP, Micklethwaite S, Turner D, James MR, Akciz S, Thiele S, et al. Ground-based and UAV-based photogrammetry: a multi-scale, high-resolution mapping tool for structural geology and paleoseismology. J Struct Geol. 2014;69:163–78.CrossRefGoogle Scholar
  26. 26.
    •• Carrivick J, Smith MJ. Quincey D. Structure from motion in the geosciences. 2016. This book further elaborates on the concepts and methods of SfM photogrammetry applied in physical geography. Technical aspects are described with great clarity. Google Scholar
  27. 27.
    Bolles RC, Baker HH, Marimont DH. Epipolar-plane image analysis: an approach to determining structure from motion. Int J Comput Vis. 1987;1:7–55.CrossRefGoogle Scholar
  28. 28.
    Ullman S. The interpretation of structure from motion. Proc R Soc Lond Ser B Biol Sci. 1979;203:405–26.CrossRefGoogle Scholar
  29. 29.
    Koenderink JJ, van Doorn AJ. Affine structure from motion. J Opt Soc Am A. 1991;8:377–85.CrossRefGoogle Scholar
  30. 30.
    Westoby MJ, Brasington J, Glasser NF, Hambrey MJ, Reynolds JM. “Structure-from-motion” photogrammetry: a low-cost, effective tool for geoscience applications. Geomorphology. 2012;179:300–14.CrossRefGoogle Scholar
  31. 31.
    Fonstad MA, Dietrich JT, Courville BC, Jensen JL, Carbonneau PE. Topographic structure from motion: a new development in photogrammetric measurement. Earth Surf Process Landf. 2013;38:421–30.CrossRefGoogle Scholar
  32. 32.
    Micheletti N, Chandler JH, Lane SN. Structure from motion (SfM) photogrammetry. Br Soc Geomorphol. 2015.Google Scholar
  33. 33.
    Skarlatos D, Kiparissi S. Comparison of laser scanning, photogrammetry and Sfm-Mvs pipeline applied in structures and artificial surfaces. ISPRS Ann Photogramm Remote Sens Spat Inf Sci. 2012;I–3:299–304.CrossRefGoogle Scholar
  34. 34.
    Dandois JP, Ellis EC. High spatial resolution three-dimensional mapping of vegetation spectral dynamics using computer vision. Remote Sens Environ. 2013;136:259–76. Scholar
  35. 35.
    Lowe DG. Distinctive image features from Scale-Invariant Keypoints. Int J Comput Vis. 2004;60:91–110.CrossRefGoogle Scholar
  36. 36.
    Shum H-Y, Ke Q, Zhang Z. Efficient bundle adjustment with virtual key frames: a hierarchical approach to multi-frame structure from motion. In: Proceedings. 1999 IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (Cat. No PR00149). IEEE Comput. Soc, pp 538–543.Google Scholar
  37. 37.
    Ahmadabadian AH, Robson S, Boehm J, Shortis M, Wenzel K, Fritsch D. A comparison of dense matching algorithms for scaled surface reconstruction using stereo camera rigs. ISPRS J Photogramm Remote Sens. 2013;78:157–67.CrossRefGoogle Scholar
  38. 38.
    Gini R, Sona G, Ronchetti G, Passoni D, Pinto L. Improving tree species classification using UAS multispectral images and texture measures. ISPRS Int J Geo-Information. 2018;7:315.CrossRefGoogle Scholar
  39. 39.
    Puliti S. Tree-stump detection, segmentation, classification, and measurement using unmanned aerial vehicle (UAV) imagery. Forests. 2018.
  40. 40.
    Puliti S, Gobakken T, Ørka HO, Næsset E. Assessing 3D point clouds from aerial photographs for species-specific forest inventories. Scand J For Res. 2017;32:68–79.Google Scholar
  41. 41.
    Kachamba D, Ørka H, Gobakken T, Eid T, Mwase W, Kachamba DJ, et al. Biomass estimation using 3D data from unmanned aerial vehicle imagery in a tropical woodland. Remote Sens. 2016;8:968.CrossRefGoogle Scholar
  42. 42.
    White JC, Wulder MA, Varhola A, Vastaranta M, Coops NC, Cook BD, et al. A best practices guide for generating forest inventory attributes from airborne laser scanning data using an area-based approach. Canadian Forest Service: Information Report. 2017.
  43. 43.
    •• O’Connor J, Smith MJ, James MR. Cameras and settings for aerial surveys in the geosciences: optimising image data. Prog Phys Geogr. 2017;41:325–44 This paper gives a very clear introduction to aerial image capture. Foundational principles essential to capturing high quality imagery with digital sensors are provided. Cameras and settings are discussed with worked examples. CrossRefGoogle Scholar
  44. 44.
    Mosbrucker AR, Major JJ, Spicer KR, Pitlick J. Camera system considerations for geomorphic applications of SfM photogrammetry. Earth Surf Process Landf. 2017;42:969–86.CrossRefGoogle Scholar
  45. 45.
    Verhoeven GJJ. It’s all about the format – unleashing the power of RAW aerial photography. Int J Remote Sens. 2010;31:2009–42.CrossRefGoogle Scholar
  46. 46.
    James MR, Robson S. Mitigating systematic error in topographic models derived from UAV and ground-based image networks. Earth Surf Process Landf. 2014;39:1413–20.CrossRefGoogle Scholar
  47. 47.
    Wackrow R, Chandler JH. Minimising systematic error surfaces in digital elevation models using oblique convergent imagery. Photogramm Rec. 2011;26:16–31.CrossRefGoogle Scholar
  48. 48.
    Anderson K, Westoby MJ, James MR. Low-budget topographic surveying comes of age: structure from motion photogrammetry in geography and the geosciences. Prog Phys Geogr Earth Environ. 2019;43:163–73.CrossRefGoogle Scholar
  49. 49.
    Anderson K, Gaston KJ. Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Front Ecol Environ. 2013;11:138–46.CrossRefGoogle Scholar
  50. 50.
    Brovkina O, Cienciala E, Surový P, Janata P. Unmanned aerial vehicles (UAV) for assessment of qualitative classification of Norway spruce in temperate forest stands. Geo-Spat Inf Sci. 2018;21:12–20.CrossRefGoogle Scholar
  51. 51.
    Liang X, Kukko A, Hyyppä J, Lehtomäki M, Pyörälä J, Yu X, et al. In-situ measurements from mobile platforms: an emerging approach to address the old challenges associated with forest inventories. ISPRS J Photogramm Remote Sens. 2018;143:97–107.CrossRefGoogle Scholar
  52. 52.
    Thompson ID, Maher SC, Rouillard DP, Fryxell JM, Baker JA. Accuracy of forest inventory mapping: some implications for boreal forest management. For Ecol Manag. 2007;252:208–21.CrossRefGoogle Scholar
  53. 53.
    White JC, Coops NC, Wulder MA, Vastaranta M, Hilker T, Tompalski P. Remote sensing technologies for enhancing forest inventories: a review. Can J Remote Sens. 2016;42:619–41.CrossRefGoogle Scholar
  54. 54.
    Dandois JP, Ellis EC. Remote sensing of vegetation structure using computer vision. Remote Sens. 2010;2:1157–76.CrossRefGoogle Scholar
  55. 55.
    Lisein J, Pierrot-Deseilligny M, Bonnet S, Lejeune P. A photogrammetric workflow for the creation of a forest canopy height model from small unmanned aerial system imagery. Forests. 2013;4:922–44.CrossRefGoogle Scholar
  56. 56.
    Tuominen S, Balazs A, Saari H, Pölönen I, Sarkeala J, Viitala R. Unmanned aerial system imagery and photogrammetric canopy height data in area-based estimation of forest variables. Silva Fenn. 2015.
  57. 57.
    Otero V, Van De Kerchove R, Satyanarayana B, Martínez-Espinosa C, Bin FMA, Bin IMR, et al. Managing mangrove forests from the sky: forest inventory using field data and unmanned aerial vehicle (UAV) imagery in the Matang Mangrove Forest Reserve, peninsular Malaysia. For Ecol Manag. 2018;411:35–45.CrossRefGoogle Scholar
  58. 58.
    Roşca S, Suomalainen J, Bartholomeus H, Herold M. Comparing terrestrial laser scanning and unmanned aerial vehicle structure from motion to assess top of canopy structure in tropical forests. Interface Focus. 2018;8:20170038.CrossRefGoogle Scholar
  59. 59.
    Jayathunga S, Owari T, Tsuyuki S. Evaluating the performance of photogrammetric products using fixed-wing UAV imagery over a mixed conifer-broadleaf forest: comparison with airborne laser scanning. Remote Sens. 2018;10.
  60. 60.
    Lin J, Wang M, Ma M, Lin Y, Lin J, Wang M, et al. Aboveground tree biomass estimation of sparse subalpine coniferous forest with UAV oblique photography. Remote Sens. 2018;10:1849.CrossRefGoogle Scholar
  61. 61.
    Shin P, Sankey T, Moore M, Thode A, Shin P, Sankey T, et al. Evaluating unmanned aerial vehicle images for estimating forest canopy fuels in a ponderosa pine stand. Remote Sens. 2018;10:1266.CrossRefGoogle Scholar
  62. 62.
    Iizuka K, Yonehara T, Itoh M, Kosugi Y. Estimating tree height and diameter at breast height (DBH) from digital surface models and orthophotos obtained with an unmanned aerial system for a Japanese Cypress (Chamaecyparis obtusa) Forest. Remote Sens. 2018;10.
  63. 63.
    Guerra-Hernández J, Cosenza DN, Rodriguez LCE, Silva M, Tomé M, Díaz-Varela RA, et al. Comparison of ALS- and UAV(SfM)-derived high-density point clouds for individual tree detection in Eucalyptus plantations. Int J Remote Sens. 2018;39:5211–35.CrossRefGoogle Scholar
  64. 64.
    Goodbody TRH, Coops NC, Hermosilla T, Tompalski P, Crawford P. Assessing the status of forest regeneration using digital aerial photogrammetry and unmanned aerial systems. Int J Remote Sens. 2018;39:5246–64.CrossRefGoogle Scholar
  65. 65.
    Feduck C, McDermid G, Castilla G, Feduck C, McDermid GJ, Castilla G. Detection of coniferous seedlings in UAV imagery. Forests. 2018;9:432.CrossRefGoogle Scholar
  66. 66.
    • Puliti S, Solberg S, Granhus A. Use of UAV photogrammetric data for estimation of biophysical properties in forest stands under regeneration. Remote Sens. 2019;11:233. This paper demonstrates that high resolution UAV-SfM data can outperform ALS and traditional field data in forest inventory.
  67. 67.
    Næsset E, Økland T. Estimating tree height and tree crown properties using airborne scanning laser in a boreal nature reserve. Remote Sens Environ. 2002;79:105–15.CrossRefGoogle Scholar
  68. 68.
    Brandtberg T. Automatic individual tree based analysis of high spatial resolution aerial images on naturally regenerated boreal forests. Can J For Res. 1999;29:1464–78.CrossRefGoogle Scholar
  69.
    Hyyppä J, Inkinen M. Detecting and estimating attributes for single trees using laser scanner. Photogramm J Finl. 1999;16:27–42.
  70.
    Mohan M, Silva C, Klauberg C, Jat P, Catts G, Cardil A, et al. Individual tree detection from unmanned aerial vehicle (UAV) derived canopy height model in an open canopy mixed conifer forest. Forests. 2017;8:340.
  71.
    Nevalainen O, Honkavaara E, Tuominen S, Viljanen N, Hakala T, Yu X, et al. Individual tree detection and classification with UAV-based photogrammetric point clouds and hyperspectral imaging. Remote Sens. 2017;9:185.
  72.
    Liang X, Kankare V, Hyyppä J, Wang Y, Kukko A, Haggrén H, et al. Terrestrial laser scanning in forest inventories. ISPRS J Photogramm Remote Sens. 2016;115:63–77.
  73.
    Wulder MA, Bater CW, Coops NC, Hilker T, White JC. The role of LiDAR in sustainable forest management. For Chron. 2008;84:807–26.
  74.
    Liang X, Jaakkola A, Wang Y, Hyyppä J, Honkavaara E, Liu J, et al. The use of a hand-held camera for individual tree 3D mapping in forest sample plots. Remote Sens. 2014;6:6587–603.
  75.
    Liang X, Wang Y, Jaakkola A, Kukko A, Kaartinen H, Hyyppä J, et al. Forest data collection using terrestrial image-based point clouds from a handheld camera compared to terrestrial and personal laser scanning. IEEE Trans Geosci Remote Sens. 2015;53:5117–32.
  76.
    Surový P, Yoshimoto A, Panagiotidis D. Accuracy of reconstruction of the tree stem surface using terrestrial close-range photogrammetry. Remote Sens. 2016;8:1–13.
  77.
    Liu J, Feng Z, Yang L, Mannan A, Khan T, Zhao Z, et al. Extraction of sample plot parameters from 3D point cloud reconstruction based on combined RTK and CCD continuous photography. Remote Sens. 2018;10:1299.
  78.
    Mokroš M, Liang X, Surový P, Valent P, Čerňava J, Chudý F. Evaluation of close-range photogrammetry image collection methods for estimating tree diameters. Int J Geo-Inf. 2018;7:93.
  79.
    Morgenroth J, Gomez C. Assessment of tree structure using a 3D image analysis technique – a proof of concept. Urban For Urban Green. 2014;13:198–203.
  80.
    Miller JM. Estimation of individual tree metrics using structure-from-motion photogrammetry. MSc thesis. 2015.
  81.
    Bauwens S, Fayolle A, Gourlet-Fleury S, Ndjele LM, Mengal C, Lejeune P. Terrestrial photogrammetry: a non-destructive method for modelling irregularly shaped tropical tree trunks. Methods Ecol Evol. 2017;8:460–71.
  82.
    Berveglieri A, Tommaselli AMG, Liang X, Honkavaara E. Vertical optical scanning with panoramic vision for tree trunk reconstruction. Sensors (Switzerland). 2017;17:1–13.
  83.
    Huang H, Zhang H, Chen C, Tang L. Three-dimensional digitization of the arid land plant Haloxylon ammodendron using a consumer-grade camera. Ecol Evol. 2018;8:5891–9.
  84.
    Lausch A, Erasmi S, King D, Magdon P, Heurich M. Understanding forest health with remote sensing – part I – a review of spectral traits, processes and remote-sensing characteristics. Remote Sens. 2016;8:1029.
  85.
    Stone C, Mohammed C. Application of remote sensing technologies for assessing planted forests damaged by insect pests and fungal pathogens: a review. Curr For Rep. 2017;3:75–92.
  86.
    Näsi R, Honkavaara E, Lyytikäinen-Saarenmaa P, Blomqvist M, Litkey P, Hakala T, et al. Using UAV-based photogrammetry and hyperspectral imaging for mapping bark beetle damage at tree-level. Remote Sens. 2015;7:15467–93.
  87.
    Minařík R, Langhammer J. Use of a multispectral UAV photogrammetry for detection and tracking of forest disturbance dynamics. Int Arch Photogramm Remote Sens Spat Inf Sci. 2016;41:711–8.
  88.
    Aasen H, Bareth G. Spectral and 3D nonspectral approaches to crop trait estimation using ground and UAV sensing. In: Biophys Biochem Charact Plant Species Stud. CRC Press; 2019. p. 103–32.
  89.
    Hunt ER, Daughtry CST. What good are unmanned aircraft systems for agricultural remote sensing and precision agriculture? Int J Remote Sens. 2018;39:5345–76.
  90.
    Pauly K. Towards calibrated vegetation indices from UAS-derived orthomosaics. In: 13th Int Conf Precis Agric; 2016.
  91.
    Näsi R, Honkavaara E, Blomqvist M, Lyytikäinen-Saarenmaa P, Hakala T, Viljanen N, et al. Remote sensing of bark beetle damage in urban forests at individual tree level using a novel hyperspectral camera from UAV and aircraft. Urban For Urban Green. 2018;30:72–83.
  92.
    Minařík R, Langhammer J. Use of a multispectral UAV photogrammetry for detection and tracking of forest disturbance dynamics. Int Arch Photogramm Remote Sens Spat Inf Sci. 2016;XLI-B8:711–8.
  93.
    Baena S, Moat J, Whaley O, Boyd DS. Identifying species from the air: UAVs and the very high resolution challenge for plant conservation. PLoS One. 2017;12:1–21.
  94.
    Blaschke T. Object based image analysis for remote sensing. ISPRS J Photogramm Remote Sens. 2010;65:2–16.
  95.
    Frey J, Kovach K, Stemmler S, Koch B. UAV photogrammetry of forests as a vulnerable process. A sensitivity analysis for a structure from motion RGB-image pipeline. Remote Sens. 2018;10.
  96.
    Ni W, Sun G, Pang Y, Zhang Z, Liu J, Yang A, et al. Mapping three-dimensional structures of forest canopy using UAV stereo imagery: evaluating impacts of forward overlaps and image resolutions with LiDAR data as reference. IEEE J Sel Top Appl Earth Obs Remote Sens. 2018;11:3578–89.
  97.
    Dandois JP, Olano M, Ellis EC. Optimal altitude, overlap, and weather conditions for computer vision UAV estimates of forest structure. Remote Sens. 2015;7:13895–920.
  98.
    James MR, Robson S, Smith MW. 3-D uncertainty-based topographic change detection with structure-from-motion photogrammetry: precision maps for ground control and directly georeferenced surveys. Earth Surf Process Landf. 2017;42:1769–88.

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Jakob Iglhaut (1) — corresponding author
  • Carlos Cabo (2, 3)
  • Stefano Puliti (4)
  • Livia Piermattei (5, 6)
  • James O’Connor (7)
  • Jacqueline Rosette (1)

  1. Department of Geography, Swansea University, Swansea, UK
  2. CETEMAS, Centro Tecnológico y Forestal de la Madera, Área de Desarrollo Forestal Sostenible, Siero, Spain
  3. Department of Mining Exploitation and Prospecting, Polytechnic School of Mieres, University of Oviedo, Mieres, Spain
  4. Department of National Forest Inventory, Division of Forestry and Forest Resources, Norwegian Institute of Bioeconomy Research (NIBIO), Ås, Norway
  5. Department of Geodesy and Geoinformation, TU Wien, Vienna, Austria
  6. Physical Geography, Catholic University of Eichstätt-Ingolstadt, Eichstätt, Germany
  7. Department of Natural and Built Environment, Kingston University, Surrey, UK