Abstract
Purpose of the Review
Many LiDAR remote sensing studies over the past decade have presented data fusion as a promising avenue to increase accuracy, spatial-temporal resolution, and information extraction in the final data products. Here, we performed a structured literature review of relevant studies published in the last decade, analyzing the main motivations and applications for fusion and the methods used. We discussed the findings with a panel of experts and report important lessons, main challenges, and future directions.
Recent Findings
LiDAR fusion with other datasets, including multispectral, hyperspectral, and radar, is found to be useful for a variety of applications in the literature, both at the individual tree level and at the area level, including tree/crown segmentation, aboveground biomass assessments, canopy height, tree species identification, structural parameters, and fuel load assessments. In most cases, gains are achieved in accuracy (e.g. better tree species classifications) and spatial-temporal resolution (e.g. for canopy height). However, questions remain about whether the marginal improvements reported in a range of studies are worth the extra investment, particularly from an operational point of view. We also provide a clear definition of “data fusion” to help the scientific community distinguish between data fusion, combination, and integration.
Summary
This review provides a positive outlook for LiDAR fusion applications in the decade to come, while raising questions about the trade-off between benefits versus the time and effort needed for collecting and combining multiple datasets.
Introduction
Forest ecosystems are often characterized in terms of structure, composition, and functions [1]. Light Detection and Ranging (LiDAR) remote sensing (RS) has substantially improved our understanding of forest structure around the world in recent decades [2,3,4,5]. LiDAR instruments provide explicit three-dimensional (3D) data that have enabled measurements of forest structure parameters such as canopy height, leaf area index, and diameter at breast height across different scales with unprecedented accuracy [6,7,8].
LiDAR data can be collected from a variety of sensors and platforms, resulting in a range of 3D data types (mostly point clouds), with different point densities, accuracies, and perspectives. Common LiDAR sensors can be mounted on different platforms including ground-based, both fixed and mobile [3, 9], airborne with unoccupied aerial vehicles (UAVs or drones), helicopters, and airplanes [10, 11], and space-based from satellites or the international space station [7, 12, 13]. The cross-scale LiDAR data collection has enabled many applications of tree and forest measurements, including forest inventories and biomass estimates [14, 15], species and habitat classification, biodiversity assessment [16, 17], forest fuel estimates [18] and detailed 3D reconstruction of trees [19, 20].
While LiDAR instruments have developed rapidly and extensively, the data continue to have limitations. For example, ground-based LiDAR data might not record all trees and tree tops due to occlusion [21]. Conversely, airborne and spaceborne LiDAR instruments can measure the top of the canopies and, in some cases, forest vertical structure, but rarely capture stems below canopies [22]. Moreover, LiDAR is specifically used to gather information on vegetation structure, but provides limited information on other important aspects of forest ecosystems, such as composition and functioning. These limitations have resulted in a rapid increase in data fusion approaches, in which data from various instruments can be merged (a multi-sensor approach) to enhance the data and their application potential.
Various definitions of data fusion have been proposed [23, 24]. Here, we focus on multi-source or multi-sensor LiDAR data fusion, defined as “the merging of data or derived features from different sources (instruments/devices), of which at least one is LiDAR data, to improve the information content of the data sources and enable enhanced forest observations”. Multi-sensor data fusion approaches have been deemed useful in overcoming measurement and sampling limitations from the original dataset to the final information extraction [25].
This review paper aims to summarize the current state-of-the-art LiDAR data fusion approaches for forest observations and identify main challenges that need to be addressed to move forward. We consider two levels of multi-sensor data fusion in this review: (1) data-level fusion, and (2) feature-level fusion. In data-level fusion, raw datasets from various sources are combined into one dataset or product (e.g. merging of two LiDAR point clouds, one collected with ground-based LiDAR and the other with an unoccupied aerial vehicle laser scanner (ULS)) [26]. In feature-level fusion, features extracted from various data sources individually are merged into new features or vectors (e.g. merging of structural parameters from LiDAR with coincident spectral parameters from hyperspectral (HS) data to derive a species classification) [27, 28].
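The distinction between the two fusion levels can be illustrated with a minimal NumPy sketch. All array names, shapes, and values below are illustrative assumptions, not data from any cited study: data-level fusion concatenates co-registered raw observations, while feature-level fusion concatenates per-object feature vectors derived separately from each source.

```python
import numpy as np

# --- Data-level fusion: merge two co-registered point clouds into one ---
# Hypothetical N x 3 arrays of (x, y, z) points in the same coordinate system.
tls_points = np.array([[0.0, 0.0, 1.2], [0.1, 0.0, 2.5]])    # ground-based
uls_points = np.array([[0.0, 0.1, 14.8], [0.2, 0.1, 15.3]])  # UAV-borne
fused_cloud = np.vstack([tls_points, uls_points])            # one combined cloud

# --- Feature-level fusion: concatenate per-object feature vectors ---
# Hypothetical per-tree features extracted separately from each source.
lidar_features = np.array([[15.3, 4.2]])      # e.g. height, crown diameter
spectral_features = np.array([[0.71, 0.43]])  # e.g. NDVI, red-edge index
fused_features = np.hstack([lidar_features, spectral_features])

print(fused_cloud.shape)     # (4, 3)
print(fused_features.shape)  # (1, 4)
```

The data-level result is a richer observation of the same scene; the feature-level result is a new feature vector that downstream models (e.g. a species classifier) consume directly.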
This paper includes two major components. The first component provides a structured literature review on LiDAR data fusion addressing the following questions:
- What are the trends in LiDAR data fusion in the last decade?
- What are the main motivations and applications of LiDAR data fusion?
- What are the main methods used to perform data fusion?
- What are the main gains of LiDAR data fusion?
The literature review was then analyzed by a team of 11 international experts to address the following key questions:
- What is ‘data fusion’ and how should this term be used in our community?
- What are the most important lessons learned about data fusion in forest observations?
- What are the main challenges in data fusion for operational applications?
- What should the community focus on to move data fusion forward?
The experts in the team were assembled through the EU COST Action 3DForEcoTech, an EU initiative to bring together experts on LiDAR data for forestry within the EU. An open call was held to solicit scientists interested in collaborating on this literature review. The final team was assembled to encompass all expertise required for addressing the key questions, including scientists with expertise on all types of LiDAR (mobile, terrestrial, airborne, and spaceborne) and on fusion with all common datasets assessed here (multispectral, hyperspectral, and radar).
Structured Literature Review Method
We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach [29, 30]. The following search terms were used in the Web of Science database: LiDAR AND fus* (Topic) and forest* OR tree OR canop* (Topic) and structure OR height OR inventory (Topic). We included literature from the last decade (January 2014 - May 2023), published in English, with a publication status of ‘article’ or ‘review article’. As defined in the introduction, we focused on multi-sensor data fusion; we did not consider studies that combined two datasets from the same sensor collected at different times or locations. By limiting our search to the term ‘data fusion’ and excluding alternative search words such as ‘data integration’ or ‘data combination’ (which may refer to the same process), we demonstrate how ‘data fusion’ has specifically been used in the last decade. In the Discussion sub-section Data fusion, we further discuss the term ‘data fusion’ in relation to other terms with a potentially similar meaning in the LiDAR context.
Literature Search Results
The Web of Science query resulted in 664 papers (Fig. 1). Of these, 407 adhered to the eligibility criteria defined above (2014-2023, English, article or review). The abstracts of these 407 papers were screened by two independent reviewers, who decided whether to include or exclude a paper based on two criteria: (1) some aspect of trees/forests relevant to forestry applications was assessed (papers that solely studied crops, infrastructure, or buildings were excluded), and (2) the fusion had to include LiDAR data. This screening resulted in the 151 papers analyzed in this review.
Extracting Information from Literature
We developed a coding scheme to organize the information in the 151 papers in a comprehensive and understandable fashion that addressed the four main research questions. The coding scheme consisted of five main categories: general information, geographic location, survey area, data characteristics, and survey goals (Table 1). In the category ‘general information’, we included the most pertinent information so that each paper could be retrieved for later analysis. In ‘geographic location’, we included information on the continent and country/countries of the study areas. Regarding ‘survey area’, we included survey scale (i.e. global or local) and forest stand (i.e. type of vegetation surveyed). In ‘data characteristics’, we included information on the LiDAR platform used, as well as the sensor's name and type. We also recorded the datasets that were fused with the LiDAR dataset. Within ‘survey goals’, we included information on the application for which the fusion was used, the motivation (aim) for the fusion (e.g. increasing spatial resolution of a data product), the type of method used to fuse the datasets, and the reported gain of the fusion process.
Trends in Data Fusion Literature
The number of publications concerning LiDAR data fusion for forests demonstrates a slight general upward trend over the last 10 years, especially in 2022 (Fig. 2). LiDAR data from airborne platforms were most commonly used. These airborne platforms include instruments mounted on both UAVs and occupied aircraft. Fusion with data from terrestrial platforms, including terrestrial laser scanners (TLSs) and mobile laser scanners (MLSs), appears to be emerging, starting in 2016. Generally, there has been a slightly increasing trend in the use of spaceborne LiDAR sensors, with papers published in 2016 and 2017 employing data from ICESat/GLAS and papers published after 2018 using data from ICESat-2 and GEDI.
LiDAR data can be fused with data collected from a similar platform (e.g. airborne-airborne) or a different platform (e.g. airborne-spaceborne). Fusion of airborne LiDAR and other airborne data types was the most common type of fusion encountered (45.4%), followed by fusion of LiDAR data from airborne and spaceborne devices (29.8%). Spaceborne LiDAR fused with data from other spaceborne sensors and airborne-terrestrial fusion each accounted for 11.3% of publications, whereas fusion of terrestrial LiDAR with other data from terrestrial platforms was the least common (2.1%) (Table 2).
In terms of geographical representation (Fig. 3), studies from North America (38%), Europe (31%), and Asia (21%) represent 90% of the publications. Of the remaining 10%, half study Australia and half focus on Africa and South America together. In particular, our literature review found very few LiDAR data fusion studies in the southern hemisphere. This pattern is consistent with a review of the geographic distribution of authorship in remote sensing publications [31], documenting that four specific countries, the USA, Italy, Germany, and China, are over-represented, with almost no contributions from South America and Africa. Our literature sample demonstrates that most of the fusion studies in Asia take place in China alone, while other countries such as Iran, India, and Malaysia appear in just one study each.
Main Motivations and Applications of LiDAR Data Fusion
Motivations
Three main motivations for data fusion were found: (1) fusion of data across platforms can enhance the spatial or temporal resolution of the data product; (2) two different LiDAR datasets can be fused to improve point density and/or overcome occlusion, for example, terrestrial and aerial point clouds are fused to better represent both the top and the bottom of the canopy, and subsequently extract structural parameters more accurately [32, 33]; and (3) fusion of LiDAR with data from another sensor, often on the same platform, primarily enriches the existing dataset with additional information, for example, spectral data can be fused with LiDAR data to better estimate above-ground biomass (AGB) or improve tree segmentation.
Applications
In the LiDAR data fusion literature, we find two main streams of applications, at the individual tree level (ITA - Individual Tree Approach) and at the area level (ABA - Area-Based Approach). Among all papers reviewed, 27% focus on ITA, 50% on ABA, 17% on both ITA and ABA, and 6% are review papers. The main applications of LiDAR data fusion at these two levels are divided into seven categories:
1) Classification (tree species/land cover): 29.5% of the papers [27, 28, 34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73] encompassed land cover classification, specifically forest type classification, classification of individual tree species or genus, and forest habitat mapping.
2) Growing stock volume / above-ground biomass: 17.7% of the papers [74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98] are studies in which data fusion was used to improve biomass estimates at both ABA and ITA levels.
3) Forest structure: 15.5% of the papers [11, 13, 32, 33, 99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115] include different datasets fused to improve the extraction of horizontal as well as vertical structure parameters beyond canopy height. This category includes individual tree biometric parameters such as crown diameter, crown length, or base height. On an area-based level, the information derived includes mean crown length, number of vertical layers, gaps, crown coverage, stem density, basal area, and DBH distribution. This category also includes assessment of post-fire forest structure and regeneration.
4) Tree height: 12.7% of the papers [116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133] include canopy height represented by different parameters such as mean height, quantiles, and deviations. Data fusion was applied to generate better estimates of tree height at a single tree level or a stand level, mainly by fusing aerial LiDAR data with other LiDAR platforms.
5) Segmentation: 9.2% of the papers [134,135,136,137,138,139,140,141,142,143,144,145,146,147] delineate individual tree crowns and identify locations of individual trees. In ABA, the segmentation includes delineation of homogeneous forest patches as well as forest stands.
6) Other: 9.1% of the papers [148,149,150,151,152,153,154,155,156,157,158,159,160] include a variety of applications, such as mapping pigment distribution, quantifying taxonomic, functional, and phylogenetic diversity, and estimating tree age.
7) Fuel load: 6.3% of the papers [161,162,163,164,165,166,167,168,169] include applications that deal with fuel load and forest fire modeling.
Methods for LiDAR Data Fusion
The methods used for LiDAR data fusion can generally be divided into two main categories. Data-level fusion studies typically merge datasets from different sensors during the pre-processing stage, before any formal classification or feature extraction occurs, whereas feature-level fusion studies merge post-classification outputs and extracted features from disparate datasets to generate a new dataset. A third level, namely decision-level fusion, exists in the literature, but none of the papers in our literature sample fell into this category [170, 171].
Data-level Fusion
Among all papers we reviewed, 22% performed data-level fusion. Point cloud-to-cloud fusion can be achieved by combining, for example, airborne and terrestrial LiDAR datasets using the reference points acquired in both surveys [19]. TLS typically acquires detailed measurements at a plot-scale, while ULS can obtain measurements across a larger spatial extent at a landscape-scale [26]. The raw datasets can be fused using ground control points (GCPs) or by identifying similar features in the datasets [74, 100] using the same coordinate system acquired through GNSS or total stations. Other studies [26, 112, 162] used manual co-registration by identifying similar features such as the tallest tree, trees with large crowns, or tree locations. These features were used to guide the manual shifting process and to correctly co-register the two datasets. Defining appropriate key points for co-registration is challenging, especially in forest point clouds with few distinct objects, and can become even more complicated in plantation forests where trees share similar characteristics [32]. Some authors suggest using software tools to co-register point clouds based on key points [33] or the Iterative Closest Point (ICP) algorithm [140, 155, 172] in CloudCompare. The quality of the fused data depends on the forest conditions and the data characteristics, namely the number of terrestrial scans and distance of the scanners from the target [115, 173]. Another type of data-level fusion included LiDAR data fusion with spectral bands and indices, where spectral information was projected onto the point cloud [74, 113, 153] using, for example, CloudCompare [74] and FUSION software [113]. Reflective targets help the co-registration of terrestrial images and point clouds, enabling the merging of RGB pixel colors to point locations through co-registration [153].
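The ICP algorithm mentioned above can be sketched in a few lines of NumPy. This is a minimal point-to-point variant under simplifying assumptions (brute-force nearest neighbours, no key-point selection or outlier rejection, a rough initial alignment already in place); operational tools such as CloudCompare's ICP add these refinements. The demo data are synthetic, not from any cited study.

```python
import numpy as np

def icp_point_to_point(src, dst, iters=30):
    """Rigidly align src to dst with a minimal point-to-point ICP.
    Illustrative sketch only: O(N^2) nearest-neighbour search and no
    outlier handling, so it requires a rough initial alignment."""
    src = src.copy()
    for _ in range(iters):
        # 1. nearest-neighbour correspondences (brute force)
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(d, axis=1)]
        # 2. optimal rigid transform for these matches (Kabsch / SVD)
        mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        # 3. apply the transform and iterate
        src = src @ R.T + t
    return src

# Toy demo: a slightly rotated and shifted copy of a random point cloud
rng = np.random.default_rng(0)
dst = rng.random((60, 3)) * 10.0
theta = np.deg2rad(3.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
src = dst @ R.T + np.array([0.3, -0.2, 0.1])
aligned = icp_point_to_point(src, dst)
residual = np.abs(aligned - dst).max()
```

As the text notes, the hard part in forests is not this optimization step but finding reliable correspondences when few distinct objects exist, which is why manual shifting and feature-based key points remain common in practice.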
Feature-level Fusion
A total of 78% of the papers performed feature-level fusion by merging post-classification outputs, rasterized LiDAR-derived products, extracted features, and spectral bands and indices to derive a final output. Feature-level fusion in this context can be broadly categorized into pixel-based fusion and object-based fusion [174]. Pixel-based fusion primarily occurs among airborne platforms and between airborne and satellite platforms, mostly combining LiDAR and spectral data. Many of these studies rasterized the LiDAR data to generate canopy height models (CHM) and digital terrain models (DTM) and layer-stacked these outputs with MS and HS bands as inputs for subsequent classification algorithms [28, 38, 54, 61, 149]. In most of these pixel-based fusion cases, the pre-processing takes place separately, after which the products are combined. For example, hyperspectral data is processed in ENVI, while LiDAR data products are created separately; the combined data stack is then used for classification, often with machine learning methods [28]. Object-based fusion involves direct segmentation at both the individual tree scale and plot scale, followed by fusion based on various features extracted for the objects. For example, LiDAR data can be used to segment individual tree canopies, often with inverse watershed algorithms, and features extracted from spectral data are then added to those segments, essentially creating a new vector-format dataset. The resulting spatial or vector-format outputs were then used, for example, to classify tree species with machine learning methods [47, 66, 75, 102]. Most commonly, feature-level data fusion takes place in a coding environment, such as R packages to segment trees or Python for post-processing the datasets with machine learning algorithms. Readily available software solutions to process different types of data and combine the resulting features seem to be lagging behind.
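The layer-stacking step described above can be sketched with NumPy. The rasters, grid size, and band choices below are illustrative assumptions; in practice each source is pre-processed separately (e.g. HS imagery in ENVI, LiDAR products in FUSION or R) and resampled to a common grid before stacking.

```python
import numpy as np

# Hypothetical rasters on a shared grid (rows x cols)
rows, cols = 4, 5
rng = np.random.default_rng(42)
chm = rng.random((rows, cols)) * 30.0    # LiDAR-derived canopy height (m)
red = rng.random((rows, cols))           # multispectral band
nir = rng.random((rows, cols))           # multispectral band
ndvi = (nir - red) / (nir + red + 1e-9)  # derived spectral index

# Layer-stack into a (rows, cols, bands) cube, then flatten to the
# (n_pixels, n_features) matrix expected by most ML classifiers
stack = np.stack([chm, red, nir, ndvi], axis=-1)
X = stack.reshape(-1, stack.shape[-1])

print(stack.shape)  # (4, 5, 4)
print(X.shape)      # (20, 4)
```

Each row of `X` is one pixel carrying both structural (CHM) and spectral (red, NIR, NDVI) features, which is what makes the subsequent classification a fusion rather than a single-sensor analysis.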
Gains of LiDAR Data Fusion
To examine the gains that LiDAR data fusion brings for each of the application categories outlined above, we examined the studies that directly compared the performance of their methods with and without fusion.
Classification (Tree Species/Land Cover)
Species classification based exclusively on LiDAR data has proven effective in particular circumstances, including when the set of species to be discriminated have contrasting silhouette or stature [45, 59] or when the segmentation addresses broad class separation between evergreen and deciduous species [34]. In our review, when a LiDAR dataset was compared to LiDAR fused with spectral information, overall classification accuracy increased by 41%, on average. Conversely, when fused datasets were compared to spectral information alone, overall accuracy increased by only 10-14%. A few studies reported a beneficial effect of the combined use of LiDAR and spectral information by examining the importance of the various predictors in a Random Forest classification model [63]. Finally, in some cases, LiDAR alone was used at the segmentation step to delineate tree crowns or stands [35, 66]. Vegetation height estimated from LiDAR data fused with MS and HS data enhances the overall accuracy of species classification [28]. However, this generally benefited object-level classification more than pixel-level classification.
Growing Stock Volume and Biomass
Volume and/or AGB assessment requires structural and species information. While LiDAR data provide information about structure, fusion with optical data is often sought for species-specific estimates. Among the papers in this section, data fusion was performed mainly at the ITA (45%) and ABA (50%) levels, and much less at the landscape level (5%). Data fusion at tree-level mostly uses fusion of ground-based and airborne point clouds [77], addressing occlusion issues and enabling extraction of tree attributes such as DBH and total height with greater accuracy. For larger acquisitions in complex terrain, fusion of ULS, photogrammetric point clouds and MS images shows significant improvement in explained variance and error. For example, [75] fused ULS and HS data at the individual tree level, increasing the R2 from 0.75 to 0.89. In [81] (ABA), by fusing ALS and MS data, the authors reduced RMSE from 18.4% (LiDAR alone) and 19% (MS alone) to 16.8%. In [89] (ITA), by fusing RGB and MS data, the authors increased their R2 from 0.77 to 0.81. Plot-level data fusion involved predominantly airborne or spaceborne data, which allowed larger scale assessment. While fusion with ALS mostly consists of combining continuous data over the area of interest [75, 88, 94, 95], applications with spaceborne data mostly consist of upscaling approaches [76, 81,82,83, 87]. In another study [94], fusing ALS and HS data increased R2 from 0.81 to 0.87 for ITA and 0.65 to 0.84 for ABA. In [77] (ITA), the utilization of both TLS-based DBH and ULS-based tree height resulted in a reduced RMSE ranging from 8.6% to 12.7%. These RMSE values compare favorably to the RMSE values of 10.1% to 20.4% when exclusively using TLS and 30.3% to 76.9% when relying solely on ULS.
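The gains reported in this and the following subsections are typically quantified with R² and relative RMSE (RMSE expressed as a percentage of the observed mean). A minimal sketch of these two metrics, using toy numbers rather than values from any cited study:

```python
import numpy as np

def r2_score(obs, pred):
    """Coefficient of determination (R^2)."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return 1.0 - ss_res / ss_tot

def relative_rmse(obs, pred):
    """RMSE expressed as a percentage of the observed mean (RMSE%)."""
    return 100.0 * np.sqrt(np.mean((obs - pred) ** 2)) / np.mean(obs)

# Toy example: a constant 2-unit overestimate of observed AGB values
obs = np.array([10.0, 20.0, 30.0, 40.0])
pred = obs + 2.0

print(round(r2_score(obs, pred), 3))       # 0.968
print(round(relative_rmse(obs, pred), 1))  # 8.0
```

Because RMSE% is normalized by the observed mean, it allows the cross-study comparisons made here (e.g. 18.4% for LiDAR alone versus 16.8% after fusion) even when the studies use different units or stand conditions.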
Forest Structure
The primary objective in fusing ground-based LiDAR with ULS and ALS data is to capitalize on the advantages of ground-based LiDAR, which effectively captures the lower part of the trees, in combination with the strengths of airborne LiDAR, which accurately represents the crowns. In [26], fused TLS and ULS were used to measure tree height, crown projection area (CPA), and crown volume (CV). In estimating height, the RMSE with TLS and ULS alone was 0.30 m and 0.11 m, respectively, while the fused dataset RMSE was 0.05 m. In estimating CPA, the RMSE with TLS and ULS alone was 3.06 m2 and 4.61 m2, respectively, while the fused dataset RMSE was 0.46 m2. Finally, for CV, the RMSE with TLS and ULS alone was 29.63 m3 and 30.23 m3, respectively, while the fused dataset RMSE was 8.30 m3. Another study [32] that fused ground-based LiDAR and ULS observed significant R2 improvements in tree height (9%), stem volume (5%), and crown volume estimates (18%). In [26, 33, 112, 115], there is a strong focus on co-registration issues before individual tree parameters were extracted. Furthermore, [33] achieved enhanced DBH accuracy through TLS and ULS data fusion: an improvement of 2.1% compared to TLS alone and 20.7% compared to ULS alone. [113] fused ALS and MS data and reported improved R2 compared with ALS alone: quadratic mean diameter (from 0.5 to 0.64), basal area (from 0.53 to 0.73), tree height (from 0.92 to 0.94), stem density (from 0.29 to 0.30), and stand density index (from 0.72 to 0.82). Among the papers that use ALS and satellite data, [108] derive total volume and basal area by fusing LiDAR and topographic information (TI). Using LiDAR alone, the R2 is 0.67 for volume and 0.61 for basal area, while fusion with TI increased the R2 to 0.74 and 0.69, respectively. MS-ALS-TI fusion increased the R2 further to 0.85 and 0.84, respectively.
Tree Height
For tree height estimates, 50% of the papers focus on ITA and 50% on ABA. For example, in [126] the spatial resolution of tree-top height estimates was improved by fusing low-density ALS data with high-resolution optical images using a k-NN technique, which allowed tree height estimates for crowns not represented in the LiDAR data. In this paper, it is evident that a greater number of LiDAR points associated with tree crowns enhances the accuracy of tree-top height estimation. With the fusion, they detected 97% of the total trees with an estimated tree-top mean absolute error of 2.45 m (the measured error with LiDAR data alone was 3.70 m). In [122], the benefit of including LiDAR-derived topographic data for estimation of canopy heights from TanDEM-X InSAR data is demonstrated. Furthermore, the use of the full-resolution DTM from the Land, Vegetation, and Ice Sensor (LVIS) instead of the simulated GEDI DTM significantly decreased the RMSE from 4.6 m to 3.5 m, and the bias from 1.8 m to 1.3 m.
Segmentation
In a majority of the literature reviewed, data fusion was mainly used for single tree segmentation, using airborne data [135, 138, 143]. Segmentation challenges, especially for tree-level data, include georeferencing the data products and balancing data with different spatial resolution [138]. At the single-crown level, raw point clouds or point cloud-based metrics are easier to fuse than pixel-based information [139]. The results presented by [135] show a significant difference between fused data versus ALS alone: for low-density forests, the ITA method based on ALS alone correctly detects only 63% of trees, compared to 92% when fusing data from ALS and HS. For high-density forest, fusion detects 70% of the trees compared to 62% with ALS alone. In [137, 143], the authors fused ALS and MS data increasing their segmentation by 2-4% compared to ALS alone. In [138], fusion of ALS and HS increased their segmentation by 5% compared to single sensor accuracy.
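Most ITA pipelines in these studies begin with tree-top detection on a CHM before crown delineation (e.g. by inverse watershed). A toy local-maximum detector, written as a sketch with a hypothetical function name and synthetic data rather than the method of any cited paper:

```python
import numpy as np

def detect_tree_tops(chm, min_height=2.0):
    """Toy local-maximum tree-top detector on a canopy height model.
    Sketch of the detection step that precedes crown delineation;
    operational tools use variable-size search windows tuned to
    expected crown dimensions."""
    # Pad with -inf so border cells can still be local maxima
    padded = np.pad(chm, 1, mode="constant", constant_values=-np.inf)
    tops = []
    for i in range(chm.shape[0]):
        for j in range(chm.shape[1]):
            window = padded[i:i + 3, j:j + 3]  # 3x3 neighbourhood of (i, j)
            if chm[i, j] >= min_height and chm[i, j] == window.max():
                tops.append((i, j))
    return tops

# Toy CHM with two isolated "crowns"
chm = np.zeros((6, 6))
chm[1, 1] = 12.0
chm[4, 4] = 18.0

print(detect_tree_tops(chm))  # [(1, 1), (4, 4)]
```

Fusion enters after this step: spectral features attached to each detected crown help separate adjacent trees that structural data alone cannot, which is consistent with the larger gains reported for low-density forests above.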
Other
The ‘other’ applications included LiDAR data fusion studies focused on wetland/marsh areas, boreal forests, and a natural disaster impact assessment [155, 156, 158]. For example, [158] fused airborne LiDAR with MS imagery to assess forest loss in a wetland zone. They document that forest/non-forest classification accuracy improved from 86-87% to 91-93%, demonstrating a small (~5%) increase in accuracy due to the inclusion of LiDAR metrics. [155] demonstrated that their automatic ALS and TLS point cloud co-registration resulted in a denser point cloud, in which the stems and canopy of individual trees were better represented than in the single LiDAR datasets, but provided no quantitative improvement in the retrieval of canopy/forest/tree information in a boreal forest. [156] developed a method to assess wind damage by fusing ALS and MS imagery. They conclude that adding the structural metrics from ALS to the spectral information provides estimates of structural damage that cannot be retrieved with spectral data alone.
Fuel Load
At a landscape scale, multiple studies have documented that fusing ALS data with Landsat and Sentinel-2 satellite images improves total fuel estimates [168]. Specifically, [161] demonstrated that 24-32% of the remaining variability in surface fuels, uncharacterized by ALS data, can be explained by Landsat NDVI time-series. Furthermore, ALS data combined with Landsat time-series achieve both higher classification accuracy and lower prediction errors in post-fire snag classes and shrub cover estimates [165]. Similarly, airborne MS image-derived NDVI metrics, when fused with ALS, further improved the overall classification accuracy of post-fire regeneration types at stand scale by 10-50% [163]. Similar data fusion studies also predicted canopy fuel variables, such as canopy fuel load (kg/m2), and surface fuel layers (including coarse woody debris biomass) with adjusted R2 ranging between 0.55-0.94 [166]. At the ITA scale, post-fire changes in DBH and biomass can be estimated by fusing MLS data with ULS/ALS, where the below-canopy measurements are enabled by the MLS data [162]. However, a fusion of ALS and TLS data for ITA metrics was recently documented to offer no particular advantage over either sensor used alone [169].
Discussion
The information from the structured literature review was discussed by an international panel of experts in Leiden, the Netherlands, May 11-12, 2023. The panel consisted of 11 scientists with expertise across all LiDAR platforms and their fusion with other datasets across the full range of forestry applications.
What is ‘Data Fusion’ and How Should This Term Be Used?
Through the literature search, it became apparent that there is confusion regarding what should be considered data fusion. Specifically, we found that the terms ‘data fusion’, ‘data combination’, and ‘data integration’ are used inconsistently. For example, we recognize that there are studies that perform data-level or feature-level fusion without calling it such, instead referring to it as data combination [175, 176], data registration [173], or data integration [177, 178]. However, we found that those terms are also commonly used for instances where data fusion as defined here is not actually appropriate. These include, for example, instances where one dataset is used to train a model that makes predictions based on another dataset, which would be considered calibration/validation studies [179,180,181]. We found a few such instances [118, 132] among our data-level and feature-level fusion examples, although these cases were rare.
Based on our literature review of papers that considered (multi-sensor) ‘LiDAR data fusion’, we define data- and feature-level data fusion as: the merging of data or derived features from different sources (instruments/devices), of which at least one is LiDAR, to improve the characteristics of the LiDAR dataset and/or enable enhanced forest observations. The term ‘data integration’ can be reserved for decision-level data fusion, where datasets are only combined to come to a conclusion (decision), but are not used to generate a new dataset or data product as input for further classification [24, 182]. The term ‘data combination’ can be used to indicate the entire process, from data fusion at the pre-processing step through data integration at the decision-making step (Fig. 4).
It is important to note that we only focused on multi-source data fusion, while other forms of data fusion fall outside our scope: multi-temporal data fusion (datasets repeatedly collected at different times with the same sensor), MS-LiDAR (MS data and LiDAR collected at the same time by the same instrument), and co-registration of data from the same instrument (e.g. strip adjustment of ALS data collections and co-registration of TLS point clouds acquired from various points of view to create a forest scene). These types of fusion, though beyond the scope of this review, can still be relevant for monitoring forest growth, species categorization, and identifying tree locations, and could be considered by practitioners.
What are the Most Important Lessons Learned About Data Fusion in Forest Observations?
Our review indicates that all common applications benefit from data fusion. Single-tree segmentation can be improved by fusing spectral information with 2.5D structural information from LiDAR data, especially in low-density forests. Canopy height model results for ITA were slightly improved when LiDAR data were fused with MS images; this application is likely to be most relevant at a local scale, where detailed information about individual trees is required. In growing stock volume or above-ground biomass assessments, data fusion can improve model performance by improving tree species classification. These applications are relevant at local to regional scales, and fusing airborne and spaceborne data expands study areas to larger extents. Tree height or canopy height is accurately retrieved from LiDAR data alone, so there is no real need for data fusion to improve accuracy further, but fusion can extend the spatial and temporal resolution of derived data products. LiDAR data fusion with spectral information, such as MS or HS data, improves tree species classification accuracy compared to using LiDAR data alone: while LiDAR alone can be effective in certain circumstances, adding spectral information significantly enhances the accuracy of species classification models. Fusion of ground-based with airborne LiDAR data improves the assessment of forest structure parameters, including tree density, crown diameter, stem density, and stand volume, because it combines the strengths of both sources, capturing information above and below the canopy layer. Finally, LiDAR data fusion for fuel load estimation has been used to characterize canopy and surface fuels; at a landscape scale, fusing LiDAR data with MS images improves total fuel estimates, the classification accuracy of post-fire snag classes, and the prediction of canopy fuel variables.
In summary, data fusion can further improve the accuracy of a resulting data product or application, and it can improve the spatial and/or temporal resolution of such data products, providing valuable information for practitioners. We note, though, that many of these gains are marginal. It is therefore important to further discuss the operationalization of these methods.
What are the Main Challenges in Data Fusion for Operational Applications?
We identified several challenges with operationalizing data fusion approaches. One fundamental challenge arises from the use of two distinct RS datasets to develop a particular solution. This necessitates acquiring multiple datasets, increasing the overall cost, especially when combining data from independent acquisition platforms, such as ALS and HS data, or when dealing with large spatial extents. Although airborne systems exist that collect data from multiple sensors simultaneously (e.g. LiDAR and MS imagery), data providers must subsequently process the acquired data, leading to additional costs. Data fusion is also a major challenge for the data user, as the effort required to process two or more RS datasets increases significantly: separate processing steps must be developed for each dataset, increasing the overall processing time, and each step must be individually evaluated and quality-checked. To expedite processing, greater computing power becomes essential, which may be difficult to obtain, especially in practical applications. Moreover, the data processing demands specific expertise to ensure methodological correctness; analysts may need additional skills or collaboration with domain specialists to execute the analysis accurately. Both the processing time and the additional equipment and expertise required increase the cost of the analyses and can be a barrier. Another major challenge in data fusion relates to the data itself. Different data sources may differ in resolution, accuracy, and spatial or temporal coverage, which can affect the effectiveness of fusion techniques. If the quality of the data is low or the fusion process is not optimized, fusion might not add substantial benefits or may introduce additional uncertainties.
A prevalent challenge in RS applications is the significant time lag between data collection (e.g. aerial flights) and the delivery of processed results to end users. The larger the surveyed area and the more datasets fused, the longer this takes. It also requires more validation and more rigorous accuracy assessment, which often reveals further deficiencies and errors that need to be addressed. This delay in information provision may render the data obsolete or limit its usefulness in rapidly changing situations, such as insect outbreaks or areas impacted by severe wind or fire damage.
What are the Priorities in Moving Data Fusion Forward?
We find that the RS community can further advance LiDAR data fusion, enabling a wider range of applications from environmental monitoring and resource management to disaster response. Several key areas should be prioritized to propel the applications and methodologies of LiDAR data fusion forward. First, our structured review shows that more studies on LiDAR data fusion are needed in the southern hemisphere to better understand the limitations and advantages of such applications in the extensive rainforests of the global south, which have been relatively underexplored compared to the northern hemisphere. This underrepresentation has important implications because these regions include the large majority of tropical forests, where LiDAR fusion may have many benefits. For example, tropical forests typically include tall trees with several midstory and understory layers of dense canopy, where TLS data fused with ALS data could fully characterize the forest structure. Second, even though improvements from data fusion have been reported for a variety of applications compared to using LiDAR data alone, it remains unclear to what extent these could be operationalized in a forestry setting. More information is required to properly balance the costs of additional data collection, processing, and required expertise against the benefits in accuracy or spatial and temporal resolution. Common data formats with metadata standards need to be established to develop interoperable algorithms and facilitate collaboration among researchers. For example, the number of variables that can be extracted from ALS point clouds is effectively unlimited, and standardizing them remains a challenge. In [183], the authors suggested a list of 10 standard variables within 3 main classes (height, vertical variability, and cover) as a starting point for characterizing vegetation structure.
Similarly, in [184], the authors recommend metrics such as the skewness, kurtosis, or coefficient of variation of vegetation height to describe vegetation structure. Both papers proposed that the data be made available in raster format to standardize subsequent studies and operations. Addressing sensor-specific biases, radiometric differences, and geometric distortions across different data sources is essential to harmonize fused datasets effectively. It is also necessary to develop robust methods to quantify and address uncertainties in data fusion processes, which will boost confidence in the final products, and rigorous validation and benchmarking of data fusion approaches with ground-based accuracy assessment and independent datasets are crucial. Finally, LiDAR data fusion studies should promote open data initiatives and foster collaboration among researchers, institutions, and data providers. This would facilitate access to diverse datasets and accelerate data fusion research, further enabling fusion methods and solutions that can operate in real-time, especially for applications requiring quick and up-to-date information.
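As an illustration of such standardized structure metrics, the vertical-variability descriptors of the kind recommended in [183] and [184] can be computed from normalized ALS heights in a few lines. This is a sketch with hypothetical height values for a single raster cell; the choice of percentile is an arbitrary example, not taken from the cited papers:

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Hypothetical canopy height values (m) from normalized ALS returns in one cell.
heights = np.array([1.2, 3.5, 7.8, 12.1, 15.6, 18.0, 18.4, 19.2, 20.5, 21.0])

# Vertical-variability metrics of the kind proposed for standardized products:
cv = heights.std(ddof=1) / heights.mean()  # coefficient of variation
sk = skew(heights)                         # asymmetry of the height distribution
ku = kurtosis(heights)                     # peakedness (excess kurtosis)
p95 = np.percentile(heights, 95)           # a common upper-canopy height metric

print(round(cv, 3), round(sk, 3), round(ku, 3), round(p95, 3))
```

Computing such metrics per grid cell and writing them to raster bands is precisely the kind of standardized, interoperable product both papers advocate.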
Conclusion
This paper presents a comprehensive review of LiDAR data fusion research for forest observations over the last decade. Our structured review indicates a slight upward trend in the number of publications on LiDAR data fusion for forest observations, and that aerial platforms (both UAVs and conventional airborne platforms) remain the most widely used option. We conclude that multi-sensor LiDAR data fusion has the potential to improve forest observations in a great variety of applications. We suggest a clear definition of the term “data fusion” to avoid confusion among the commonly used terms ‘data fusion’, ‘data combination’, and ‘data integration’. The review further highlights that data fusion poses several challenges, including costs, computational effort and processing times, variability in data quality and spatial resolution, and the need for specialized expertise. Practitioners must therefore carefully weigh the potential benefits of LiDAR data fusion against the actual need for such benefits and the accompanying cost.
Data Availability
No datasets were generated or analysed during the current study.
References
Franklin JF, Spies TA, Van Pelt R, et al. Disturbances and structural development of natural forest ecosystems with silvicultural implications, using Douglas-fir forests as an example. For Ecol Manag. 2002;155:399–423.
Wulder MA, White JC, Nelson RF, Næsset E, Ørka HO, Coops NC, Hilker T, Bater CW, Gobakken T. Lidar sampling for large-area forest characterization: a review. Remote Sens Environ. 2012;121:196–209.
Dassot M, Constant T, Fournier M. The use of terrestrial LiDAR technology in forest science: application fields, benefits and challenges. Ann For Sci. 2011;68:959–74.
Lim K, Treitz P, Wulder M, St-Onge B, Flood M. LiDAR remote sensing of forest structure. Prog Phys Geogr Earth Environ. 2003;27:88–106.
Dubayah RO, Drake JB. Lidar remote sensing for forestry. J For. 2000;98:44–6.
Simard M, Pinto N, Fisher JB, Baccini A. Mapping forest canopy height globally with spaceborne lidar. J Geophys Res Biogeosci. 2011; https://doi.org/10.1029/2011JG001708.
Dubayah R, Blair JB, Goetz S, et al. The global ecosystem dynamics investigation: High-resolution laser ranging of the Earth’s forests and topography. Sci Remote Sens. 2020;1:100002.
Tang H, Dubayah R, Brolly M, Ganguly S, Zhang G. Large-scale retrieval of leaf area index and vertical foliage profile from the spaceborne waveform lidar (GLAS/ICESat). Remote Sens Environ. 2014;154:8–18.
Di Stefano F, Chiappini S, Gorreja A, Balestra M, Pierdicca R. Mobile 3D scan LiDAR: a literature review. Geomatics, Nat Hazards Risk. 2021;12:2387–429.
Kellner JR, Armston J, Birrer M, et al. New opportunities for forest remote sensing through ultra-high-density drone lidar. Surv Geophys. 2019;40:959–77.
Moran CJ, Kane VR, Seielstad CA. Mapping forest canopy fuels in the western United States with LiDAR-Landsat covariance. Remote Sens. 2020;12:1–37.
Narine LL, Popescu SC, Malambo L. Using ICESat-2 to estimate and map forest aboveground biomass: a first example. Remote Sens. 2020;12:1–16.
Nath B, Ni-Meister W. The interplay between canopy structure and topography and its impacts on seasonal variations in surface reflectance patterns in the boreal region of Alaska—implications for surface radiation budget. Remote Sens. 2021; https://doi.org/10.3390/rs13163108.
Wallace L, Lucieer A, Watson C, Turner D. Development of a UAV-LiDAR system with application to forest inventory. Remote Sens. 2012;4:1519–43.
Duncanson L, Kellner JR, Armston J, et al. Aboveground biomass density models for NASA’s Global Ecosystem Dynamics Investigation (GEDI) lidar mission. Remote Sens Environ. 2022; https://doi.org/10.1016/j.rse.2021.112845.
Simonson WD, Allen HD, Coomes DA. Applications of airborne lidar for the assessment of animal species diversity. Methods Ecol Evol. 2014;5:719–29.
Marselis SM, Abernethy K, Alonso A, et al. Evaluating the potential of full-waveform lidar for mapping pan-tropical tree species richness. Glob Ecol Biogeogr. 2020;29:1799–816.
Andersen HE, McGaughey RJ, Reutebuch SE. Estimating forest canopy fuel parameters using LIDAR data. Remote Sens Environ. 2005;94:441–9.
Balestra M, Tonelli E, Vitali A, Urbinati C, Frontoni E, Pierdicca R. Geomatic Data Fusion for 3D Tree Modeling: The Case Study of Monumental Chestnut Trees. Remote Sens. 2023; https://doi.org/10.3390/rs15082197.
Calders K, Adams J, Armston J, et al. Terrestrial laser scanning in forest ecology: Expanding the horizon. Remote Sens Environ. 2020;251:112102.
Liang X, Hyyppä J, Kaartinen H, Lehtomäki M, Pyörälä J, Pfeifer N, Holopainen M, Brolly G, Francesco P, Hackenberg J. International benchmarking of terrestrial laser scanning approaches for forest inventories. ISPRS J Photogramm Remote Sens. 2018;144:137–79.
Windrim L, Bryson M. Detection, segmentation, and model fitting of individual tree stems from airborne laser scanning of forests using deep learning. Remote Sens. 2020; https://doi.org/10.3390/RS12091469.
Schmitt M, Zhu XX. Data Fusion and Remote Sensing: an ever-growing relationship. IEEE Geosci Remote Sens Mag. 2016;4:6–23.
Zhang J. Multi-source remote sensing data fusion: status and trends. Int J Image Data Fusion. 2010;1:5–24.
Ghamisi P, Rasti B, Yokoya N, Wang Q, Hofle B, Bruzzone L, Bovolo F, Chi M, Anders K, Gloaguen R. Multisource and multitemporal data fusion in remote sensing: A comprehensive review of the state of the art. IEEE Geosci Remote Sens Mag. 2019;7:6–39.
Terryn L, Calders K, Bartholomeus H, et al. Quantifying tropical forest structure through terrestrial and UAV laser scanning fusion in Australian rainforests. Remote Sens Environ. 2022; https://doi.org/10.1016/j.rse.2022.112912.
Alonzo M, Bookhagen B, Roberts DA. Urban tree species mapping using hyperspectral and lidar data fusion. Remote Sens Environ. 2014;148:70–83.
Sankey T, Donager J, McVay J, Sankey JB. UAV lidar and hyperspectral fusion for forest monitoring in the southwestern USA. Remote Sens Environ. 2017;195:30–43.
Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med. 2009;151:W-65–94.
Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev. 2021;10:1–11.
Joyce KE, Nakalembe CL, Gómez C, Suresh G, Fickas K, Halabisky M, Kalamandeen M, Crowley MA. Discovering inclusivity in remote sensing: leaving no one behind. Front Remote Sens. 2022;3:1–6.
Fekry R, Yao W, Cao L, Shen X. Ground-based/UAV-LiDAR data fusion for quantitative structure modeling and tree parameter retrieval in subtropical planted forest. For Ecosyst. 2022;9:100065.
Panagiotidis D, Abdollahnejad A, Slavík M. 3D point cloud fusion from UAV and TLS to assess temperate managed forest structures. Int J Appl Earth Obs Geoinf. 2022;112:102917.
Hell M, Brandmeier M, Briechle S, Krzystek P. Classification of tree species and standing dead trees with lidar point clouds using two deep neural networks: PointCNN and 3DmFV-Net. PFG - J Photogramm Remote Sens Geoinf Sci. 2022;90:103–21.
Dutta D, Wang K, Lee E, Goodwell A, Woo DK, Wagner D, Kumar P. Characterizing vegetation canopy structure using airborne remote sensing data. IEEE Trans Geosci Remote Sens. 2017;55:1160–78.
Ruiz LÁ, Recio JA, Crespo-Peremarch P, Sapena M. An object-based approach for mapping forest structural types based on low-density LiDAR and multispectral imagery. Geocarto Int. 2018;33:443–57.
Sun Y, Xin Q, Huang J, Huang B, Zhang H. Characterizing tree species of a tropical wetland in Southern China at the individual tree level based on convolutional neural network. IEEE J Sel Top Appl Earth Obs Remote Sens. 2019;12:4415–25.
Norton CL, Hartfield K, Collins CDH, van Leeuwen WJD, Metz LJ. Multi-temporal LiDAR and hyperspectral data fusion for classification of semi-arid woody cover species. Remote Sens. 2022; https://doi.org/10.3390/rs14122896.
Wernicke J, Seltmann CT, Wenzel R, Becker C, Körner M. Forest canopy stratification based on fused, imbalanced and collinear LiDAR and Sentinel-2 metrics. Remote Sens Environ. 2022; https://doi.org/10.1016/j.rse.2022.113134.
Hartling S, Sagan V, Maimaitijiang M. Urban tree species classification using UAV-based multi-sensor data fusion and machine learning. GIScience Remote Sens. 2021;58:1250–75.
Dash JP, Pearse GD, Watt MS, Paul T. Combining airborne laser scanning and aerial imagery enhances echo classification for invasive conifer detection. Remote Sens. 2017; https://doi.org/10.3390/rs9020156.
Onojeghuo AO, Onojeghuo AR. Object-based habitat mapping using very high spatial resolution multispectral and hyperspectral imagery with LiDAR data. Int J Appl Earth Obs Geoinf. 2017;59:79–91.
Yadav BKV, Lucieer A, Baker SC, Jordan GJ. Tree crown segmentation and species classification in a wet eucalypt forest from airborne hyperspectral and LiDAR data. Int J Remote Sens. 2021;42:7952–77.
Dechesne C, Mallet C, Le Bris A, Gouet-Brunet V. Semantic segmentation of forest stands of pure species combining airborne lidar data and very high resolution multispectral imagery. ISPRS J Photogramm Remote Sens. 2017;126:129–45.
Briechle S, Krzystek P, Vosselman G. Silvi-Net – A dual-CNN approach for combined classification of tree species and standing dead trees from remote sensing data. Int J Appl Earth Obs Geoinf. 2021;98:102292.
Yang J, Jones T, Caspersen J, He Y. Object-based canopy gap segmentation and classification: Quantifying the pros and cons of integrating optical and LiDAR data. Remote Sens. 2015;7:15917–32.
Plakman V, Janssen T, Brouwer N, Veraverbeke S. Mapping species at an individual-tree scale in a temperate forest, using Sentinel-2 images, airborne laser scanning data, and random forest classification. Remote Sens. 2020;12:1–25.
Bigdeli B, Samadzadegan F, Reinartz P. Feature grouping-based multiple fuzzy classifier system for fusion of hyperspectral and LIDAR data. J Appl Remote Sens. 2014;8:083509.
Bigdeli B, Samadzadegan F, Reinartz P. Fusion of hyperspectral and LIDAR data using decision template-based fuzzy multiple classifier system. Int J Appl Earth Obs Geoinf. 2015;38:309–20.
Wu Q, Zhong R, Zhao W, Song K, Du L. Land-cover classification using GF-2 images and airborne lidar data based on Random Forest. Int J Remote Sens. 2019;40:2410–26.
Pervin R, Robeson SM, MacBean N. Fusion of airborne hyperspectral and LiDAR canopy-height data for estimating fractional cover of tall woody plants, herbaceous vegetation, and other soil cover types in a semi-arid savanna ecosystem. Int J Remote Sens. 2022;43:3890–926.
Dashti H, Poley A, Glenn NF, Ilangakoon N, Spaete L, Roberts D, Enterkine J, Flores AN, Ustin SL, Mitchell JJ. Regional scale dryland vegetation classification with an integrated lidar-hyperspectral approach. Remote Sens. 2019; https://doi.org/10.3390/rs11182141.
Su Y, Guo Q, Fry DL, Collins BM, Kelly M, Flanagan JP, Battles JJ. A vegetation mapping strategy for conifer forests by combining airborne LiDAR data and aerial imagery. Can J Remote Sens. 2016;42:1–15.
Hall EC, Lara MJ. Multisensor UAS mapping of plant species and plant functional Types in Midwestern Grasslands. Remote Sens. 2022; https://doi.org/10.3390/rs14143453.
Tong X, Li X, Xu X, Xie H, Feng T, Sun T, Jin Y, Liu X. A two-phase classification of urban vegetation using airborne LiDAR data and aerial photography. IEEE J Sel Top Appl Earth Obs Remote Sens. 2014;7:4153–66.
Jin H, Mountrakis G. Fusion of optical, radar and waveform LiDAR observations for land cover classification. ISPRS J Photogramm Remote Sens. 2022;187:171–90.
Torabzadeh H, Leiterer R, Hueni A, Schaepman ME, Morsdorf F. Tree species classification in a temperate mixed forest using a combination of imaging spectroscopy and airborne laser scanning. Agric For Meteorol. 2019;279:107744.
Esteban J, Fernández-Landa A, Tomé JL, Gómez C, Marchamalo M. Identification of silvicultural practices in Mediterranean forests integrating Landsat time series and a single coverage of ALS data. Remote Sens. 2021; https://doi.org/10.3390/rs13183611.
Trouvé R, Jiang R, Fedrigo M, White MD, Kasel S, Baker PJ, Nitschke CR. Combining environmental, multispectral, and LiDAR data improves forest type classification: a case study on mapping cool temperate rainforests and mixed forests. Remote Sens. 2023; https://doi.org/10.3390/rs15010060.
Kloiber SM, Macleod RD, Smith AJ, Knight JF, Huberty BJ. A semi-automated, multi-source data fusion update of a Wetland inventory for East-Central Minnesota, USA. Wetlands. 2015;35:335–48.
Sankey TT, McVay J, Swetnam TL, McClaran MP, Heilman P, Nichols M. UAV hyperspectral and lidar data and their fusion for arid and semi-arid land vegetation monitoring. Remote Sens Ecol Conserv. 2018;4:20–33.
Szostak M, Knapik K, Wężyk P, Likus-Cieślik J, Pietrzykowski M. Fusing Sentinel-2 imagery and ALS point clouds for defining LULC changes on reclaimed areas by afforestation. Sustainability. 2019; https://doi.org/10.3390/su11051251.
Sun C, Cao S, Sanchez-Azofeifa GA. Mapping tropical dry forest age using airborne waveform LiDAR and hyperspectral metrics. Int J Appl Earth Obs Geoinf. 2019;83:101908.
Wan H, Tang Y, Jing L, Li H, Qiu F, Wu W. Tree species classification of forest stands using multisource remote sensing data. Remote Sens. 2021;13:1–24.
Parra A, Greenberg JA. Estimation of fractional plant lifeform cover for the conterminous United States using Landsat imagery and airborne LiDAR. IEEE Trans Geosci Remote Sens. 2022;60:1–14.
Zhang Z, Kazakova A, Moskal LM, Styers DM. Object-based tree species classification in urban ecosystems using LiDAR and hyperspectral data. Forests. 2016;7:1–16.
Tian X, Zhang X, Wu Y. Classification of planted forest species in southern China with airborne hyperspectral and LiDAR data. J For Res. 2020;25:369–78.
Feng B, Zheng C, Zhang W, Wang L, Yue C. Analyzing the role of spatial features when cooperating hyperspectral and LiDAR data for the tree species classification in a subtropical plantation forest area. J Appl Remote Sens. 2020;14:22213.
Verlič A, Durić N, Kokalj Ž, Marsetič A, Simončič P, Oštir K. Tree species classification using worldview-2 satellite images and laser scanning data in a natural urban forest. Sumar List. 2014;138:477–88.
Szostak M, Pietrzykowski M, Likus-Cieslik J. Reclaimed area land cover mapping using Sentinel-2 imagery and LiDAR point clouds. Remote Sens. 2020; https://doi.org/10.3390/rs12020261.
Tang J, Liang J, Yang Y, Zhang S, Hou H, Zhu X. Revealing the structure and composition of the restored vegetation cover in semi-arid mine dumps based on LiDAR and hyperspectral images. Remote Sens. 2022; https://doi.org/10.3390/rs14040978.
Zhong H, Lin W, Liu H, Ma N, Liu K, Cao R, Wang T, Ren Z. Identification of tree species based on the fusion of UAV hyperspectral image and LiDAR data in a coniferous and broad-leaved mixed forest in Northeast China. Front Plant Sci. 2022; https://doi.org/10.3389/fpls.2022.964769.
Fragoso-Campón L, Quirós E, Mora J, Gutiérrez Gallego JA, Durán-Barroso P. Overstory-understory land cover mapping at the watershed scale: accuracy enhancement by multitemporal remote sensing analysis and LiDAR. Environ Sci Pollut Res. 2020;27:75–88.
Lian X, Zhang H, Xiao W, Lei Y, Ge L, Qin K, He Y, Dong Q, Li L, Han Y. Biomass calculations of individual trees based on unmanned aerial vehicle multispectral imagery and laser scanning combined with terrestrial laser scanning in complex stands. Remote Sens. 2022;14:4715.
Qin H, Zhou W, Yao Y, Wang W. Estimating aboveground carbon stock at the scale of individual trees in subtropical forests using UAV LiDAR and hyperspectral data. Remote Sens. 2021;13:4969.
Chen A, Wang X, Zhang M, Guo J, Xing X, Yang D, Zhang H, Hou Z, Jia Z, Yang X. Fusion of LiDAR and multispectral data for aboveground biomass estimation in Mountain Grassland. Remote Sens. 2023;15:1–18.
Wang Y, Pyörälä J, Liang X, Lehtomäki M, Kukko A, Yu X, Kaartinen H, Hyyppä J. In situ biomass estimation at tree and plot levels: What did data record and what did algorithms derive from terrestrial and aerial point clouds in boreal forest. Remote Sens Environ. 2019;232:111309.
Fatoyinbo T, Armston J, Simard M, et al. The NASA AfriSAR campaign: Airborne SAR and lidar measurements of tropical forest structure and biomass in support of current and future space missions. Remote Sens Environ. 2021; https://doi.org/10.1016/j.rse.2021.112533.
Dalponte M, Frizzera L, Ørka HO, Gobakken T, Næsset E, Gianelle D. Predicting stem diameters and aboveground biomass of individual trees using remote sensing data. Ecol Indic. 2018;85:367–76.
Qi W, Saarela S, Armston J, Ståhl G, Dubayah R. Forest biomass estimation over three distinct forest types using TanDEM-X InSAR data and simulated GEDI lidar data. Remote Sens Environ. 2019;232:111283.
Ediriweera S, Pathirana S, Danaher T, Nichols D. Estimating above-ground biomass by fusion of LiDAR and multispectral data in subtropical woody plant communities in topographically complex terrain in North-eastern Australia. J For Res. 2014;25:761–71.
Campbell MJ, Dennison PE, Kerr KL, Brewer SC, Anderegg WRL. Scaled biomass estimation in woodland ecosystems: testing the individual and combined capacities of satellite multispectral and lidar data. Remote Sens Environ. 2021;262:112511.
Xi X, Han T, Wang C, Luo S, Xia S, Pan F. Forest above ground biomass inversion by fusing GLAS with optical remote sensing data. ISPRS Int J Geo-Information. 2016; https://doi.org/10.3390/ijgi5040045.
Chi H, Sun G, Huang J, Li R, Ren X, Ni W, Fu A. Estimation of forest aboveground biomass in Changbai Mountain region using ICESat/GLAS and Landsat/TM data. Remote Sens. 2017; https://doi.org/10.3390/rs9070707.
Hernando A, Puerto L, Mola-Yudego B, Manzanera JA, García-Abril A, Maltamo M, Valbuena R. Estimation of forest biomass components using airborne lidar and multispectral sensors. IForest. 2019;12:207–13.
de Almeida DRA, Broadbent EN, Ferreira MP, et al. Monitoring restored tropical forest diversity and structure through UAV-borne hyperspectral and lidar fusion. Remote Sens Environ. 2021; https://doi.org/10.1016/j.rse.2021.112582.
Shendryk Y. Fusing GEDI with earth observation data for large area aboveground biomass mapping. Int J Appl Earth Obs Geoinf. 2022;115:103108.
Gao L, Chai G, Zhang X. Above-ground biomass estimation of plantation with different tree species using airborne LiDAR and hyperspectral data. Remote Sens. 2022;14:1–18.
Huang R, Yao W, Xu Z, Cao L, Shen X. Information fusion approach for biomass estimation in a plateau mountainous forest using a synergistic system comprising UAS-based digital camera and LiDAR. Comput Electron Agric. 2022; https://doi.org/10.1016/j.compag.2022.107420.
Wang M, Im J, Zhao Y, Zhen Z. Multi-platform LiDAR for non-destructive individual aboveground biomass estimation for Changbai Larch (Larix olgensis Henry) using a hierarchical Bayesian approach. Remote Sens. 2022; https://doi.org/10.3390/rs14174361.
Jiao Y, Wang D, Yao X, Wang S, Chi T, Meng Y. Forest emissions reduction assessment using optical satellite imagery and space LiDAR fusion for carbon stock estimation. Remote Sens. 2023;15:1–16.
Schreyer J, Tigges J, Lakes T, Churkina G. Using airborne LiDAR and QuickBird data for modelling urban tree carbon storage and its distribution-a case study of Berlin. Remote Sens. 2014;6:10636–55.
Benson ML, Pierce L, Bergen K, Sarabandi K. Model-based estimation of forest canopy height and biomass in the Canadian Boreal forest using radar, LiDAR, and optical remote sensing. IEEE Trans Geosci Remote Sens. 2021;59:4635–53.
Brovkina O, Novotny J, Cienciala E, Zemek F, Russ R. Mapping forest aboveground biomass using airborne hyperspectral and LiDAR data in the mountainous conditions of Central Europe. Ecol Eng. 2017;100:219–30.
Yan M, Xia Y, Yang X, Wu X, Yang M, Wang C, Hou Y, Wang D. Biomass estimation of subtropical arboreal forest at single tree scale based on feature fusion of Airborne LiDAR data and aerial images. Sustainability. 2023;15:1676.
Zhao Y, Ma Y, Quackenbush LJ, Zhen Z. Estimation of individual tree biomass in natural secondary forests based on ALS data and WorldView-3 imagery. Remote Sens. 2022; https://doi.org/10.3390/rs14020271.
Li S, Quackenbush LJ, Im J. Airborne lidar sampling strategies to enhance forest aboveground biomass estimation from Landsat imagery. Remote Sens. 2019; https://doi.org/10.3390/rs11161906.
Dhanda P, Nandy S, Kushwaha SPS, Ghosh S, Murthy YVNK, Dadhwal VK. Optimizing spaceborne LiDAR and very high resolution optical sensor parameters for biomass estimation at ICESat/GLAS footprint level using regression algorithms. Prog Phys Geogr Earth Environ. 2017;41:247–67.
Hardiman BS, Gough CM, Butnor JR, Bohrer G, Detto M, Curtis PS. Coupling fine-scale root and canopy structure using ground-based remote sensing. Remote Sens. 2017;9:1–13.
Pyörälä J, Saarinen N, Kankare V, Coops NC, Liang X, Wang Y, Holopainen M, Hyyppä J, Vastaranta M. Variability of wood properties using airborne and terrestrial laser scanning. Remote Sens Environ. 2019;235:111474.
Shamsoddini A, Trinder JC, Turner R. Paired-data fusion for improved estimation of pine plantation structure. Int J Remote Sens. 2015;36:1995–2009.
Kandare K, Dalponte M, Ørka HO, Frizzera L, Næsset E. Prediction of species-specific volume using different inventory approaches by fusing airborne laser scanning and hyperspectral data. Remote Sens. 2017;9:1–19.
Maack J, Lingenfelder M, Weinacker H, Koch B. Modelling the standing timber volume of Baden-Württemberg—a large-scale approach using a fusion of Landsat, airborne LiDAR and National Forest inventory data. Int J Appl Earth Obs Geoinf. 2016;49:107–16.
Terryn L, Calders K, Bartholomeus H, et al. Quantifying tropical forest structure through terrestrial and UAV laser scanning fusion in Australian rainforests. Remote Sens Environ. 2022; https://doi.org/10.1016/j.rse.2022.112912.
Mohammadi J, Shataee S, Namiranian M, Næsset E. Modeling biophysical properties of broad-leaved stands in the hyrcanian forests of Iran using fused airborne laser scanner data and ultraCam-D images. Int J Appl Earth Obs Geoinf. 2017;61:32–45.
Lang M, Gulbe L, Traškovs A, Stepčenko A. Assessment of different estimation algorithms and remote sensing data sources for regional level wood volume mapping in hemiboreal mixed forests. Balt For. 2016;22:283–96.
Zhu Y, Jeon S, Sung H, Kim Y, Park C, Cha S, Jo HW, Lee WK. Developing UAV-based forest spatial information and evaluation technology for efficient forest management. Sustainability. 2020;12:1–17.
Lahssini K, Teste F, Dayal KR, Durrieu S, Ienco D, Monnet JM. Combining LiDAR metrics and Sentinel-2 imagery to estimate basal area and wood volume in complex forest environment via neural networks. IEEE J Sel Top Appl Earth Obs Remote Sens. 2022;15:4337–48.
Hawryło P, Wężyk P. Predicting growing stock volume of Scots pine stands using Sentinel-2 satellite imagery and airborne image-derived point clouds. Forests. 2018; https://doi.org/10.3390/f9050274.
Gao S, Zhang Z, Cao L. Individual tree structural parameter extraction and volume table creation based on near-field lidar data: a case study in a subtropical planted forest. Sensors. 2021; https://doi.org/10.3390/s21238162.
Manzanera JA, García-Abril A, Pascual C, Tejera R, Martín-Fernández S, Tokola T, Valbuena R. Fusion of airborne LiDAR and multispectral sensors reveals synergic capabilities in forest structure characterization. GIScience Remote Sens. 2016;53:723–38.
Qi Y, Coops NC, Daniels LD, Butson CR. Comparing tree attributes derived from quantitative structure models based on drone and mobile laser scanning point clouds across varying canopy cover conditions. ISPRS J Photogramm Remote Sens. 2022;192:49–65.
Valbuena R, Hernando A, Manzanera JA, Martínez-Falero E, García-Abril A, Mola-Yudego B. Most similar neighbor imputation of forest attributes using metrics derived from combined airborne LIDAR and multispectral sensors. Int J Digit Earth. 2018;11:1205–18.
Apostol B, Petrila M, Lorenţ A, Ciceu A, Gancz V, Badea O. Species discrimination and individual tree detection for predicting main dendrometric characteristics in mixed temperate forests by use of airborne laser scanning and ultra-high-resolution imagery. Sci Total Environ. 2020;698:134074.
Paris C, Kelbe D, Van Aardt J, Bruzzone L. A novel automatic method for the fusion of ALS and TLS LiDAR data for robust assessment of tree crown structure. IEEE Trans Geosci Remote Sens. 2017;55:3679–93.
Su Y, Ma Q, Guo Q. Fine-resolution forest tree height estimation across the Sierra Nevada through the integration of spaceborne LiDAR, airborne LiDAR, and optical imagery. Int J Digit Earth. 2017;10:307–23.
Wang Q, Ni-Meister W. Forest canopy height and gaps from multiangular BRDF, assessed with airborne LiDAR data. Remote Sens. 2019; https://doi.org/10.3390/rs11212566.
Denbina M, Simard M, Hawkins B. Forest height estimation using multibaseline PolInSAR and sparse lidar data fusion. IEEE J Sel Top Appl Earth Obs Remote Sens. 2018;11:3415–33.
Qi W, Lee SK, Hancock S, Luthcke S, Tang H, Armston J, Dubayah R. Improved forest height estimation by fusion of simulated GEDI Lidar data and TanDEM-X InSAR data. Remote Sens Environ. 2019;221:621–34.
Lee WJ, Lee CW. Forest canopy height estimation using multiplatform remote sensing dataset. J Sensors. 2018; https://doi.org/10.1155/2018/1593129.
Xie Y, Fu H, Zhu J, Wang C, Xie Q. A LiDAR-Aided multibaseline PolInSAR method for forest height estimation: with emphasis on dual-baseline selection. IEEE Geosci Remote Sens Lett. 2020;17:1807–11.
Qi W, Dubayah RO. Combining Tandem-X InSAR and simulated GEDI lidar observations for forest structure mapping. Remote Sens Environ. 2016;187:253–66.
Dorado-Roda I, Pascual A, Godinho S, Silva CA, Botequim B, Rodríguez-Gonzálvez P, González-Ferreiro E, Guerra-Hernández J. Assessing the accuracy of GEDI data for canopy height and aboveground biomass estimates in Mediterranean forests. Remote Sens. 2021; https://doi.org/10.3390/rs13122279.
Zhao Y, Im J, Zhen Z, Zhao Y. Towards accurate individual tree parameters estimation in dense forest: optimized coarse-to-fine algorithms for registering UAV and terrestrial LiDAR data. GIScience Remote Sens. 2023;60:2197281.
Luo Y, Qi S, Liao K, Zhang S, Hu B, Tian Y. Mapping the forest height by fusion of ICESat-2 and multi-source remote sensing imagery and topographic information: a case study in Jiangxi Province, China. Forests. 2023; https://doi.org/10.3390/f14030454.
Paris C, Bruzzone L. A three-dimensional model-based approach to the estimation of the tree top height by fusing low-density LiDAR data and very high resolution optical images. IEEE Trans Geosci Remote Sens. 2015;53:467–80.
Pourshamsi M, Garcia M, Lavalle M, Balzter H. A machine-learning approach to PolInSAR and LiDAR data fusion for improved tropical forest canopy height estimation using NASA AfriSAR campaign data. IEEE J Sel Top Appl Earth Obs Remote Sens. 2018;11:3453–63.
Herrero-Huerta M, Felipe-García B, Belmar-Lizarán S, Hernández-López D, Rodríguez-Gonzálvez P, González-Aguilera D. Dense canopy height model from a low-cost photogrammetric platform and LiDAR data. Trees - Struct Funct. 2016;30:1287–301.
Malambo L, Popescu S, Liu M. Landsat-scale regional forest canopy height mapping using ICESat-2 along-track heights: case study of Eastern Texas. Remote Sens. 2023; https://doi.org/10.3390/rs15010001.
Liu X, Su Y, Hu T, Yang Q, Liu B, Deng Y, Tang H, Tang Z, Fang J, Guo Q. Neural network guided interpolation for mapping canopy height of China’s forests by integrating GEDI and ICESat-2 data. Remote Sens Environ. 2022;269:112844.
Chen H, Cloude SR, White JC. Using GEDI waveforms for improved TanDEM-X forest height mapping: a combined sinc + Legendre approach. Remote Sens. 2021;13:1–12.
Wang C, Elmore AJ, Numata I, Cochrane MA, Lei S, Hakkenberg CR, Li Y, Zhao Y, Tian Y. A framework for improving wall-to-wall canopy height mapping by integrating GEDI LiDAR. Remote Sens. 2022;14:1–24.
Swetnam TL, Gillan JK, Sankey TT, McClaran MP, Nichols MH, Heilman P, McVay J. Considerations for achieving cross-platform point cloud data fusion across different dryland ecosystem structural states. Front Plant Sci. 2018;8:1–13.
Yang W, Liu Y, He H, Lin H, Qiu G, Guo L. Airborne LiDAR and photogrammetric point cloud fusion for extraction of urban tree metrics according to street network segmentation. IEEE Access. 2021;9:97834–42.
La HP, Eo YD, Chang A, Kim C. Extraction of individual tree crown using hyperspectral image and LiDAR data. KSCE J Civ Eng. 2014;19:1078–87.
Li Y, Chai G, Wang Y, Lei L, Zhang X. ACE R-CNN: an attention complementary and edge detection-based instance segmentation algorithm for individual tree species identification using UAV RGB images and LiDAR data. Remote Sens. 2022; https://doi.org/10.3390/rs14133035.
Zhen Z, Quackenbush LJ, Zhang L. Impact of tree-oriented growth order in marker-controlled region growing for individual tree crown delineation using airborne laser scanner (ALS) data. Remote Sens. 2013;6:555–79.
Aubry-Kientz M, Laybros A, Weinstein B, Ball J, Jackson T, Coomes D, Vincent G. Multisensor data fusion for improved segmentation of individual tree crowns in dense tropical forests. IEEE J Sel Top Appl Earth Obs Remote Sens. 2021;14:3927–36.
Sumbul G, Cinbis RG, Aksoy S. Multisource region attention network for fine-grained object recognition in remote sensing imagery. IEEE Trans Geosci Remote Sens. 2019;57:4929–37.
Guan H, Zhang J, Ma Q, et al. A novel framework to automatically fuse Multiplatform LiDAR data in forest environments based on tree locations. IEEE Trans Geosci Remote Sens. 2020;58:2165–77.
Dian Y, Pang Y, Dong Y, Li Z. Urban tree species mapping using airborne LiDAR and hyperspectral data. J Indian Soc Remote Sens. 2016;44:595–603.
Man Q, Dong P, Yang X, Wu Q, Han R. Automatic extraction of grasses and individual trees in urban areas based on airborne hyperspectral and LiDAR data. Remote Sens. 2020;12:1–22.
Arenas-Corraliza I, Nieto A, Moreno G. Automatic mapping of tree crowns in scattered-tree woodlands using low-density LiDAR data and infrared imagery. Agrofor Syst. 2020;94:1989–2002.
Lee JH, Biging GS, Fisher JB. An individual tree-based automated registration of aerial images to lidar data in a forested area. Photogramm Eng Remote Sens. 2016;82:699–710.
Zahidi I, Yusuf B, Hamedianfar A, Shafri HZM, Mohamed TA. Object-based classification of QuickBird image and low point density LIDAR for tropical trees and shrubs mapping. Eur J Remote Sens. 2015;48:423–46.
O’Neil-Dunne J, MacFaden S, Royar A. A versatile, production-oriented approach to high-resolution tree-canopy mapping in urban and suburban landscapes using GEOBIA and data fusion. Remote Sens. 2014;6:29.
Kandare K, Ørka HO, Dalponte M, Næsset E, Gobakken T. Individual tree crown approach for predicting site index in boreal forests using airborne laser scanning and hyperspectral data. Int J Appl Earth Obs Geoinf. 2017;60:72–82.
Yin S, Zhou K, Cao L, Shen X. Estimating the horizontal and vertical distributions of pigments in canopies of Ginkgo plantation based on UAV-borne LiDAR, hyperspectral data by coupling PROSAIL model. Remote Sens. 2022; https://doi.org/10.3390/rs14030715.
Kamoske AG, Dahlin KM, Read QD, Record S, Stark SC, Serbin SP, Zarnetske PL. Towards mapping biodiversity from above: Can fusing lidar and hyperspectral remote sensing predict taxonomic, functional, and phylogenetic tree diversity in temperate forests? Glob Ecol Biogeogr. 2022;31:1440–60.
Bulluck L, Lin B, Schold E. Fine resolution imagery and LIDAR-derived canopy heights accurately classify land cover with a focus on shrub/sapling cover in a mountainous landscape. Remote Sens. 2022; https://doi.org/10.3390/rs14061364.
Polewski P, Yao W, Cao L, Gao S. Marker-free coregistration of UAV and backpack LiDAR point clouds in forested areas. ISPRS J Photogramm Remote Sens. 2019;147:307–18.
Li W, Cao S, Campos-Vargas C, Sanchez-Azofeifa A. Identifying tropical dry forests extent and succession via the use of machine learning techniques. Int J Appl Earth Obs Geoinf. 2017;63:196–205.
Dandois JP, Baker M, Olano M, Parker GG, Ellis EC. What is the point? Evaluating the structure, color, and semantic traits of computer vision point clouds of vegetation. Remote Sens. 2017;9:1–20.
Xi Z, Hopkinson C, Rood SB, Barnes C, Xu F, Pearce D, Jones E. A lightweight Leddar optical fusion scanning system (FSS) for canopy foliage monitoring. Sensors. 2019; https://doi.org/10.3390/s19183943.
Dai W, Yang B, Liang X, Dong Z, Huang R, Wang Y, Li W. Automated fusion of forest airborne and terrestrial point clouds through canopy density analysis. ISPRS J Photogramm Remote Sens. 2019;156:94–107.
Gopalakrishnan R, Packalen P, Ikonen VP, Räty J, Venäläinen A, Laapas M, Pirinen P, Peltola H. The utility of fused airborne laser scanning and multispectral data for improved wind damage risk assessment over a managed forest landscape in Finland. Ann For Sci. 2020; https://doi.org/10.1007/s13595-020-00992-8.
Peters D, Niemann O, Skelly R. Remote sensing of ecosystem structure: Fusing passive and active remotely sensed data to characterize a deltaic wetland landscape. Remote Sens. 2020;12:1–25.
Powell EB, Laurent KAS, Dubayah R. Lidar-imagery fusion reveals rapid coastal forest loss in delaware bay consistent with marsh migration. Remote Sens. 2022; https://doi.org/10.3390/rs14184577.
De Lima LC, Ramezani M, Borges P, Brunig M. Air-ground collaborative localisation in forests using lidar canopy maps. IEEE Robot Autom Lett. 2023;8:1818–25.
Kacic P, Hirner A, Da Ponte E. Fusing Sentinel-1 and -2 to model GEDI-derived vegetation structure characteristics in GEE for the Paraguayan Chaco. Remote Sens. 2021;13:1–17.
Bright BC, Hudak AT, Meddens AJH, Hawbaker TJ, Briggs JS, Kennedy RE. Prediction of forest canopy and surface fuels from lidar and satellite time series data in a bark beetle-affected forest. Forests. 2017;8:1–22.
Qi Y, Coops NC, Daniels LD, Butson CR. Assessing the effects of burn severity on post-fire tree structures using the fused drone and mobile laser scanning point clouds. Front Environ Sci. 2022;10. https://doi.org/10.3389/fenvs.2022.949442.
Martín-Alcón S, Coll L, De Cáceres M, Guitart L, Cabré M, Just A, González-Olabarría JR. Combining aerial LiDAR and multispectral imagery to assess postfire regeneration types in a Mediterranean forest. Can J For Res. 2015;45:856–66.
Marino E, Ranz P, Tomé JL, Noriega MÁ, Esteban J, Madrigal J. Generation of high-resolution fuel model maps from discrete airborne laser scanner and Landsat-8 OLI: a low-cost and highly updated methodology for large areas. Remote Sens Environ. 2016;187:267–80.
Vogeler JC, Yang Z, Cohen WB. Mapping post-fire habitat characteristics through the fusion of remote sensing tools. Remote Sens Environ. 2016;173:294–303.
Braziunas KH, Abendroth DC, Turner MG. Young forests and fire: using lidar–imagery fusion to explore fuels and burn severity in a subalpine forest reburn. Ecosphere. 2022;13:1–20.
Liu M, Popescu S. Estimation of biomass burning emissions by integrating ICESat-2, Landsat 8, and Sentinel-1 data. Remote Sens Environ. 2022;280:113172.
Domingo D, de la Riva J, Lamelas MT, García-Martín A, Ibarra P, Echeverría M, Hoffrén R. Fuel type classification using airborne laser scanning and Sentinel-2 data in Mediterranean forest affected by wildfires. Remote Sens. 2020;12:1–22.
Rocha KD, Silva CA, Cosenza DN, et al. Crown-level structure and fuel load characterization from airborne and terrestrial laser scanning in a longleaf pine (Pinus palustris Mill.) forest ecosystem. Remote Sens. 2023; https://doi.org/10.3390/rs15041002.
Zhang L, Zhang L, Du B. Deep learning for remote sensing data: a technical tutorial on the state of the art. IEEE Geosci Remote Sens Mag. 2016;4:22–40.
Zhang S, Meng X, Liu Q, Yang G, Sun W. Feature-decision level collaborative fusion network for hyperspectral and LiDAR classification. Remote Sens. 2023;15:4148.
Tai H, Xia Y, Yan M, Li C, Kong XL. Construction of artificial forest point clouds by laser SLAM technology and estimation of carbon storage. Appl Sci. 2022; https://doi.org/10.3390/app122110838.
Pohjavirta O, Liang X, Wang Y, Kukko A, Pyörälä J, Hyyppä E, Yu X, Kaartinen H, Hyyppä J. Automated registration of wide-baseline point clouds in forests using discrete overlap search. For Ecosyst. 2022;9:100080.
Dalponte M, Ørka HO, Ene LT, Gobakken T, Næsset E. Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data. Remote Sens Environ. 2014;140:306–17.
Arjasakusuma S, Kusuma SS, Phinn S. Evaluating variable selection and machine learning algorithms for estimating forest heights by combining lidar and hyperspectral data. ISPRS Int J Geo-Information. 2020; https://doi.org/10.3390/ijgi9090507.
Machala M, Zejdová L. Forest mapping through object-based image analysis of multispectral and LiDAR aerial data. Eur J Remote Sens. 2014;47:117–31.
Anderson JE, Plourde LC, Martin ME, Braswell BH, Smith ML, Dubayah RO, Hofton MA, Blair JB. Integrating waveform lidar with hyperspectral imagery for inventory of a northern temperate forest. Remote Sens Environ. 2008;112:1856–70.
Guan H, Li J, Chapman M, Deng F, Ji Z, Yang X. Integration of orthoimagery and lidar data for object-based urban thematic mapping using random forests. Int J Remote Sens. 2013;34:5166–86.
Potapov P, Li X, Hernandez-Serna A, Tyukavina A, Hansen MC, Kommareddy A, Pickens A, Turubanova S, Tang H, Silva CE. Mapping global forest canopy height through integration of GEDI and Landsat data. Remote Sens Environ. 2021;253:112165.
Hudak AT, Lefsky MA, Cohen WB, Berterretche M. Integration of lidar and Landsat ETM+ data for estimating and mapping forest canopy height. Remote Sens Environ. 2002;82:397–416.
Latifi H, Fassnacht F, Koch B. Forest structure modeling with combined airborne hyperspectral and LiDAR data. Remote Sens Environ. 2012;121:10–25.
Zhang M, Li W, Tao R, Li H, Du Q. Information fusion for classification of hyperspectral and LiDAR data using IP-CNN. IEEE Trans Geosci Remote Sens. 2021;60:1–12.
Moudrý V, Cord AF, Gábor L, et al. Vegetation structure derived from airborne laser scanning to assess species distribution and habitat suitability: The way forward. Divers Distrib. 2023;29:39–50.
Kissling WD, Shi Y. Which metrics derived from airborne laser scanning are essential to measure the vertical profile of ecosystems? Divers Distrib. 2023;29:1315–20.
Acknowledgements
This article is based upon work from COST Action CA20118, supported by COST (European Cooperation in Science and Technology). Liang, X. would like to acknowledge the financial support from the Natural Science Fund of China (32171789). Cabo, C. would like to acknowledge the support of the UK NERC project [NE/T001194/1]: ‘Advancing 3D Fuel Mapping for Wildfire Behaviour and Risk Mitigation Modelling’ and the Spanish Knowledge Generation project [PID2021-126790NB-I00]: ‘Advancing carbon emission estimations from wildfires applying artificial intelligence to 3D terrestrial point clouds’.
Funding
This work was supported by the COST Action 3DForEcoTech (CA20118), the Natural Science Fund of China (32171789), UK NERC (NE/T001194/1), and the Spanish Knowledge Generation project (PID2021-126790NB-I00).
Author information
Contributions
Conceptualization: H.M., M.S., B.M.; Data curation: M.S., B.M.; Formal analysis: M.S., B.M.; Funding acquisition: M.M., H.M., M.S.; Investigation: M.S., B.M.; Methodology: M.S., B.M.; Project administration: M.M., H.M.; Supervision: H.M., M.S.; Validation: H.M., M.S., B.M.; Visualization: B.M., S.T.; Writing - original draft: B.M., M.S., S.T., L.X., M.M., P.X., S.A., S.K., V.C., V.G., H.M.; Writing - review & editing: B.M., M.S., S.T., L.X., M.M., P.X., S.A., S.K., V.C., V.G., H.M., C.C.
Ethics declarations
Human and Animal Rights and Informed Consent
This article does not contain any studies with human or animal subjects performed by any of the authors.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Balestra, M., Marselis, S., Sankey, T.T. et al. LiDAR Data Fusion to Improve Forest Attribute Estimates: A Review. Curr. For. Rep. 10, 281–297 (2024). https://doi.org/10.1007/s40725-024-00223-7