Abstract
The paper presents the application of the k-means clustering in the process of automated snow extent mapping using orthophoto images generated using the Structure-from-Motion (SfM) algorithm from oblique aerial photographs taken by unmanned aerial vehicle (UAV). A simple classification approach has been implemented to discriminate between snow-free and snow-covered terrain. The procedure uses the k-means clustering and classifies orthophoto images based on the three-dimensional space of red–green–blue (RGB) or near-infrared–red–green (NIRRG) or near-infrared–green–blue (NIRGB) bands. To test the method, several field experiments have been carried out, both in situations when snow cover was continuous and when it was patchy. The experiments have been conducted using three fixed-wing UAVs (swinglet CAM by senseFly, eBee by senseFly, and Birdie by FlyTech UAV) on 10/04/2015, 23/03/2016, and 16/03/2017 within three test sites in the Izerskie Mountains in southwestern Poland. The resulting snow extent maps, produced automatically using the classification method, have been validated against real snow extents delineated through a visual analysis and interpretation offered by human analysts. For the simplest classification setup, which assumes two classes in the k-means clustering, the extent of snow patches was estimated accurately, with areal underestimation of 4.6% (RGB) and overestimation of 5.5% (NIRGB). For continuous snow cover with sparse discontinuities at places where trees or bushes protruded from snow, the agreement between automatically produced snow extent maps and observations was better, i.e. 1.5% (underestimation with RGB) and 0.7–0.9% (overestimation, either with RGB or with NIRRG). Shadows on snow were found to be mainly responsible for the misclassification.
1 Introduction
A key environmental variable that controls snowmelt peak discharges is snow cover (Tekeli et al. 2005), especially in the mountains (Hock et al. 2006). Snow water equivalent (SWE) for a given basin allows us to estimate the volume of water that may be mobilized during snowmelt episodes. Hence, the SWE estimation procedures have a practical potential as they may be used to forecast snowmelt peak flows. Although SWE can be estimated using several remote sensing techniques (Kunzi et al. 1982; Chang et al. 1987; Pulliainen and Hallikainen 2001; Tedesco et al. 2004; Pulliainen 2006; Takala et al. 2011), its direct calculation involves multiplication of snow depth (HS) measurements by snow density (\(\rho \)) estimates (Jonas et al. 2009).
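The direct SWE calculation described above can be sketched in a few lines; the rasters below are hypothetical stand-ins for gridded HS measurements and snow density estimates, and expressing SWE as a water-equivalent depth assumes division by the density of water:

```python
import numpy as np

# Hypothetical gridded inputs: snow depth HS (m) and snow density rho (kg/m^3).
HS = np.array([[0.30, 0.45],
               [0.00, 0.60]])        # snow depth in metres
rho = np.array([[250.0, 260.0],
                [250.0, 270.0]])     # snow density in kg/m^3

rho_water = 1000.0  # density of water, kg/m^3

# SWE as water-equivalent depth in metres: SWE = HS * rho / rho_water
SWE = HS * rho / rho_water
```

Cells with HS = 0 (snow-free terrain) yield SWE = 0 automatically, which is why accurate delineation of snow-free cells matters for basin-wide SWE totals.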
Although it is rather difficult to get spatially continuous estimates of snow density, the HS raster maps can be produced using: interpolation from pointwise data (Erxleben et al. 2002; Dyer and Mote 2006; Pulliainen 2006), terrestrial light detection and ranging (LiDAR) and tachymetric measurements (Prokop 2008; Prokop et al. 2008; Grünewald et al. 2010; Prokop et al. 2015; Schön et al. 2015), airborne LiDAR (Deems et al. 2013) and satellite sensors (Hall et al. 2002; Romanov and Tarpley 2007). The in situ measurements in small basins have recently become substituted by high-resolution snow mapping offered by unmanned aerial vehicles (UAVs). The HS estimation using UAVs is based on applying the DoD procedure, abbreviated after DEM (digital elevation model) of differences, which allows for the subtraction of a snow-free digital surface model (DSM) from a DSM with snow cover (Vander Jagt et al. 2015; de Michele et al. 2016; Bühler et al. 2016; Harder et al. 2016; Bühler et al. 2017; Miziński and Niedzielski 2017). The latter dataset is produced using the Structure-from-Motion (SfM) algorithm run on oblique aerial images acquired by a UAV.
Not only HS, but also snow extent (SE) characterizes spatial distribution of snow. The large-scale estimation of SE is common and is carried out using satellite observations (e.g. Dozier 1989; Rosenthal and Dozier 1996; Robinson and Frei 2000; Molotch et al. 2004). However, the spatial resolution of satellite data constrains the identification of snow patches and is not suitable for evaluating discontinuous snow cover in small basins. The knowledge about SE in small basins is important when using UAV-based HS information from the vicinities of edges of snow-covered terrain. Namely, it may be useful in deciding if small HS values actually correspond to snow-covered terrain or if they are artefacts.
The problem of coarse spatial resolution of satellite-based SE reconstructions can be solved using oblique terrestrial or aerial high-resolution imagery. The UAV-acquired (Zhang et al. 2012a) and vessel-based (Zhang et al. 2012b) sea ice images were utilized to automatically produce “ice” or “no-ice” mosaics, which is a similar task to the search for “snow” or “no-snow” grid cells. In contrast, Julitta et al. (2014) proposed a method for detecting snow-covered terrain on the basis of processing photographs taken by an Earth-fixed camera from the PhenoCam network (phenocam.sr.unh.edu, access date: 19/02/2018). The common idea of the latter three papers was the use of the k-means unsupervised classification to produce a dichotomous SE numerical map. This paper provides further evidence for the applicability of the k-means method in the automated SE reconstruction. Namely, our study complements the three papers by combining several methodological approaches contained therein. Firstly, we use the k-means clustering with more than two classes following the concept of Zhang et al. (2012a, b). We do so to test the potential of the method in detecting shadowed snow cover, and thus our objective conceptually differs from the identification of ice type carried out by Zhang et al. (2012a, b). Secondly, we utilize UAV-acquired aerial images as inputs to the k-means-based snow detection algorithm; thus, we entirely modify the camera location proposed by Julitta et al. (2014) (a terrestrial one-view camera position was replaced by an airborne moving camera which takes overlapping photos to generate the SfM-based orthophotomap which is subsequently processed) and adopt the camera location from Zhang et al. (2012a). Table 1 shows the differences between the approaches employed in this paper and those utilized in the said three articles.
It is apparent from Table 1 that, apart from the above-mentioned differences, none of the three papers discussed in this paragraph uses near-infrared images, which were found useful in the UAV-based HS reconstructions (Bühler et al. 2017; Miziński and Niedzielski 2017).
The objective of this paper is therefore to check the usefulness of the k-means clustering, with two to four classes, for the unsupervised classification of the UAV-acquired visible-light and near-infrared images as well as for their incorporation into the production of numerical SE maps.
2 Data
Numerous UAV flights targeted at a few study areas in the Kwisa River catchment in the Izerskie Mountains (part of the Sudetes, SW Poland) were performed. Two of them were: Rozdroże Izerskie (extensive mountain pass located at 767 m a.s.l., with nearby mountain meadow of size 100 \(\times \) 110 m) and Polana Izerska (mountain meadow of size 250 \(\times \) 170 m, with elevations ranging from 951 to 976 m a.s.l.). Aerial images of snow-covered terrain were acquired to cover three specific conditions: patchy snow cover (Rozdroże Izerskie on 10/04/2015), continuous snow cover (Polana Izerska W on 23/03/2016) and continuous snow cover with signatures of thawing in the vicinity of vegetation (Polana Izerska E on 16/03/2017). The study areas of Rozdroże Izerskie and Polana Izerska along with the selected three test sites are presented in Fig. 1.
Observations in Rozdroże Izerskie were carried out using the fixed-wing UAV named swinglet CAM (produced by senseFly, weight 0.5 kg, wingspan 80 cm), while fieldwork in Polana Izerska was conducted using the other fixed-wing UAVs, namely eBee (manufactured by senseFly, weight 0.7 kg, wingspan 96 cm) and Birdie (manufactured by FlyTech UAV, weight 1.0 kg, wingspan 98 cm). The swinglet CAM drone was equipped with a single camera bay, to which either Canon IXUS 220HS (red–green–blue = RGB) or Canon PowerShot ELPH 300HS (near-infrared–green–blue = NIRGB) cameras were mounted. Similarly, the one-bay eBee drone was equipped with removable cameras: Canon S110 RGB (RGB) or Canon S110 NIR (near-infrared–red–green = NIRRG). In Birdie’s bay, the Parrot Sequoia sensor (high-resolution RGB camera with low-resolution four individual bands: NIR, red-edge = RE, red = R, green = G) was installed. The wavelengths at which the spectral responses of the cameras attain their maxima are juxtaposed in Table 2. Five UAV missions were completed: 1 \(\times \) Rozdroże Izerskie RGB (swinglet CAM), 1 \(\times \) Rozdroże Izerskie NIRGB (swinglet CAM), 1 \(\times \) Polana Izerska W RGB (eBee), 1 \(\times \) Polana Izerska W NIRRG (eBee), 1 \(\times \) Polana Izerska E RGB (Birdie). From the available Parrot Sequoia bands, we used only the high-resolution three-band RGB camera to ensure a similar resolution across all sensors used. The altitudes above takeoff (ATO) of the flights were kept similar, namely 123–151 m, at which height the resolution of the ground surface in each image was approximately 4.1–4.5 cm/px. The UAVs and the data acquisition equipment used during the fieldwork are shown in Fig. 2. Table 3 juxtaposes basic UAV flight parameters and the number of images acquired in each flight.
3 Methods
The SfM algorithm, implemented in Agisoft Photoscan Professional version 1.2.5.2680, was used to produce orthophotomaps. Georeferencing was based on measurements carried out by standard onboard GPS receivers, and the geotagged images were processed in Agisoft Photoscan. We delineated three 100 \(\times \) 100 m orthophoto image squares (Fig. 1). As a result, five fragments of orthophoto images were extracted (\(3\times \) RGB, \(1\times \) NIRGB and \(1\times \) NIRRG). They became inputs to the analysis which aimed at the automated production of SE maps on the basis of the above-mentioned k-means clustering. In addition, they were used to produce reference SE maps, which were prepared by GIS experts who visually inspected the orthophotomaps and digitized terrain covered with snow. For a specific site and specific camera spectrum, the two SE spatial datasets, i.e. automatically and manually produced SE maps, were subsequently compared to validate the performance of the unsupervised classification.
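The validation step, i.e. comparing an automatically produced SE map against its human-digitized reference, reduces to an areal difference between two dichotomous rasters. A minimal sketch, using small hypothetical masks in place of the real SE layers, might look like:

```python
import numpy as np

# Hypothetical dichotomous SE rasters (1 = snow, 0 = no snow); values illustrative only.
automated = np.array([[1, 1, 0],
                      [1, 0, 0],
                      [1, 1, 1]])
manual = np.array([[1, 1, 1],
                   [1, 0, 0],
                   [1, 1, 1]])

auto_area = automated.sum()   # snow cells in the automated map
ref_area = manual.sum()       # snow cells in the human-digitized reference

# Areal over-/underestimation relative to the reference, in percent
# (negative = underestimation, positive = overestimation).
areal_error = 100.0 * (auto_area - ref_area) / ref_area
```

With real data, each cell count would be multiplied by the cell area to obtain extents in square metres, but the relative error is unaffected by that scaling.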
In this section, we use a very simple approach to automatically estimate SE on the basis of orthophoto images produced from photographs taken by UAVs. The full automation is attained through the use of the unsupervised classification. Following the concept of Zhang et al. (2012a, b) and Julitta et al. (2014), the k-means clustering is utilized to discriminate between snow-covered and snow-free terrain.
Figure 3 presents the flowchart of the k-means-based production of SE numerical maps on the basis of UAV-based orthophoto images. The input raster is thus a fragment of the orthophotomap. It can be either RGB or NIRRG or NIRGB spatial data. Such an input raster is split into three 2D arrays (with spatial relations kept), each corresponding to one of three bands. For instance, if RGB data are processed, the first 2D array includes R values, the second 2D array stores G values, while the third 2D array consists of B values. Then, the three arrays are merged so that a single nonspatial array is composed of three rows which are used to store band values produced through flattening of specific 2D arrays. For example, R values in the first row are flattened from the 2D array for the R band, G values in the second row are flattened from the 2D array for the G band, and B values in the third row are flattened from the 2D array for the B band. The nonspatial array becomes the input to the k-means clustering. According to Zhang et al. (2012a, b), we allow more than two classes, which is attained through the use of a loop (Fig. 3).
We use the k-means implementation available in the OpenCV (Open Source Computer Vision) library (opencv.org), in particular OpenCV-Python. The cv2.kmeans function is utilized with the following setup of input parameters: data (RGB or NIRRG or NIRGB orthophoto images of size \(n \times m\) flattened to three R/G/B or NIR/R/G or NIR/G/B vectors, each of length nm, forming the nonspatial array), number of clusters (integers ranging from 2 to 4), iteration termination criteria (maximum number of iterations of 10, iterations break when threshold \(\varepsilon \) attains 1), number of attempts with different initial labels (integer equal to 10), and method of choosing initial centres of classes (random centres are assumed). Although RGB images are the most common in our analysis, we also test the applicability of the k-means method in automated SE mapping using two different NIR cameras (Table 2). The choice of the number of clusters (2, 3, 4) is explained by the need to carry out two specific exercises: detection of “snow” and “no-snow” raster cells (2 classes), identification of “snow” and “no-snow” raster cells with possible “artefact” detection (3 or 4 classes). The artefacts may be of different origins, for instance may be caused by SfM-failures or shadows. The convergence criteria are selected to ensure a reasonable computational time to finish jobs. The random determination of centres avoids preferring particular band configurations, thus reducing potential bias. The output comprises: compactness statistics (sum of squared distances measured from each point to the corresponding centre), labels (vector of length nm with labels to assign individual data to specific clusters coded as [0,1] for 2 classes, [0,1,2] for 3 classes etc.), centres (array of cluster centres, i.e. each centre is represented by three numbers R, G, B or NIR, R, G or NIR, G, B).
Subsequently, the classified arrays containing codes inherited from labels are reshaped into 2D matrices, the number of which is equal to \(k_{\max } - k_{\min } + 1\) (e.g. if we allow \(k=2, 3, 4\), we produce \(4 - 2 + 1 = 3\) reshaped 2D arrays). Then, spatial reference, which needs to be extracted from the input raster at the beginning of the entire procedure, is added to the reshaped arrays. The most straightforward case (2 classes) aims to classify the terrain into snow-free and snow-covered terrain, the latter being the estimate of SE. If orthophoto images include interfering elements, such as for instance SfM artefacts, more than two classes may be assumed to detect such features. The final result is the raster map, each cell of which represents one of k possible values.
The motivation for the proposed method is that SE and HS may be jointly used to refine SWE estimates (Elder et al. 1998). The values of HS quantify snow cover thickness in three dimensions, and therefore they directly contribute to estimations of SWE. However, SE conveys a simple zero/one (no-snow/snow) message solely in planar view, and its role in SWE assessment is indirect and may be sought in refining HS estimates: (1) at edges of snow patches, (2) along lines where continuous snow cover becomes discontinuous and subsequently transits to snow-free terrain or (3) in the vicinity of land cover objects protruding from continuous snow cover. Namely, based on HS maps, snow cover exists if HS \(> 0\). However, for thin snow cover (e.g. in one of the above-mentioned enumerated cases) HS estimation errors may be considerable, with potential insignificance of HS determinations. Having produced accurate SE numerical maps, it is thus possible to superimpose two extents (HS \(> 0\) and SE \(\ne 0\)) and find a real number \(h_0\) such that HS \(> h_0\), leading to the reduction of HS reconstruction uncertainty. As a consequence, SWE estimates may be refined. It is worth noting here that a similar comparison of SE and HS data has recently been carried out by Wang et al. (2017) who assumed HS \(\ge \) 4 cm for the purpose of validating SE maps.
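The superimposition of the two extents described above can be sketched as a simple raster overlay; the HS and SE arrays below are hypothetical, and the threshold \(h_0\) is set to 4 cm only by analogy with Wang et al. (2017):

```python
import numpy as np

# Hypothetical rasters: reconstructed snow depth HS (m) and the dichotomous SE map.
HS = np.array([[0.02, 0.35],
               [0.00, 0.50]])
SE = np.array([[0, 1],
               [0, 1]])

h0 = 0.04  # threshold in metres, cf. the HS >= 4 cm assumed by Wang et al. (2017)

# Superimpose the two extents: keep only cells where HS > h0 AND SE indicates snow.
refined_snow = (HS > h0) & (SE != 0)
```

Cells with small positive HS but SE = 0 (or vice versa) are discarded, which is precisely the uncertainty reduction at snow-patch edges discussed above.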
4 Results
To choose the number of classes (k) for further analysis, a simple exercise was carried out. The differences between the classifications into two, three and four clusters were analyzed against a background of the RGB orthophoto image (Fig. 4). The analysis concerned three test sites. In the first test site (Rozdroże Izerskie on 10/04/2015), snow cover was discontinuous and shadows were cast by trees onto terrain. In the second test site (Polana Izerska W on 23/03/2016), snow cover was continuous and shadows were not present. In the third site (Polana Izerska E on 16/03/2017), snow cover continuously covered the terrain (apart from close vicinities of trees) and evident shadows were cast onto terrain. It is apparent from Fig. 4 that the most natural results were obtained with \(k=2\) (snow/no-snow classes). Misfit was noticed for the two sites in places where shadows were clearly visible in images. Increasing the number of clusters to three produced an additional class which was difficult to interpret. Namely, in Rozdroże Izerskie, the intermediate class either corresponded to shadows cast onto the terrain or captured the ground not covered with snow (light green vegetation, mainly grass), while in Polana Izerska E the intermediate class was found to work well in delineating SE and detecting shadows. The intermediate class was very small for Polana Izerska W, captured sparse tree branches and, importantly, the algorithm did not identify shadows, which were not present (Fig. 4). In the four-cluster case, the interpretation remained complex. While the findings for Rozdroże Izerskie and Polana Izerska E were similar to the three-cluster analysis (two intermediate classes, when combined, had a similar spatial extent to one intermediate class), the results for Polana Izerska W were different (one of the two intermediate classes very significantly overrepresented tree branches, which in the three-cluster case corresponded to a spatially small class).
Although the three- and four-cluster results reveal, in selected cases, some potential in identifying shadow-driven disturbances in SE maps, it is difficult to numerically identify the intermediate classes and their meaning. In particular, our exercises confirm the potential of the three-cluster classification to detect shadows on continuous snow cover, but fail to identify shadows when snow cover is discontinuous. Therefore, in the subsequent analyses the simple dichotomous setup is explored.
Figure 5a, b presents the input RGB and NIRGB orthophoto images of the partially snow-covered (snow patches covered approximately 24–25% of terrain—see Table 4) site of Rozdroże Izerskie along with the corresponding analyses of these orthophotomaps (rows 2–4 of the figure). The SE maps based on manual digitization and automated classification are presented in Fig. 5c–f, with the validation of the automated approach expressed as the difference between the two maps (G–H). The RGB- and NIRGB-based results are similar, with very comparable patterns of SE obtained from manual and automated approaches in the uncovered meadow. Considerable differences were identified along western and eastern edges of the study site of Rozdroże Izerskie, where the forest meets the meadow. Figure 6 presents this effect at a larger cartographic scale. It is apparent from the figure that shadows are responsible for the most evident mismatches; however, the shadow impact is different for RGB and NIRGB analyses. All in all, the k-means procedure applied to the RGB orthophoto image led to the underestimation of the area of snow patches by 4.6% with respect to the human-digitized layer (Table 4). In contrast, if the NIRGB orthophotomap was taken as input, the true area of snow patches was overestimated by the k-means clustering by 5.5% with respect to the digitized data (Table 4). It is worth noting that the sky was clear or only slightly overcast at the time of observations (at the nearby World Meteorological Organization (WMO) station in Liberec, located approximately 32 km from Rozdroże Izerskie, 0 oktas were observed at 9:00 UTC and 2 oktas were recorded at 10:00 UTC). Therefore, the effect of shadows was highly pronounced in the case study from the site of Rozdroże Izerskie on 10/04/2015.
Figure 7 shows the similar analysis for the site of Polana Izerska W on 23/03/2016, when snow cover was continuous (snow occupied approximately 91–92% of terrain—see Table 4). Both RGB- and NIRRG-based SE maps produced using the k-means clustering were found to agree well with their human-digitized analogues. Discrepancies occurred mainly at the contact between the trees and meadow. The overestimations cover only 0.7–0.9% of the human-digitized layer (Table 4). In the case study from the site of Polana Izerska on 23/03/2016, shadows were not present and thus they did not impact the accuracy of the automated mapping procedure. The sky was highly overcast at the time of observations. Indeed, in the nearby WMO station in Liberec, approximately 25 km from Polana Izerska, the following total cloud cover characteristics were recorded: 6 oktas at 13:00 UTC and 8 oktas at 14:00 UTC.
Figure 8 presents the results obtained for the other \(100 \times 100\) m site in Polana Izerska E on 16/03/2017, with continuous snow cover which began to melt around vegetation (snow occupied approximately 95% of terrain—see Table 4). The SE map was generated by the k-means method on the basis of the RGB orthophotomap and was shown to be in agreement with its digitized version produced by GIS experts. The differences were as small as 1.5% of SE inferred by the experts. Although the shadows cast by trees were highly visible on the orthophoto image, their impact on the performance of the automated SE mapping accuracy was not very significant. The sunny spells were only intermittent as the sky was overcast. The total cloud cover at the WMO station in Liberec was 8 oktas at 12:00 UTC. Sunlight reflected from the snow in the southeastern part of the test site, while the scattered cloud cover cast a shadow onto the rest of the area, leading to a considerable reduction of light delivery in the southwestern part of the test site (Fig. 8a). The uneven lighting conditions had no impact on snow detection (Fig. 8b).
To check the performance of the snow mapping approach, we carried out a simple experiment, the aim of which was to evaluate the classification of features with similar spectral responses (lake ice vs. snow) and assess the skills of the investigated clustering method in detecting snow in images with spectrally different characteristics (very thin snow layer with protruding grass). Data for the ice/snow exercise were acquired by the eBee UAV on 29/12/2016 in Polana Izerska, while aerial images for the snow/grass case were collected by eBee on 11/03/2016 in the district of Drożyna in Świeradów-Zdrój (approx. 2.5 km northeastward from Polana Izerska). Orthophotomaps of these areas were produced and squares of sizes \(12 \times 12\) m were extracted so that mainly snow–ice and snow–grass boundaries occurred within each square, respectively. Figure 9 presents the two orthophotos with the corresponding SE maps produced automatically using the k-means method. In most places, the classifier correctly discriminated snow from ice (with one misclassification in the northern, i.e. top-centre, part of the image). In addition, water on ice and water flowing through a dyke was also correctly classified as “no-snow”. Interestingly, snow on ice was successfully detected. Figure 9 also shows good skills of the k-means-based SE mapping method in identifying a thin layer of snow overlying the grass. The mosaic of “snow” and “no-snow” classes agrees well with the orthophoto.
5 Discussion and Conclusions
The tested method for automated SE mapping, based on processing high-resolution orthophoto images produced by the SfM algorithm from UAV-taken photographs, enables us to discriminate between snow-covered and snow-free terrain. The performance of the method is promising because, in the investigated case studies, the errors of estimating the area of snow-covered terrain are between \(-6\%\) and 6%.
Indeed, the spatial extent of highly discontinuous snow cover was underestimated (overestimated) by 4.6% (5.5%) when the RGB (NIRGB) orthophotomap served as input. This acceptable accuracy improved when the method was used to estimate SE of continuous snow cover, with trees or bushes as the only protruding land cover elements. Namely, the areal underestimation (overestimation) of SE for continuous snow cover was 1.5% (0.7–0.9%), with no clear picture of whether the use of the RGB or NIRRG camera influenced the results.
The errors seem to be controlled either by the presence or nonexistence of shadows cast on the surface (driven by lighting conditions) or by the spatial pattern of snow (weather-driven variable that includes both patchy and continuous snow surfaces). The experiment with the use of the k-means method with more than two clusters, previously carried out by Zhang et al. (2012a) for detecting sea ice and ice type, enables us to identify shadows on continuous snow cover when three classes are allowed. Even though the finding has practical implications, further work is needed to judge which classes correspond to shadows.
The impact of shadows on the performance of SE mapping using the k-means method is uneven and depends on how intensive the shadows are with respect to the non-shadowed terrain. The SE maps based on the UAV observations carried out on 10/04/2015, when total cloud cover varied between 0 and 2 oktas, were found to be highly dependent on the shadows. However, the SE map produced from UAV-taken photographs on 16/03/2017, when total cloud cover was approximately equal to 8 oktas but simultaneously rare sunny spells occurred, was not impacted by shadows which were visible in the input orthophotomap. Therefore, it can be argued that the intensity of shadows matters, and only those which produce very high contrasts with respect to their surroundings deteriorate the skills of the automated mapping procedure. Figure 10 presents the responses of individual R/G/B bands for a fragment of the orthophoto image for the test site of Rozdroże Izerskie. The signature of the shadows is particularly well visible in the R band, while the lowest response is observed in the B band.
The pattern of snow, i.e. continuous or discontinuous snow surface, was also found to impact the performance of the proposed SE mapping method. The approach was the most skillful in the process of mapping SE on 23/03/2016 when snow cover was continuous and stable, with air temperatures around 0 \(^{\circ }\)C (Table 3). A slightly worse performance was reported for mapping SE on 16/03/2017 when snow cover was continuous with significant signatures of thawing in the vicinity of vegetation, with air temperatures around 4 \(^{\circ }\)C (Table 3). The worst results, but still within a reasonable \(\pm 6\)% error, were obtained for mapping SE during a snowmelt episode at the beginning of spring on 10/04/2015 when snow cover was highly patchy, with air temperatures around 11–12 \(^{\circ }\)C (Table 3).
It is known that snow distribution is controlled by different processes (or dissimilar intensities of these processes) in open areas and in forests, and this impacts the performance of snow cover mapping methods (e.g. Vikhamar and Solberg 2002). The k-means classifier used in this paper was found to have uneven snow detection skills, depending on the observed surface. If snow distribution observed from a top view is discontinuous (either due to a thin layer of snow or due to the presence of trees) or if the snow signal is disturbed by shadows, the errors of SE mapping are higher. Although our UAV-based snow cover mapping is incomparable with satellite products due to differences in spatial resolution and coverage, there exist certain similarities in the performance of UAV- and satellite-based SE estimates. For instance, the deterioration of snow detection skills was also noticed between forests and open areas when MODIS products are used (Wang et al. 2017). Limitations of optical sensors in forested areas were described by Rittger et al. (2013) who, in the context of assessing MODIS snow cover products, emphasized that some portion of snow under thick canopies cannot be observed, but for moderate canopies the penetration may be possible for some optical sensors.
Although SE layers may be determined from HS reconstructions (SE raster cells for which HS \(> 0\)), which may now be produced using the SfM processing of UAV-taken photographs (e.g. Bühler et al. 2016), our approach is very simple and thus computationally undemanding. We omit HS reconstructions and consider the SE field in two dimensions (a planar view) which ends up with the dichotomous output (1—snow vs. 0—no-snow).
The method outlined in this paper was tested on data collected in three sites, with different UAVs operating over each location. This confirms that the approach is independent of the data acquisition platform. Collecting spatial data using UAVs along with their processing in near-real time is known as “rapid mapping” (Tampubolon and Reinhardt 2015). Our platform-independent solution fits this idea as SE maps can be produced in the field, immediately after spatial data have been collected by a UAV. Information on SE, especially when combined with HS to estimate SWE, is important for water management purposes, since it enables basin managers to assess the risk of snowmelt floods.
References
Bühler, Y., Adams, M. S., Bösch, R., & Stoffel, A. (2016). Mapping snow depth in alpine terrain with unmanned aerial systems (UASs): Potential and limitations. The Cryosphere, 10, 1075–1088.
Bühler, Y., Adams, M. S., Stoffel, A., & Bösch, R. (2017). Photogrammetric reconstruction of homogenous snow surfaces in alpine terrain applying near-infrared UAS imagery. International Journal of Remote Sensing. https://doi.org/10.1080/01431161.2016.1275060.
Chang, A. T. C., Foster, J. L., & Hall, D. K. (1987). Nimbus-7 derived global snow cover parameters. Annals of Glaciology, 9, 39–45.
de Michele, C., Avanzi, F., Passoni, D., Barzaghi, R., Pinto, L., Dosso, P., et al. (2016). Using a fixed-wing UAS to map snow depth distribution: An evaluation at peak accumulation. The Cryosphere, 10, 511–522.
Deems, J. S., Painter, T. H., & Finnegan, D. C. (2013). Lidar measurement of snow depth: A review. Journal of Glaciology, 59, 467–479.
Dozier, J. (1989). Spectral signature of alpine snow cover from the Landsat Thematic Mapper. Remote Sensing of Environment, 28, 9–22.
Dyer, J., & Mote, T. (2006). Spatial variability and trends in observed snow depth over North America. Geophysical Research Letters. https://doi.org/10.1029/2006GL027258.
Elder, K., Rosenthal, W., & Davis, R. E. (1998). Estimating the spatial distribution of snow water equivalence in a montane watershed. Hydrological Processes, 12, 1793–1808.
Erxleben, J., Elder, K., & Davis, R. E. (2002). Comparison of spatial interpolation methods for estimating snow distribution in the Colorado Rocky Mountains. Hydrological Processes, 16, 3627–3649.
Grünewald, T., Schirmer, M., Mott, R., & Lehning, M. (2010). Spatial and temporal variability of snow depth and ablation rates in a small mountain catchment. The Cryosphere, 4, 215–225.
Hall, D. K., Riggs, G. A., Salomonson, V. V., DiGirolamo, N. E., & Bayr, K. J. (2002). MODIS snow-cover products. Remote Sensing of Environment, 83, 181–194.
Harder, P., Schirmer, M., Pomeroy, J., & Helgason, W. (2016). Accuracy of snow depth estimation in mountain and prairie environments by an unmanned aerial vehicle. The Cryosphere, 10, 2559–2571.
Hock, R., Rees, G., Williams, M. W., & Ramirez, E. (2006). Preface contribution from glaciers and snow cover to runoff from mountains in different climates. Hydrological Processes, 20, 2089–2090.
Jonas, T., Marty, C., & Magnusson, J. (2009). Estimating the snow water equivalent from snow depth measurements in the Swiss Alps. Journal of Hydrology, 378, 161–167.
Julitta, T., Cremonese, E., Migliavacca, M., Colombo, R., Galvagno, M., Siniscalco, C., et al. (2014). Using digital camera images to analyse snowmelt and phenology of a subalpine grassland. Agricultural and Forest Meteorology, 198–199, 116–125.
Kunzi, K. F., Patil, S., & Rott, H. (1982). Snow-covered parameters retrieved from NIMBUS-7 SMMR data. IEEE Transactions on Geoscience and Remote Sensing, GE-20, 452–467.
Miziński, B., & Niedzielski, T. (2017). Fully-automated estimation of snow depth in near real time with the use of unmanned aerial vehicles without utilizing ground control points. Cold Regions Science and Technology, 138, 63–72.
Molotch, N. P., Fassnacht, S. R., Bales, R. C., & Helfrich, S. R. (2004). Estimating the distribution of snow water equivalent and snow extent beneath cloud cover in the Salt Verde River basin, Arizona. Hydrological Processes, 18, 1595–1611.
Prokop, A. (2008). Assessing the applicability of terrestrial laser scanning for spatial snow depth measurements. Cold Regions Science and Technology, 54, 155–163.
Prokop, A., Schirmer, M., Rub, M., Lehning, M., & Stocker, M. (2008). A comparison of measurement methods: Terrestrial laser scanning, tachymetry and snow probing for the determination of the spatial snow-depth distribution on slopes. Annals of Glaciology, 49, 210–216.
Prokop, A., Schön, P., Singer, F., Pulfer, G., Naaim, M., Thibert, E., et al. (2015). Merging terrestrial laser scanning technology with photogrammetric and total station data for the determination of avalanche modeling parameters. Cold Regions Science and Technology, 110, 223–230.
Pulliainen, J., & Hallikainen, M. (2001). Retrieval of regional snow water equivalent from space-borne passive microwave observations. Remote Sensing of Environment, 75, 76–85.
Pulliainen, J. (2006). Mapping of snow water equivalent and snow depth in boreal and sub-arctic zones by assimilating space-borne microwave radiometer data and ground-based observations. Remote Sensing of Environment, 101, 257–269.
Rittger, K., Painter, T. H., & Dozier, J. (2013). Assessment of methods for mapping snow cover from MODIS. Advances in Water Resources, 51, 367–380.
Robinson, D. A., & Frei, A. (2000). Seasonal variability of Northern Hemisphere snow extent using visible satellite data. The Professional Geographer, 52, 307–315.
Romanov, P., & Tarpley, D. (2007). Enhanced algorithm for estimating snow depth from geostationary satellites. Remote Sensing of Environment, 108, 97–110.
Rosenthal, W., & Dozier, J. (1996). Automated mapping of montane snow cover at subpixel resolution from the Landsat Thematic Mapper. Water Resources Research, 32, 115–130.
Schön, P., Prokop, A., Vionnet, V., Guyomarc’h, G., Naaim-Bouvet, F., & Heiser, M. (2015). Improving a terrain-based parameter for the assessment of snow depths with TLS data in the Col du Lac Blanc area. Cold Regions Science and Technology, 114, 15–26.
Takala, M., Luojus, K., Pulliainen, J., Derksen, C., Lemmetyinen, J., Kärnä, J.-P., et al. (2011). Estimating Northern Hemisphere snow water equivalent for climate research through assimilation of space-borne radiometer data and ground-based measurements. Remote Sensing of Environment, 115, 3517–3529.
Tampubolon, W., & Reinhardt, W. (2015). UAV data processing for rapid mapping activities. ISPRS Archives, XL-3/W3, 371–377.
Tedesco, M., Pulliainen, J., Takala, M., Hallikainen, M., & Pampaloni, P. (2004). Artificial neural network-based techniques for the retrieval of SWE and snow depth from SSM/I data. Remote Sensing of Environment, 90, 76–85.
Tekeli, A. E., Akyürek, Z., Şorman, A. A., Şensoy, A., & Şorman, A. Ü. (2005). Using MODIS snow cover maps in modeling snowmelt runoff process in the eastern part of Turkey. Remote Sensing of Environment, 97, 216–230.
Vander Jagt, B., Lucieer, A., Wallace, L., Turner, D., & Durand, M. (2015). Snow depth retrieval with UAS using photogrammetric techniques. Geosciences, 5, 264–285.
Vikhamar, D., & Solberg, R. (2002). Subpixel mapping of snow cover in forests by optical remote sensing. Remote Sensing of Environment, 84, 69–82.
Wang, X., Zhu, Y., Chen, Y., Zheng, H., Liu, H., Huang, H., et al. (2017). Influences of forest on MODIS snow cover mapping and snow variations in the Amur River basin in Northeast Asia during 2000–2014. Hydrological Processes, 31, 3225–3241.
Zhang, Q., Skjetne, R., Løset, S., & Marchenko, A. (2012a). Digital image processing for sea ice observations in support to Arctic DP operations. In: ASME 2012 31st International Conference on Ocean, Offshore and Arctic Engineering, American Society of Mechanical Engineers, pp. 555–561.
Zhang, Q., van der Werff, S., Metrikin, I., Løset, S., & Skjetne, R. (2012b). Image processing for the analysis of an evolving broken-ice field in model testing. In: ASME 2012 31st International Conference on Ocean, Offshore and Arctic Engineering, American Society of Mechanical Engineers, pp. 597–605.
Acknowledgements
The research was financed by the National Centre for Research and Development, Poland, through Grant No. LIDER/012/223/L-5/13/NCBR/2014. The mobile Laboratory for Unmanned Aerial Observations of Earth of the University of Wrocław, including UAVs and the associated equipment, was financed by several Polish institutions: the National Centre for Research and Development (LIDER/012/223/L-5/13/NCBR/2014), the Ministry of Science and Higher Education (IP2014 032773), the National Science Centre (2011/01/D/ST10/04171), and the University of Wrocław (statutory funds). The authors thank Joanna Remisz and Jacek Ślopek for technical help in operating UAVs. The authors kindly acknowledge the authorities of Świeradów Forest Inspectorate, Poland, for productive partnership and support. The authors are particularly grateful to Mr. Lubomir Leszczyński and Ms. Katarzyna Męcina for their support in fieldwork management. We used ArcGIS with Python, equipped with the arcpy, sys, numpy, and cv2 libraries. The SYNOP data, collected by the Institute of Meteorology and Water Management–National Research Institute (Instytut Meteorologii i Gospodarki Wodnej—Państwowy Instytut Badawczy; IMGW–PIB), were acquired from http://www.ogimet.com.
Handling Editor: Dr. Ismail Gultepe.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Niedzielski, T., Spallek, W. & Witek-Kasprzak, M. Automated Snow Extent Mapping Based on Orthophoto Images from Unmanned Aerial Vehicles. Pure Appl. Geophys. 175, 3285–3302 (2018). https://doi.org/10.1007/s00024-018-1843-8