
1 Introduction

Due to geographical and environmental factors, Taiwan frequently experiences severe slope disasters during heavy rainfall and earthquakes (Kaima et al. 2000; Wu et al. 2014). Slope failures can be classified by mode of movement into fall, topple, slide, slump, flow, and creep (Geoscience Australia). Most slope failures can further be categorized as deep-seated or shallow-seated. Shallow-seated failures typically occur on steep slopes with poorly cohesive geological materials, sparse vegetation cover, or pre-existing landslide deposits. Rainfall often triggers shallow-seated sliding: infiltrating rainwater alters the unsaturated soil's hydraulic characteristics, changing fluid movement and distribution, increasing soil moisture content, and decreasing matric suction within the substrate. Consequently, the stability of the soil mass is reduced (Lu and Godt 2008; Bordoni et al. 2015; Cho 2016, 2017; Jeong et al. 2017).

Furthermore, deep-seated slope failures represent another significant area of concern (Varnes 1978). The mechanisms leading to large-scale collapses are complex, influenced primarily by geological structures and rock layer distributions, including dip slopes, strike slopes, and oblique slopes. The causes of large-scale collapses can generally be attributed to seismic activity and rainfall. For rainfall-induced collapses, the physical mechanisms resemble those of shallow-seated failures: rainwater infiltrating the subsurface raises pore water pressure within the soil and rock layers and reduces the effective stress of the surface soil and rock near the failure plane, which is the primary contributing factor.

In order to evaluate slope stability, appropriate monitoring instruments are required (Segoni et al. 2018). Typical instruments currently used for slope monitoring are inclinometers, crack gauges, and piezometers. Although these instruments offer high accuracy, reliability, and ease of measurement, they are difficult to drill and install in rugged terrain.

According to the studies by Kromer et al. (2019), Kim and Gratchev (2021), and Núñez-Andrés et al. (2023), close-range photogrammetry is highly effective for rockfall monitoring on rock slopes, with precision reaching the millimeter level. For precise, stable observation of localized slopes over extended periods, close-range photogrammetry provides excellent monitoring quality, reduces staffing requirements, and offers immediate monitoring and early warnings. Consequently, this study develops a long-term, stable slope monitoring system based on close-range photogrammetry.

Meanwhile, thermal imaging is increasingly used for slope disaster prevention and mitigation alongside optical image analysis (Pappalardo et al. 2021; Yu et al. 2023). Unlike visible-light imagery, thermal imaging cannot provide detailed displacement information about objects; it can, however, furnish temperature data, allowing monitors to track regional temperature variations. Frodella et al. (2014) examined the usefulness of infrared thermal imaging for rock slope monitoring. Thermal images of diverse rock slope features (including structural discontinuities, cracks, steep slopes, and seepage) were acquired at different intervals via aerial and ground-based photography. The investigation revealed that unstable regions on slopes, stemming from geographical influences, showed notable temperature differences in thermal images taken at different times. Consequently, combining optical image analysis, which preserves the slope's geometric information, with thermal image records that detect concealed unstable areas can produce a more comprehensive monitoring dataset.

Based on these concerns, the main goal of this study is to utilize computer vision technology to merge thermal and optical imaging, superimposing temperature data from thermal images onto optical images to analyze slope conditions (Chen 2022). Additionally, the study develops equipment for practical on-site application of this method and explores AI-assisted interpretation (Tseng 2023). Beyond analyzing slope displacement, this approach enables temperature difference analysis through thermal imaging, facilitating the identification of potential hazard zones behind the slope and providing valuable assistance for future planning and tracking efforts.

After completing the development of the on-site equipment, the monitoring instruments undergo evaluation via testing with a scaled-down model of a reverse slope. As a demonstration, a monitoring system is installed on the rear slope of National Central University, and multiple sets of images are analyzed to assess slope displacement. The research monitors slope stability by post-processing three-dimensional information models. The experimental results show accuracy within 3 cm, and preliminary testing of AI-assisted interpretation demonstrated potential for detecting displacement. These outcomes provide practical decision-making data for early warning, prevention, and post-disaster planning of slope disasters.

2 Instrument Design and Image Analysis Process

The instrumentation assembly plan for this study is based on establishing the target tasks and comprises four main components: hardware selection, instrument casing design, image change detection, and operational scripts. The hardware assembly design will be adjusted to ensure the monitoring operations run smoothly for photogrammetry. The design of the instrument casing will be revised based on practical assembly and imaging work to ensure safe and proper equipment operation.

Various training datasets will be compiled to enhance the accuracy of image change detection in diverse scenarios, such as changing scenes, lighting conditions, and viewing angles. Experiments will be performed regarding operational scripts to increase instrument automation and display outcomes clearly and straightforwardly.

The Raspberry Pi is a microcomputer developed by the Raspberry Pi Foundation (Halfacree and Upton 2012). It has grown in popularity as a miniature single-board controller in recent years due to its small size, convenience, and strong development ecosystem. The basic hardware architecture of the surveillance system will therefore comprise a Raspberry Pi microcontroller, an optical camera module (Raspberry Pi Camera Module V2), and an infrared thermal camera module (FLIR Lepton 2.5). Because of the cameras' low pixel counts, numerous photos must be captured to enhance model quality. A track-driven camera setup will be designed for this purpose: the camera will be mounted on a 3D-printed track, and GA12-N20 geared motors with an L298N motor driver will move the camera along the track, enabling image capture from various positions. The protective casing for the electronic equipment will be 3D-printed to ensure durability and reliability in natural environments (Fig. 1). The power supply will use lead-acid batteries and solar panels, allowing the monitoring equipment to operate for extended periods in the field on renewable energy (Chen 2022; Tseng 2023).

Fig. 1
A photograph of a 3-d printed rectangular casing with optical and thermal cameras.

Side view of electronic equipment and casing, as well as the optical and thermal cameras

The process starts by capturing images with the Raspberry Pi hardware, followed by system preprocessing, which comprises camera lens calibration and construction of a 3D point cloud model. After the model is built, Python Open3D will be used for point cloud displacement analysis, along with commercially available software such as Pix4D Mapper and CloudCompare. Two-dimensional image analysis and comparison will also be executed using the MATLAB particle image velocimetry (PIV) analysis package.

In addition, thermal image analysis will be accomplished using Python OpenCV, entailing procedures such as image normalization and temperature difference analysis. The resulting analysis will be visualized with MeshLab to map the thermal temperature differences onto the point cloud model.

This study further employs a machine learning model for optical image change detection proposed by Chen et al. (2021), as shown in Fig. 2, built on established neural network architectures such as convolutional neural networks (CNNs), Transformers, and Siamese networks. The self-trained change detection model extracts feature maps from pairs of optical images captured at the same location at different times. The model refines these feature maps with a pre-trained neural network, and the difference between the two images' feature maps predicts the change detection map for the pair. This study pre-processes self-captured optical images to generate a dataset for improved change detection models, promoting practical monitoring applications in various locations. The models are evaluated quantitatively by treating change detection as binary classification and segmentation: common metrics such as the mean F1-score and mean Intersection over Union (IoU) are computed from the binary confusion matrix.
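As a sketch of how these metrics come out of a binary confusion matrix, the following snippet (NumPy only; the arrays are toy stand-ins, not the study's dataset) computes the F1-score and IoU for one predicted change map:

```python
import numpy as np

def f1_and_iou(pred, truth):
    """F1-score and IoU for one binary change map, from its confusion matrix."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)        # changed pixels correctly detected
    fp = np.sum(pred & ~truth)       # false alarms
    fn = np.sum(~pred & truth)       # missed changes
    f1 = 2.0 * tp / (2.0 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return f1, iou

# Toy bitemporal pair: the true change is a 4 x 4 patch; the prediction is the
# same patch shifted one pixel to the right
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 3:7] = True
f1, iou = f1_and_iou(pred, truth)   # f1 = 0.75, iou = 0.6
```

Averaging these per-image scores over a test collection yields the mean F1 and mean IoU reported later in the study.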

Fig. 2
A flowchart. Time A and time B images pass through modified Res Net 18 model and semantic tokenizer, concatenate and pass through transformer encoder, split and pass through transformer decoder, and features subtracted to give F C N for change probability map.

Process of detecting inter-optical image changes (modified after Chen et al. 2021)

Additionally, this study utilizes a mathematical approach to detect changes between pairs of thermal infrared images captured at different times. Linear interpolation adjusts the arrays of the two thermal infrared images onto a common grid, and the camera module's original grayscale values are then converted to temperature values. The temperature values from the later interval are subtracted from those of the earlier interval to create a temperature change detection image for the pair. This image is then overlaid onto the change detection image from the 2D optical imaging of the same time-interval pair, enabling visualization of change detection outcomes from both the thermal and optical camera modules in a single image.
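The arithmetic above can be sketched in NumPy. The raw-count conversion assumes a radiometric (TLinear) module such as the FLIR Lepton 2.5, whose counts encode temperature in centi-kelvin; the two frames below are synthetic constants, not field data, and the sign convention (positive = warming) is a choice:

```python
import numpy as np

def centikelvin_to_celsius(raw):
    # Assumption: radiometric (TLinear) output in centi-kelvin, as on FLIR Lepton 2.5
    return raw / 100.0 - 273.15

def resize_bilinear(img, shape):
    """Linear interpolation onto a common grid so two frames can be subtracted."""
    rows = np.linspace(0, img.shape[0] - 1, shape[0])
    cols = np.linspace(0, img.shape[1] - 1, shape[1])
    r0, c0 = np.floor(rows).astype(int), np.floor(cols).astype(int)
    r1 = np.minimum(r0 + 1, img.shape[0] - 1)
    c1 = np.minimum(c0 + 1, img.shape[1] - 1)
    fr, fc = (rows - r0)[:, None], (cols - c0)[None, :]
    top = img[r0][:, c0] * (1 - fc) + img[r0][:, c1] * fc
    bot = img[r1][:, c0] * (1 - fc) + img[r1][:, c1] * fc
    return top * (1 - fr) + bot * fr

# Two synthetic frames at different resolutions (hypothetical raw counts)
early = np.full((60, 80), 30000.0)   # ~26.85 degC everywhere
later = np.full((30, 40), 30100.0)   # ~27.85 degC everywhere
common = (60, 80)
dT = centikelvin_to_celsius(resize_bilinear(later, common)) \
     - centikelvin_to_celsius(resize_bilinear(early, common))   # +1 degC change map
```

The resulting `dT` array plays the role of the temperature change detection image that is overlaid on the optical change map.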

3 Laboratory Vibration Table Experiment

Before installing the instruments on the slope, indoor testing is necessary to validate the feasibility of the optical camera displacement analysis system. The objective is to compare displacement values analyzed from optical images with known object displacements and to evaluate the suitability of the optical image analysis process used in this research.

Vibration table experiments are typically used in geotechnical engineering to simulate and analyze slope failures; studying soil failure behavior allows a better understanding of the related mechanisms. Chen (2020) conducted experiments simulating the mechanism and deformation of seismically induced rockslides. The experiments utilized a scaled-down model of a reverse slope on a vibration table, which was considered well suited to the optical image analysis system employed in this study. This study therefore uses Chen's (2020) results to classify the slope failure type and conducts displacement analysis of slope failure via optical images.

The scaled-down model utilized in this study and its support structure were positioned on a unidirectional vibration table measuring 90 × 90 cm. The top and front views of the experimental site are shown in Fig. 3.

Fig. 3
2 diagrams of a vibration table experiment. Top and front views label single-axis servo hydraulic cylinder, model placed area, vibration table surface, overturn reserve area, damping plate, camera, test object, base of model, platform, base of shaker, inclined platform, and angle steel.

Top view and front view of experiment

The investigation began by setting the slope angle and the frequency and amplitude of vibration. Before the onset of vibration, 30 consecutive photographs were taken to record the experimental setup. After the camera returned to its initial position, the vibration started and images were captured continuously at one per second. The vibration ceased after 30 images were taken, and image analysis then commenced.

The Iterative Closest Point (ICP) method is widely used in 3D modeling to align point clouds and analyze distances. ICP repeatedly matches each point in one cloud to its closest point in the other cloud, then estimates the rigid transformation (rotation and translation) that minimizes the sum of squared distances between matched points, iterating until the clouds converge to a best-fit alignment. Another commonly used 3D model alignment method is the Matching Bounding-Box Center (MBBC) method. This technique computes a bounding box enveloping each point cloud model and then shifts and rotates the two clouds so that their bounding-box centers coincide, providing a coarse overlap of the point cloud patterns from which the alignment is subsequently refined.
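A minimal point-to-point ICP sketch illustrates the match-then-fit loop; it uses only NumPy with brute-force nearest neighbours and a Kabsch (SVD) fitting step, whereas production tools such as Open3D or CloudCompare use optimized variants. The synthetic cloud and transform below are invented for the demonstration:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD: least-squares rotation R and translation t with dst ~ src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP with brute-force nearest neighbours (fine for small clouds)."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[np.argmin(d2, axis=1)]   # closest dst point for each cur point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Synthetic check: rotate a cloud 3 degrees about its centroid, shift it slightly,
# then verify that ICP recovers the original positions
rng = np.random.default_rng(0)
cloud = rng.random((100, 3))
c = cloud.mean(axis=0)
a = np.deg2rad(3.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
moved = (cloud - c) @ Rz.T + c + np.array([0.02, -0.01, 0.03])
aligned = icp(moved, cloud)
```

Because the perturbation is small relative to the point spacing, most initial correspondences are correct and the loop converges in a few iterations.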

The image analysis for the indoor experiment employs Pix4D Mapper for image reconstruction. Pix4D Mapper, a 3D image processing software developed by Pix4D in Switzerland, uses the Structure from Motion (SfM) technique (Westoby et al. 2012) to compute the internal and external orientation parameters of the camera system and extrapolates the coordinates of the image's characteristic points to construct a 3D model. After constructing the 3D model in Pix4D Mapper, the result is exported to Python for displacement analysis using the Open3D suite. Following the Cloud-to-Cloud (C2C) distance computation method, the Euclidean distance between each point and its closest point in the other cloud is calculated; note that this nearest-neighbor distance is generally smaller than the actual point displacement distance.
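The C2C computation, and the reason it can understate true displacement, can be shown with a short NumPy sketch (the grid of points is hypothetical; CloudCompare and Open3D provide this distance natively):

```python
import numpy as np

def c2c_distances(cloud_a, cloud_b):
    """For each point of cloud_a, the Euclidean distance to its nearest point in cloud_b."""
    d2 = ((cloud_a[:, None, :] - cloud_b[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1))

# A flat 20 x 20 patch of points shifted 5 cm along x. Because C2C snaps each point
# to its nearest neighbour, interior points read ~0 and only the leading edge
# reveals the shift, so C2C understates the true displacement of a repetitive surface
x, y = np.meshgrid(np.arange(0.0, 1.0, 0.05), np.arange(0.0, 1.0, 0.05))
before = np.column_stack([x.ravel(), y.ravel(), np.zeros(x.size)])
after = before + np.array([0.05, 0.0, 0.0])
d = c2c_distances(after, before)   # median ~0, max = 0.05 at the edge
```

This behaviour motivates the feature-point comparison used later for the 15 cm rock displacement test.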

Additionally, the slope's failure pattern before and after vibration is analyzed with the PIV method and compared with the average displacements within the landslide area. The PIV analysis tool suite provided by MATLAB is used for 2D image analysis. Because the back-and-forth shaking of the platform disrupts the PIV particle image analysis, a summation method is used in this study: the per-frame displacements during shaking are superimposed to obtain the total displacement for analysis. The feasibility of the system is confirmed by combining and comparing the results of the two analyses.
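The summation method can be sketched as follows. The per-frame fields are synthetic: an oscillating component stands in for the table shaking (it cancels over full cycles), and a small steady component stands in for the actual downslope movement:

```python
import numpy as np

# Hypothetical per-frame PIV displacement fields (metres) on an 8 x 10
# interrogation grid over 30 one-second frames
frames = 30
move_per_frame = 0.0017                            # steady movement, the signal
fields = []
for k in range(frames):
    shake = 0.01 * np.sin(2.0 * np.pi * k / 6.0)   # back-and-forth table motion
    fields.append(np.full((8, 10), move_per_frame + shake))

# Summation method: superimpose all per-frame fields, then average over the grid
total = np.sum(fields, axis=0)
mean_total = total.mean()   # net displacement; the oscillation sums to ~zero
```

Summing first lets the oscillatory platform motion cancel out, leaving only the cumulative slope displacement that the study reports.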

Four experiments were conducted according to the experimental configuration. The second set of experiments (slope: 10°, amplitude: 2.48 mm, frequency: 6 Hz), which produced the smallest error, is analyzed here as an example. The specimen failed approximately 5 s after the vibration table began shaking. Comparing the states before and after failure (Fig. 4), all blocks except the block at the top of the slope, which showed no noticeable movement, demonstrated a significant inclination and displacement tendency. The average displacement distance of the blocks is 5 cm.

Fig. 4
2 photographs of a vibration table experiment with rectangular blocks of test objects with different heights arranged on an inclined platform. The blocks indicate a very minor displacement after the experiment.

Comparison of experimental results before and after failure

Figure 5 displays the results of the displacement analysis for the point cloud model. The white areas indicate unchanged points between the two point clouds, while the other colors illustrate displacements at various scales. Figure 6 presents the displacement statistics for the point cloud analysis described in Fig. 5. The black points are values that deviate beyond the standard deviation and are therefore excluded from the calculation. The green line indicates the mean value, which is 5.1 cm.
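The outlier screening behind these statistics can be sketched as below. The distances are hypothetical, and the two-standard-deviation cutoff is an assumed choice, since the exact threshold is not stated:

```python
import numpy as np

def trimmed_mean(distances, k=2.0):
    """Mean displacement after excluding values more than k standard deviations
    from the raw mean, mirroring the exclusion of the black outlier points."""
    d = np.asarray(distances, dtype=float)
    keep = np.abs(d - d.mean()) <= k * d.std()
    return d[keep].mean()

# Hypothetical C2C distances (m): a cluster near 5 cm plus two spurious outliers
d = np.array([0.048, 0.052, 0.050, 0.049, 0.053, 0.051,
              0.047, 0.054, 0.050, 0.049, 0.052, 0.051, 0.30, 0.29])
mean_disp = trimmed_mean(d)   # the two large values are screened out
```

Without the screening, a handful of spurious nearest-neighbour matches would drag the reported mean displacement well above the true value.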

Fig. 5
A point cloud displacement map of rectangular blocks of different heights arranged on an inclined table. The junction of the points around the first block at the top has a lighter shade indicating inactive points, and the rest of the boundaries are in dark shades.

Point cloud displacement map

Fig. 6
A box plot against distances in meters plot a single box with median at 0.05, minimum at 0.00, maximum at 0.16, inter-quartile range between 0.025 and 0.08, and many outliers above the maximum till 0.30. Values are estimated.

Point cloud displacement statistics chart

Figure 7 displays the results obtained from the PIV analysis. The calculation sums the total displacement over the 5 s of shaking and then averages the result, producing the average displacement magnitude in the selected images. The total displacement is determined to be 5.1 cm.

Fig. 7
A photograph of a vibration table experiment overlaid with a dotted grid and color-coded horizontally inclined arrows indicating particle movements. Area mean value = 0.0105 meters per second.

Analysis results of PIV method

Due to positioning limitations, the camera's shooting angle was not perfectly aligned with the motion direction of the target object. Although PIV was initially designed for measuring particle velocities in two-dimensional flow fields, it remains a reliable method under this constraint. However, when the motion direction of the target object is not perpendicular to the camera's optical axis, the accuracy of distance measurements may be affected by the varying depth of the target object in the images. Consequently, this research observed a minor divergence between the PIV and point cloud displacement results in the other tests.

4 NCU Field Experiment

National Central University (NCU) is located in the Dianzih Lake Formation, consisting mainly of a lower gravel layer and an upper red soil layer. The gravel is primarily composed of white quartzite, dark gray siliceous sandstone, and light gray sandstone, among others. The soil properties were determined through basic physical property tests, and according to the relevant standards the soil is classified as Clay Loam (CL). The hillside next to the dormitories, at approximate distances of 55 m, 40 m, 40 m, and 20 m on its four sides, has potential for shallow landslides, making it a suitable test site for installing the monitoring system in this research.

The experimental area, as shown in Fig. 8, primarily consists of rocky terrain. Although the upper parts of the rocky slope and the surrounding areas may have vegetation cover, the toe of the slope is often exposed to sand and gravel, making it highly suitable for thermal imaging identification.

Fig. 8
A photograph of a sloping area covered with thick vegetation and a line of cars at the base with a layer of sand and gravel in-front.

UAV Modeling near NCU Dormitories (Jian 2019)

The initial step involves experimenting with optical point cloud displacement analysis. The camera should be affixed at a fixed distance from the slope while control points are placed to monitor the absolute coordinates of image modeling. Additionally, checkpoints are established to validate the modeling quality, as shown in Fig. 9. Both control points and checkpoints contain known three-dimensional coordinates. The modeling results can be transformed into real-world coordinates by utilizing the control points, and the modeling quality is verified by employing the checkpoints.
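The transformation of model coordinates into real-world coordinates via control points, with checkpoints held out for validation, can be sketched with an Umeyama-style similarity transform. This is a stand-in for what the photogrammetry software performs internally, and all coordinates below are invented for illustration:

```python
import numpy as np

def similarity_transform(model_pts, world_pts):
    """Umeyama estimate of scale s, rotation R, translation t such that
    world ~ s * (R @ model) + t, from matched control points."""
    mm, mw = model_pts.mean(axis=0), world_pts.mean(axis=0)
    A, B = model_pts - mm, world_pts - mw
    H = A.T @ B / len(A)
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:   # keep R a proper rotation
        D[2, 2] = -1.0
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    return s, R, mw - s * R @ mm

# Five hypothetical control points with known model and surveyed world coordinates
rng = np.random.default_rng(2)
model = rng.random((5, 3))
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
s_true, t_true = 2.5, np.array([100.0, 200.0, 50.0])
world = s_true * model @ R_true.T + t_true

s, R, t = similarity_transform(model, world)
checkpoint = np.array([0.3, 0.7, 0.2])             # held-out checkpoint
predicted = s * R @ checkpoint + t
expected = s_true * R_true @ checkpoint + t_true   # its surveyed position
```

Comparing `predicted` against the checkpoint's surveyed coordinates is exactly the role the checkpoints play in verifying modeling quality.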

Fig. 9
A photograph of a rocky sloping area with sand and medium-sized smooth rocks. 3 checkpoints and 2 control points on either side are indicated by circles.

Layout diagram of control points and checkpoints (red for control points, black for checkpoints)

After an accuracy analysis, the rocks on the slope were artificially displaced to test displacement recognition: feature points were placed on the rocks, which were then moved 15 cm, as Fig. 10 shows. Figures 11 and 12 depict the results. As previously noted, the C2C method was employed; however, when the rock is displaced 15 cm, which exceeds the rock's height difference, the C2C method reflects the elevation difference rather than the actual horizontal displacement. To validate the capability of the constructed point cloud model to compute displacement distance, this study therefore identified corresponding feature points on the point cloud models from the two periods, as illustrated in Figs. 13 and 14, and used the feature points' coordinates to determine the displacement distance. The computed displacement for the 15 cm movement was 16.2 cm, within a 2.59 cm margin of error.

Fig. 10
2 photographs of a rocky sloping area with sand and medium-sized smooth rocks. A piece of rock is displaced by 15 centimeters to the right in the second photograph with a slight change in its orientation.

Illustration of artificially moved rocks

Fig. 11
A point cloud model of a sloping rocky area indicates a color-coded rock at the initial position and in the displaced position to the right by 15 centimeters. The rock has darker shade at the top and lighter shade at the bottom. The lighter shade around the displaced rock is larger in size.

Identification image of 15 cm rock displacement using C2C method

Fig. 12
A box plot against distance in meters with a single box. Median is 0.055. Inter-quartile range is from 0.035 to 0.085. Minimum is 0.02 and maximum is 0.155. Outliers above the maximum extend till 0.29. Values are estimated.

Chart of mean 15 cm rock displacement using C2C method

Fig. 13
A point cloud model of a sloping rocky area with feature points of rocks with a rock in the initial position indicated by an arrow, and a photograph of the same area. The co-ordinates of the rock is marked as (6.665, 6.637, 0.474).

Illustration of feature points in rocks at three-dimensional and two-dimensional image positions (initial position)

Fig. 14
A point cloud model of a sloping rocky area with feature points of rocks with a rock in the displaced position indicated by an arrow, and a photograph of the same area. The co-ordinates of the rock is marked as (6.75, 6.509, 0.402).

Schematic diagram of feature points in rock at three-dimensional and two-dimensional image positions (15 cm movement)

A goal of this study was to observe changes in the rock mass due to temperature variations. Therefore, temperature difference analyses were conducted, in conjunction with optical image analysis, at five selected time points: 5:00 PM, 12:00 AM, 7:00 AM the next day, 1:00 PM the next day, and 5:00 PM the next day. The temperature difference analysis results are shown in Fig. 15.

Fig. 15
4 heatmaps with temperature distribution in the tree-covered area and in the exposed area below with estimated values on a scale of 0 to 55 degrees. a. 5 p m to 12 a m. 10 to 25 and 30 to 40. b. 12 a m to 7 a m. 15 to 25. c. 7 a m to 1 p m. 20 to 30 and 35 to 40. d. 1 p m to 5 p m. 20 to 30 and 35 to 40.

Temperature Difference Analysis Chart. (a) Temperature Difference from 5 PM to 12 AM. (b) Temperature Difference from 12 AM to 7 AM. (c) Temperature Difference from 7 AM to 1 PM. (d) Temperature Difference from 1 PM to 5 PM

According to the analysis results in Fig. 15, from 7:00 AM to 1:00 PM the slope has ample time to absorb sunlight as the sun rises to its zenith, resulting in higher temperatures. From 1:00 PM to 5:00 PM, the temperature difference is more significant: the sun has passed its zenith at 11:59 AM and gradually moves westward, so the slope no longer receives direct sunlight, and the abundant tree shade above the slope contributes to cooling. Therefore, for the slope in this experimental area, measuring the temperature variations between 7:00 AM and 1:00 PM and between 1:00 PM and 5:00 PM provides the best examples for analysis.

Another preliminary investigation aimed to establish the correlation between thermal and visible images and to generate a point cloud model incorporating thermal image data. The procedure first texture-maps the optical image onto the 3D point cloud generated by Pix4D Mapper; the point cloud with the optical texture map (mesh) is then exported with MeshLab. Next, the thermal image is imported into MeshLab, its corresponding position is located manually, and its texture is mapped onto the point cloud alongside the optical texture. The result is shown in Fig. 16.
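Although the study performs this mapping manually in MeshLab, the underlying per-point temperature lookup and coloring can be sketched programmatically. The image-to-point registration is assumed to be already known, the thermal frame is synthetic, and the blue-to-red ramp is an arbitrary colormap choice:

```python
import numpy as np

def temperature_to_rgb(temp, t_min, t_max):
    """Simple blue-to-red colormap for visualizing per-point temperature."""
    x = np.clip((temp - t_min) / (t_max - t_min), 0.0, 1.0)
    return np.stack([x, np.zeros_like(x), 1.0 - x], axis=-1)

def sample_thermal(points_uv, thermal):
    """Nearest-pixel temperature lookup for points already projected into
    thermal image coordinates (registration assumed solved)."""
    u = np.clip(np.rint(points_uv[:, 0]).astype(int), 0, thermal.shape[1] - 1)
    v = np.clip(np.rint(points_uv[:, 1]).astype(int), 0, thermal.shape[0] - 1)
    return thermal[v, u]

thermal = np.linspace(15, 45, 80 * 60).reshape(60, 80)   # synthetic degC frame
uv = np.array([[0.2, 0.1], [79.0, 59.0]])                # two projected point positions
temps = sample_thermal(uv, thermal)
colors = temperature_to_rgb(temps, 15, 45)   # per-point RGB for the point cloud
```

Writing such per-point colors back into the cloud yields a temperature-textured model like the one shown in Fig. 16.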

Fig. 16
A heatmap with temperature distribution on a point cloud model of a sloping area overlaid with a 3-d textured surface of the mesh. The tree-covered area on the top is between 15 to 25 degrees and the exposed area below is between 35 and 45 degrees. Values are estimated.

3D Texture mapping of optical and thermal images

5 AI Image Analysis for Decision Support

With the development of artificial intelligence technologies such as machine learning (ML) and deep learning (DL), which draw inspiration from the structure and function of the human brain, computer vision research has evolved significantly. Artificial neural networks (ANNs), including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have become the primary focus in the field of computer vision (Bhatt et al. 2021). These AI technologies enable researchers to make predictions or decisions from data using mathematical algorithms. Deep learning in particular has profoundly impacted computer vision by allowing automatic learning of patterns and features in labeled image datasets, significantly improving the ability to handle highly complex datasets and tasks related to image classification and detection.

Chen et al. (2021) adopted a novel approach to change detection in remotely sensed imagery using machine learning techniques. Instead of relying on CNNs or complex architectures such as U-Net, they aimed to improve efficiency with a simpler network structure, incorporating Attention and Transformer mechanisms to process images in a sequence-like manner so the model can capture comprehensive information. In this model, information from image segmentation is treated as tokens, analogous to natural language processing, and encoder and decoder components inspired by Transformers process these tokens. The change detection output is represented by the differences between the two sampled images and is compared with other image change detection methods. The resulting method is termed the Bitemporal Image Transformer (BIT), as presented in Fig. 17. The results indicate that this technique detects changes with a high degree of accuracy.
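For contrast with the learned BIT approach, the simplest possible bitemporal baseline thresholds the absolute difference of the two images directly. This is not the BIT model, merely a crude stand-in for its learned feature difference, on toy arrays:

```python
import numpy as np

def change_map(img_a, img_b, threshold):
    """Binary change map from the absolute bitemporal difference: a crude
    stand-in for the learned feature difference in BIT-style models."""
    return np.abs(img_b.astype(float) - img_a.astype(float)) > threshold

# Toy pair: a 'moved stone' brightens six pixels between time A and time B
time_a = np.zeros((6, 6))
time_b = time_a.copy()
time_b[2:4, 2:5] = 0.8
cm = change_map(time_a, time_b, threshold=0.5)   # six True pixels
```

Learned models like BIT improve on this baseline chiefly by suppressing changes caused by illumination and viewpoint rather than by real surface movement.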

Fig. 17
Two image sets with 4 satellite photographs each of different areas with roads, buildings, and vegetation at different times. Image segmentation by 10 methods including S T A Net, I F Net, S N U Net, Base, and BIT for the second set with objects of buildings and cars have varying levels of shades.

Using BIT for change detection compared to other methods (modified after Chen et al. 2021)

While data of the same type allow direct comparison of changes, combining data from different sensor types can also yield excellent change detection results. These sensors may have different spatial, spectral, and temporal resolutions, but they offer additional possibilities for supplementing information and improving the accuracy of change detection. Developing techniques to compare or integrate data from different sensors and sources for change detection is therefore meaningful: integration makes it possible to gain a more comprehensive understanding of the types, extent, and scale of changes occurring in the imagery.

In summary, optical and infrared thermal imaging technologies have their roles and development potential in slope monitoring. Optical imaging technology is used for high-precision measurements of slopes, taking advantage of its unique imaging characteristics. In parallel, infrared thermal imaging technology can be employed to detect changes in the surface temperature of slopes. In this study, an integrated approach can enhance safety and stability assessment for on-site slopes through machine learning applied to optical images and computer vision techniques for change detection, coupled with infrared thermal imaging to monitor temperature changes on the slope surface.

The study conducted monitoring tasks to detect changes on the surface of a small slope near NCU (Fig. 8), using the equipment and techniques specified above. The optical change detection model exhibited excellent performance during training, attaining an average F1-score of 0.9359 and an average Intersection over Union (IoU) of 0.8850 on the bitemporal image test collection. Applying the model together with thermal imaging for change detection produced effective monitoring outcomes on the real slope, as shown in Fig. 18.

Fig. 18
3 parts. 2 photographs of a sloping surface covered with vegetation and small pieces of rocks captured at 6 a m and 2 p m overlaid with a heatmap. A bitemporal heatmap of the same indicates the highest temperature of the rocks and the lowest in the vegetation around.

NCU Parking lot slope change detection image (The first row is the image from the previous time period, taken on 2023/03/13 at 6 A.M., the second is the image from the later time period, taken on 2023/03/13 at 2 P.M., and the third is the machine learning change detection image. Black pixels represent no change; white pixels represent changes. The color bar on the lower right corresponds to temperature changes)

Since the slope adjacent to the NCU parking lot remains generally stable and experiences few rapid natural environmental changes on its surface, this study established the effectiveness of machine learning for optical imaging change detection by deliberately altering the objects on the slope's surface. This was accomplished by manually relocating stones on the slope to initiate changes. Bitemporal images were taken before and after manually moving stones (Fig. 19). This study utilized a machine learning model in combination with thermal imaging for change detection, and the results obtained are shown in Fig. 20. While optical imaging offers intuitive and visual results on the slope's surface, thermal imaging provides temperature difference information that is not available through optical imaging, making it a valuable asset for both research and user support.

Fig. 19
2 photographs of a sloping area covered with vegetation and small rocks in-between captured on 17 03 2023 at 2 p m and 6 p m. 2 pieces of rocks are circled in the 2 p m photograph and 5 in the second.

Manual stone relocation diagram (red circles indicate stone movement)

Fig. 20
3 parts. 2 photographs of a sloping surface covered with vegetation and small pieces of rocks captured at 2 p m and 6 p m overlaid with a heatmap. A bitemporal heatmap of the same indicates patches of changes with the lowest temperature surrounding it and higher temperatures in the other regions.

NCU parking lot slope change detection image (The first row is the image from the previous time period, taken on 2023/03/17 at 2 P.M., the second row is the image from the later time period, taken on 2023/03/18 at 6 P.M., and the third row is the machine learning change detection image. Black pixels represent no change; white pixels represent changes. The color bar on the lower right corresponds to temperature changes)

6 Conclusion and Suggestions

This study described a new system and methodology that combines optical and thermal images for slope monitoring. The optical images can be used for photogrammetry, producing point clouds for displacement detection, and both the laboratory and field tests provided promising accuracy. The fusion of optical and thermal imagery also revealed clear evidence indicating possible weak areas of the slope.

The optical images were further processed with a modified machine-learning method for automatic change detection. Preliminary results at the NCU site confirmed the feasibility of practical application.

However, the study highlights specific challenges that need to be resolved in the next step.

  1.

    Implementing machine learning techniques for change detection in optical images faces two main stumbling blocks. The first is the diverse nature of the data, which can impede analysis and the generation of consistent change detection results. The second is that supervised AI methods require vast amounts of training data, which is time-consuming and resource-intensive to produce.

  2.

    The durability and robustness of the instrument equipment at the field site near the slope require further research to develop reliable monitoring solutions in the face of interference from natural environmental factors.