Quantitative back analysis of in situ tests on guiding flexible barriers for rockfall protection based on 4D energy dissipation

  • Original Paper

Abstract

This paper presents a quantitative analysis of the system dynamic response, rockfall trajectory control, and kinetic energy evolution observed in in situ tests of the guiding flexible protection system (GFPS), based on 4D energy dissipation. The system provides 4D protection against rockfall hazards on high and steep slopes, controlling both the 3D spatial motion of rockfall trajectories and their energy evolution over time. After clarifying the structural principle of the system, analyzing the kinetic energy characteristics of rockfall moving down the slope, and defining the system's multistage energy-dissipation control principle, an evaluation method for rockfall energy evolution was established for quantitative back analysis of the whole course of system protection. The energy decay characteristics of the rockfall in the guiding section were then analyzed. Accordingly, a full-scale model of the system was constructed, and six in situ tests with and without the protection system were conducted. The system model was installed on a slope with a gradient of over 60°, a width of 130 m, and a height of 82 m, where crushed rock is well developed. The maximum impact energy was 2500 kJ. Quantitative back analysis of the experiments revealed that, compared with the unprotected tests, the protection system extended the motion duration of the falling block by over 80%, increased the number of impacts approximately fivefold, reduced the bounce height by roughly 80%, and reduced the transverse motion distance by around 50%; for blocks that reached the bottom, the residual kinetic energy and the rolling distance beyond the slope toe were both decreased by 70%. The combined energy-dissipation ratio reached 89%.



Acknowledgements

The work in this study was supported by the National Key R&D Program of China (Grant No. 2018YFC1505405), the Key Research Program of Transportation Industry (Grant No. 2020-MS3-101), the Department of Science and Technology of Sichuan Province (Grant No. 2020YJ0263), the Fundamental Research Funds for the Central Universities (Grant No. 2682019ZT04), the Sichuan Transportation Science and Technology Project (Grant No. 2020-B-01), the Sichuan Science and Technology Innovation Seedling Project (Grant No. 2021117), and the Geological Survey Project of CGS (Grant No. DD20190637).

Author information


Corresponding author

Correspondence to Zhixiang Yu.

Appendix A

In computer vision, trajectories are imaged through the camera aperture or reconstructed in three dimensions from images, which is essentially a process of transforming each position point between coordinates in real 3D space and coordinates in the 2D pixel space. This transformation relies on accurate spatial projection relations among the pixel coordinate system (PCS), the image coordinate system (ICS), the camera coordinate system (CCS), and the world coordinate system (WCS). As depicted in Fig. 23, the PCS takes the image corner point O0 as its origin, the two perpendicular edges of the imaging plane as the U-axis and V-axis, and the unit pixel sizes as dx and dy. The ICS takes the intersection of the camera's optical axis and the image plane as its origin O; its X-axis and Y-axis are parallel to the U-axis and V-axis of the PCS, respectively, and its unit is the meter. The coordinates of point O in the PCS are (u0, v0). The CCS takes the camera's optical center OC as its origin; its XC and YC axes are parallel to the X- and Y-axes of the ICS, and its ZC axis points along the line connecting OC and O. The projections onto the ZC axis of the distance from OC to the measured point and of the distance from OC to the ICS plane are the shooting depth zC and the camera focal length f, respectively. The definition of the WCS depends on the test environment.

During imaging, light reflected from real objects in the WCS passes through the camera's optical center and forms an image on the XY plane of the ICS; this image is geometrically similar to the real object, with dimensions in meters. The computer, however, works with the pixel coordinates of the PCS, so a translation and scale transformation relates the PCS to the ICS.

Based on the above relationships among the coordinate systems, the 3D coordinate matrix of an object in the WCS and its plane coordinate matrix in the ICS are related through a conversion matrix constituted by the camera parameters. This mathematical relationship (the camera matrix) is given in Eq. 20, where the reverse process corresponds to acquiring image data by photography and the forward process to recovering the 3D spatial trajectory:

$$z_{C}\left[\begin{array}{c}u\\ v\\ 1\end{array}\right]=\left[\begin{array}{ccc}\frac{1}{dx}& \gamma & u_{0}\\ 0& \frac{1}{dy}& v_{0}\\ 0& 0& 1\end{array}\right]\left[\begin{array}{ccc}f& 0& 0\\ 0& f& 0\\ 0& 0& 1\end{array}\right]\left[\begin{array}{cc}R& t\\ 0^{T}& 1\end{array}\right]\left[\begin{array}{c}x_{\text{W}}\\ y_{\text{W}}\\ z_{\text{W}}\\ 1\end{array}\right]=NH\left[\begin{array}{c}x_{\text{W}}\\ y_{\text{W}}\\ z_{\text{W}}\\ 1\end{array}\right]=M\left[\begin{array}{c}x_{\text{W}}\\ y_{\text{W}}\\ z_{\text{W}}\\ 1\end{array}\right]$$
(20)

where H is the extrinsic matrix, N is the intrinsic matrix, and M is the camera matrix. The extrinsic matrix describes the rigid-body transformation from the WCS to the CCS, consisting of a rotation matrix (R) and a spatial translation vector (t).

$$H=\left[\begin{array}{cc}R& t\\ {0}^{T}& 1\end{array}\right]$$
(21)

The intrinsic matrix N scales from the CCS to the PCS. F is the perspective projection matrix from the CCS to the ICS, T is the translation and scaling transformation matrix from the ICS to the PCS, and γ is the distortion (skew) factor, which takes the value of 0 if image distortion is not considered.

$$N=TF,\quad T=\left[\begin{array}{ccc}\frac{1}{dx}& \gamma & u_{0}\\ 0& \frac{1}{dy}& v_{0}\\ 0& 0& 1\end{array}\right],\quad F=\left[\begin{array}{ccc}f& 0& 0\\ 0& f& 0\\ 0& 0& 1\end{array}\right]$$
(22)
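As a concrete illustration of Eqs. 20 to 22, the following Python sketch assembles T, F, N, and H and projects a world point into pixel coordinates. All numerical values (focal length, pixel size, principal point, camera pose) are illustrative assumptions, not parameters of the tests reported here; the 3 × 4 camera matrix M anticipates the form given in Eq. 23 below.

```python
import numpy as np

# Minimal sketch of the projection chain in Eqs. 20-22.
# All parameter values below are assumed placeholders.
f = 0.035             # focal length (m)
dx = dy = 1e-5        # physical size of one pixel (m)
u0, v0 = 960, 540     # principal point in the PCS (pixels)
gamma = 0.0           # skew/distortion factor (0 = no distortion)

# T: translation and scaling from ICS to PCS (Eq. 22)
T = np.array([[1/dx, gamma, u0],
              [0.0,  1/dy,  v0],
              [0.0,  0.0,   1.0]])

# F: perspective projection from CCS to ICS (Eq. 22)
F = np.diag([f, f, 1.0])

# N: intrinsic matrix (Eq. 22)
N = T @ F

# H: extrinsic matrix from WCS to CCS (Eq. 21); here an identity
# rotation and a 10 m offset along Z_C, purely for illustration.
R = np.eye(3)
t = np.array([[0.0], [0.0], [10.0]])
H = np.vstack([np.hstack([R, t]), [0.0, 0.0, 0.0, 1.0]])

# M: 3x4 camera matrix; N is padded with a zero column so the
# homogeneous row of H is dropped.
M = N @ np.hstack([np.eye(3), np.zeros((3, 1))]) @ H

# Project a world point given in homogeneous coordinates.
p_w = np.array([1.0, 2.0, 5.0, 1.0])
zc_uv = M @ p_w                  # equals z_C * [u, v, 1]
u, v = zc_uv[:2] / zc_uv[2]      # divide out the shooting depth z_C
print(u, v)
```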

Simplifying Eq. 20 yields the camera matrix, which is a 3 × 4 matrix of the following form:

$$z_{C}\left[\begin{array}{c}u\\ v\\ 1\end{array}\right]=M\left[\begin{array}{c}x_{\text{W}}\\ y_{\text{W}}\\ z_{\text{W}}\\ 1\end{array}\right]=\left[\begin{array}{cccc}m_{11}& m_{12}& m_{13}& m_{14}\\ m_{21}& m_{22}& m_{23}& m_{24}\\ m_{31}& m_{32}& m_{33}& m_{34}\end{array}\right]\left[\begin{array}{c}x_{\text{W}}\\ y_{\text{W}}\\ z_{\text{W}}\\ 1\end{array}\right]$$
(23)

With M and zC known, the coordinate matrix of an object in the WCS and its plane coordinate matrix in the ICS can be converted into each other directly. When M is unknown, camera calibration is required to establish the camera matrix. In the two sets of videos, six feature points whose coordinates in the WCS are known, lying on different planes, are selected, and the resulting equations are solved to determine the matrix M. When more than six feature points are available, the least-squares solution can be used to reduce the impact of errors (see the sketch after Eq. 24). Since the trajectory of the rockfall is a 3D curve in space and zC is unknown at each point of the trajectory, Eq. 23 is expanded and zC is eliminated, as expressed in the following:

$$\left\{\begin{array}{c}\left(u{m}_{31}-{m}_{11}\right)x_{\text{W}}+\left(u{m}_{32}-{m}_{12}\right)y_{\text{W}}+\left(u{m}_{33}-{m}_{13}\right)z_{\text{W}}={m}_{14}-u{m}_{34}\\ \left(v{m}_{31}-{m}_{21}\right)x_{\text{W}}+\left(v{m}_{32}-{m}_{22}\right)y_{\text{W}}+\left(v{m}_{33}-{m}_{23}\right)z_{\text{W}}={m}_{24}-v{m}_{34}\end{array}\right.$$
(24)
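For the calibration step described above, each feature point with known WCS coordinates contributes the two relations of Eq. 24, now read as linear equations in the twelve entries of M. A minimal least-squares sketch follows; it assumes NumPy, and the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def calibrate_camera(world_pts, pixel_pts):
    """Solve for the 3x4 camera matrix M (DLT-style least squares).

    world_pts: (n, 3) WCS coordinates of n >= 6 feature points.
    pixel_pts: (n, 2) corresponding (u, v) pixel coordinates.
    """
    rows = []
    for (xw, yw, zw), (u, v) in zip(world_pts, pixel_pts):
        p = [xw, yw, zw, 1.0]
        # The two linear equations of Eq. 24 for this point,
        # rearranged as linear constraints on the entries of M.
        rows.append(p + [0.0] * 4 + [-u * c for c in p])
        rows.append([0.0] * 4 + p + [-v * c for c in p])
    A = np.asarray(rows)                       # shape (2n, 12)
    # M is defined only up to scale, so take the right singular
    # vector of the smallest singular value as the least-squares
    # solution of A m = 0 with ||m|| = 1.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)
```

With more than six points, the same singular-value solution automatically yields the least-squares fit mentioned above.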

Equation 24 combines the equations of two planes, that is, it describes their intersection line in space; hence, the 3D coordinates of a point cannot be determined from a single image. According to the optical principle, when recovering the 3D coordinates behind a plane pixel, the 3D information of the scene can be restored from two or more images of the same scene taken from different locations, as shown in Fig. 24.

Fig. 24 Recovering 3D information of the scene from two images

The camera matrices M1 and M2 of the two views can be determined from Eq. 23 and are expanded as

$$\left\{\begin{array}{c}\left({u}_{1}{m}_{31}^{1}-{m}_{11}^{1}\right)x_{\text{W}}+\left({u}_{1}{m}_{32}^{1}-{m}_{12}^{1}\right)y_{\text{W}}+\left({u}_{1}{m}_{33}^{1}-{m}_{13}^{1}\right)z_{\text{W}}={m}_{14}^{1}-{u}_{1}{m}_{34}^{1}\\ \left({v}_{1}{m}_{31}^{1}-{m}_{21}^{1}\right)x_{\text{W}}+\left({v}_{1}{m}_{32}^{1}-{m}_{22}^{1}\right)y_{\text{W}}+\left({v}_{1}{m}_{33}^{1}-{m}_{23}^{1}\right)z_{\text{W}}={m}_{24}^{1}-{v}_{1}{m}_{34}^{1}\end{array}\right.$$
(25)
$$\left\{\begin{array}{c}\left({u}_{2}{m}_{31}^{2}-{m}_{11}^{2}\right)x_{\text{W}}+\left({u}_{2}{m}_{32}^{2}-{m}_{12}^{2}\right)y_{\text{W}}+\left({u}_{2}{m}_{33}^{2}-{m}_{13}^{2}\right)z_{\text{W}}={m}_{14}^{2}-{u}_{2}{m}_{34}^{2}\\ \left({v}_{2}{m}_{31}^{2}-{m}_{21}^{2}\right)x_{\text{W}}+\left({v}_{2}{m}_{32}^{2}-{m}_{22}^{2}\right)y_{\text{W}}+\left({v}_{2}{m}_{33}^{2}-{m}_{23}^{2}\right)z_{\text{W}}={m}_{24}^{2}-{v}_{2}{m}_{34}^{2}\end{array}\right.$$
(26)

Solving Eqs. 25 and 26 simultaneously yields the world coordinates of point P (xW, yW, zW), that is, the intersection of two straight lines in 3D space. Because of errors in extracting the pixel coordinates, the least-squares method is often used to solve for (xW, yW, zW).
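A two-view triangulation along the lines of Eqs. 25 and 26 can be sketched as follows; `triangulate` is an assumed helper name, and M1 and M2 are the calibrated camera matrices of the two views:

```python
import numpy as np

def triangulate(M1, uv1, M2, uv2):
    """Recover (x_W, y_W, z_W) from two views by least squares.

    M1, M2: 3x4 camera matrices; uv1, uv2: (u, v) pixel coordinates
    of the same point in the two images.
    """
    A, b = [], []
    for M, (u, v) in ((M1, uv1), (M2, uv2)):
        # The two rows of Eq. 25 (or Eq. 26) for this view.
        A.append(u * M[2, :3] - M[0, :3])
        b.append(M[0, 3] - u * M[2, 3])
        A.append(v * M[2, :3] - M[1, :3])
        b.append(M[1, 3] - v * M[2, 3])
    # Least-squares intersection of the two projection rays.
    xyz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return xyz
```

The stacked system is 4 × 3, so with exact data the two rays intersect in a single point; with noisy pixel extraction, the least-squares solution gives the point closest to both rays.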

Following the method described above, the videos captured by the high-speed cameras HC1 to HC3 were analyzed to obtain the 3D trajectory shown in Fig. 15.


Cite this article

Luo, L., Yu, Z., Jin, Y. et al. Quantitative back analysis of in situ tests on guiding flexible barriers for rockfall protection based on 4D energy dissipation. Landslides 19, 1667–1688 (2022). https://doi.org/10.1007/s10346-022-01845-3
