7.1 A Concept for Real-Time Simulations During the Tunneling Process

The mechanized tunneling process is characterized by a staged procedure of soil excavation at the tunnel face and lining erection, while providing continuous support of the soil by means of supporting fluids at the tunnel face and pressurized grouting of the tail gap. The construction logistics and the interactions between the TBM, the support measures and the soil, including the groundwater, are the determining factors for the efficiency and safety of the tunnel advancement and for the risk of damage to the built environment. Currently, decisions affecting the steering of TBMs are based upon engineering expert knowledge and monitoring data. However, using monitoring data implies that information related to situations already passed is extrapolated to the future behavior of the soil-tunnel interactions.

Fig. 7.1 The concept of real-time computational steering in mechanized tunneling with continuous model update

In contrast, numerical methods have recently become an important tool to investigate and predict soil-structure interaction effects in the design phase of tunneling projects. Numerical simulations can also be employed to perform system predictions in parallel with the construction process in order to support the selection of adequate process parameters. Therefore, as a complement to conventional process monitoring, process simulations, whose results must be obtained in real time, can be used to effectively support controlling and steering of the tunneling construction process.

The concept of real-time simulations for computational steering requires setting up predictive simulation models in the design stage of a tunneling project, which are continuously updated with monitoring data during the construction process, see Fig. 7.1. This makes it possible to investigate different scenarios of the TBM operational parameters for subsequent excavation steps and can assist engineers in making informed decisions aiming to optimize the construction process.

7.2 Process Simulation Models for Logistic Processes

Mechanized tunneling systems combine manufacturing conditions that are extraordinary for the construction industry. On the one hand, they comprise the manufacturing processes of a quasi-stationary factory plant, with its performance-determining core processes of tunneling, ring building and the scheduled inspection and maintenance of system-critical components, which are an indispensable part of the production cycle. The overall system performance also depends to a large extent on internal support processes in the supply chain, such as backup logistics, logistics in the tunnel, construction site equipment and the external supply and disposal of the construction site. On the other hand, disruptive influences from the complex interactions between subsoil and tunneling technology (face stability, advance, wear, clogging, groundwater, etc.), which are common in tunneling, as well as further disruptive influences from the machine technology (grout injection, electronics, hydraulics, pipe and cable extensions) and from the supply chain (tunnel trains, mixing plant) must be managed [46]. Figure 7.2 presents the interaction of performance and supply processes in mechanized tunneling.

Fig. 7.2 Interaction of performance and supply processes in mechanized tunneling, based on [14]

Typical consequences of the interaction of the many influencing factors are, for example, excavation delays or production downtimes. Accordingly, the production output of a tunneling machine depends only to a comparatively small extent on the actual advance speed during excavation or on the duration of installation of the segment ring. In previous tunneling projects, unproductive time proportions of 40-60% of the total working time were found [25]. System decisions have so far mostly been made on the basis of a limited decision horizon; the overall context remains unconsidered due to its complexity.

Nowadays, simulation models are used to support the planning and analysis of production processes. With the help of a simulation model, complex systems can be analysed while considering uncertainties. Process-oriented simulation approaches are particularly suitable for the analysis of mechanized tunneling due to the repetitive construction sequence [39, 66]. Existing simulation approaches for mechanized tunneling, however, have so far mostly focused on an isolated consideration of the production or logistics processes of a construction site [5, 28, 29]. Detailed models for the analysis of process interactions exist so far only for individual projects [50]. However, such project-specific simulation models can only be transferred with great effort to other tunnel construction projects, which often have very different logistics systems [17].

The use of process simulation for an operational analysis and management of the advancement processes, including all support processes and the scheduling of maintenance work, is investigated, and the main results are summarized in the following sections. In Sect. 7.2.1, modular simulation models for mechanized tunneling are described, considering interdependencies of production processes, uncertain boundary conditions and disturbances. In Sect. 7.2.2, the influence of maintenance scheduling on the overall project performance is investigated, and new concepts for a robust and optimized planning of maintenance intervals considering the wear of cutting tools are presented. Simulation models support a robust design of logistics processes in the planning stage of a project, but can also support decision making during project execution. The online application of simulation models is therefore presented in Sect. 7.2.3, where concepts for a real-time assessment and the computational steering of the logistics and production processes are introduced.

7.2.1 Simulation of Logistics and Production Processes

In mechanized tunneling, there is a high proportion of unproductive time, as various processes are frequently disturbed or the main tunneling processes have to be interrupted for logistical reasons. Since relationships between the process chains of mechanized tunneling have a major influence on the overall performance and efficiency of the tunnel boring machine (TBM), it is important to identify an overall context and investigate sensitivities of individual processes as well as process interactions. For this purpose, process simulation can be used, which not only provides a static view, but also a representation of the complex dynamic relationships. In this way, stochastic processes can also be analysed and their influence on the important tunneling processes can be investigated.

When modeling the individual core and support processes, a large number of boundary conditions must be taken into account. In addition to resource and material availability, these include current soil characteristics and settlements. This information is partly incomplete and subject to uncertainties. For a realistic analysis of the production processes, these boundary conditions must be classified and modeled so that they can be consistently integrated into the simulation models. In addition to the fuzzy boundary conditions, typical production-related disturbances during tunneling must be included in the simulation. Addressing these issues, the simulation of logistics and production processes was investigated in the Collaborative Research Center 837. The main results of this research were published in [46, 64, 70] and are summarized in this section.

7.2.1.1 Classification of Process Interactions

In this research project, the graphical modeling language SysML (Systems Modeling Language) was used to represent the complex interactions of the numerous processes of a mechanized tunneling system. SysML is based on a subset of the Unified Modeling Language (UML) and is standardized by the Object Management Group [58]. With the help of diagrams, a conceptual model description can be made, with which the production processes and cause-effect relationships in mechanized tunneling using shield machines can be classified and formalised.

The hierarchical structure is described by means of block diagrams (SysML Block Definition Diagram), the intrinsic behavior of all system elements is represented with the help of state diagrams (SysML State Machine Diagram), and the interactions between the system elements are described with sequence diagrams (SysML Sequence Diagram).

With the help of the state machine diagrams (stm), requirements for the respective element and the consequences of the process execution can be illustrated. In addition, processes and states including their activation conditions can be described in a selected level of detail. As an example, Fig. 7.3 shows the intrinsic behavior of the erector in the state diagram. Processes are shown as rectangles, arrows represent conditions or consequences and the grey elements represent signals. The stm diagram shows the two main states of the erector, idle and ringbuild. The erector leaves the idle state as soon as the AdvanceFinished signal arrives. Then the ring build process starts with its sub-processes pickUp and assemble. PickUp depends in turn on the availability of segments. Thus, the process sequence of the erector including all conditions and requirements is shown within the diagram.

Fig. 7.3 State machine diagram of the erector to illustrate the modeling of processes (left); sequence diagram to illustrate the modeling of process interdependencies during ring build (right) [64]

In addition to the description of the intrinsic behavior of an element, the formal description of the interactions between the individual system elements is also very important. An interaction describes which change of state of one system element leads to which reaction, and thus change of state, of another system element. The interactions are structured hierarchically, represent the information to be exchanged and depend on the level of detail of the system elements used. As an example, Fig. 7.3 shows a sequence diagram describing the process interaction required for the ring build process described above. It can be seen that the AdvanceFinished signal is sent here by the cutting wheel. The signal whether a segment is available is in turn sent by the segment feeder.
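To illustrate how such a formalised state machine translates into executable logic, the following minimal Python sketch mimics the erector behavior of Fig. 7.3: the erector idles until the AdvanceFinished signal arrives, then runs pickUp, guarded by segment availability, followed by assemble. Only the state and signal names are taken from the diagram; everything else is an illustrative assumption (the actual models were implemented in AnyLogic, see Sect. 7.2.1.4).

```python
from enum import Enum, auto

class ErectorState(Enum):
    IDLE = auto()
    RINGBUILD = auto()

class Erector:
    """Minimal sketch of the erector state machine from Fig. 7.3."""

    def __init__(self):
        self.state = ErectorState.IDLE

    def on_signal(self, signal, segment_available=False):
        # Leave 'idle' only when the AdvanceFinished signal arrives.
        if self.state is ErectorState.IDLE and signal == "AdvanceFinished":
            self.state = ErectorState.RINGBUILD
            self.ring_build(segment_available)

    def ring_build(self, segment_available):
        # Sub-process pickUp is guarded by segment availability.
        if not segment_available:
            print("waiting for segment delivery ...")
            return
        print("pickUp -> assemble")
        # After assembly the erector returns to 'idle'; in the full model
        # the RingbuildFinished signal would now be emitted.
        self.state = ErectorState.IDLE

erector = Erector()
erector.on_signal("AdvanceFinished", segment_available=True)
```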

Through formal modeling based on SysML, all essential elements, processes and interactions in mechanized tunneling can be described and discussed with the responsible technical planners. This increases the understanding of the system and simplifies the implementation in the form of discrete-event simulation models [46]. The formal description of all defined system elements can be found in [25, 63, 64, 70].

7.2.1.2 Representation of Uncertain and Incomplete Boundary Conditions

For simulation studies, the quality of the input data is crucial for the quality of the simulation results; the processing of input data is therefore an essential step in the development of simulation models. In tunneling, many input data are subject to uncertainties (e.g. geological data) and process durations are subject to natural fluctuations. The advance speed of the TBM and the excavation performance of the cutting wheel, for example, are significantly influenced by the ground conditions encountered, which in turn have a high degree of prediction inaccuracy. The installation of the segments by the erector is controlled manually, so that scattered sub-process durations also occur here. These uncertainties and fluctuations have to be reflected in the input data of the simulation model in order to provide a realistic representation of the real tunneling system [46].

A frequently used method for representing uncertainties and varying input data, which was also selected here, is the use of probability functions [1]. However, generating reasonable probability distributions that are as close to reality as possible is not trivial. Distribution fitting methods can be used for this purpose. In this process, probability distributions are closely fitted to the histogram of a given data set by suitably varying the distribution parameters. In a second step, goodness-of-fit tests should be performed to assess the quality of the fitted distributions to represent the data set [64].

Once a suitable probability distribution is identified, it can be implemented in the simulation model to generate discrete values at random within a Monte Carlo simulation. To generate robust predictions, running a large number of randomised simulations based on well-chosen distribution functions is advisable.
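As a hedged illustration of this workflow, the sketch below fits a Johnson SB distribution (the family used for the TTR data in Fig. 7.4) to a synthetic duration sample with scipy, checks the fit with a Kolmogorov-Smirnov test, and then draws random values as each Monte Carlo run would. The data and all parameter choices are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic "observed" process durations in minutes (placeholder for
# real shift-report data; values are invented for this sketch).
observed = rng.lognormal(mean=3.0, sigma=0.3, size=500)

# Step 1: fit a Johnson SB distribution by varying its parameters.
params = stats.johnsonsb.fit(observed)

# Step 2: goodness-of-fit check (Kolmogorov-Smirnov test).
ks_stat, p_value = stats.kstest(observed, "johnsonsb", args=params)
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")

# Step 3: draw random durations from the fitted distribution,
# as done in each Monte Carlo run of the process simulation.
sampled = stats.johnsonsb.rvs(*params, size=10_000, random_state=rng)
print(f"mean sampled duration = {sampled.mean():.1f} min")
```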

7.2.1.3 Simulation of Disturbances

In order to simulate production disturbances, their causes must be known. The diverse problems and disturbances of mechanized tunneling were divided into three classes in this research project: production disturbances, logistical problems and propagation effects [46]. One speaks of a production disturbance when a disturbance occurs directly at one of the main elements, such as the cutting wheel, the erector or the slurry circuit. As a result, the production process must be interrupted immediately [46].

Logistical problems, such as disturbances in the delivery of material, lack of capacity or maintenance work, on the other hand, do not usually affect production directly, because the support processes have a certain buffer time. In the best case, troubleshooting is thus possible without interrupting production at all. For example, a malfunction in the delivery of segments has no effect on ring building if a ring has already been delivered to the TBM. If, on the other hand, the disruption continues until another full advance cycle has been completed, so that no more segments are available for further installation, the production process must be interrupted. The same applies, for example, to malfunctions in the slurry circuit, through which the excavated material is conveyed hydraulically [46].

In the class of propagation effects, processes are considered that can react sensitively to disturbances of other processes. For example, a production disturbance that reaches a certain temporal threshold can cause the freshly delivered annular gap mortar to harden. Even if the triggering production disturbance is no longer present, tunneling cannot then continue without a new supply of annular gap mortar [46].

For the simulation of disturbances, time-related assumptions must be made. On the one hand, the time interval until a failure occurs must be determined (TBF: Time Between Failures); on the other hand, the time needed to repair the failure (TTR: Time To Recover) has to be defined. These characteristic values should be specified for the process simulation as probability distributions. Shift reports, for example, can serve as a data basis. Especially for technical components, these parameters are also supplied by the system manufacturers [46].

In this research project, various tunnel drives were analysed with regard to their disturbances. Typical disturbances were classified and processed for reuse. The classified disturbances were analysed with respect to both frequency and intensity. Fig. 7.4 shows an example of a fitted distribution function for the TTR of the cutting wheel, evaluated on the basis of a reference project. Further evaluations of TBF and TTR can be found in [63, 64]. In addition to time-dependent disturbances, wear-related disturbances were also identified, which are triggered by a certain number of process repetitions or a distance driven. Significant correlations were identified between the frequency and intensity of the disturbances and factors such as geology, maintenance strategy and the quality of the materials or lubricants used [46].

Fig. 7.4 Density-histogram plot for TTR of the cutting wheel with real project data (histogram) and fitted distribution (Johnson SB) [63]

Individual disturbances can be combined into disturbance scenarios and scaled if necessary. This allows a set-up configuration to be analysed quickly under different boundary conditions. The analysis of disturbances and other varying variables requires stochastic simulation experiments (Monte Carlo simulation). This means that discrete values are generated for each simulation run based on the distribution functions. Thus, a large number of simulation runs must be carried out to ensure a meaningful and significant estimation of the distribution and statistical measures (e.g. mean values, variances or quantile values) of the resulting variables, such as utilisation, project duration and possible costs [46].
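To make the idea of such stochastic experiments concrete, the sketch below runs a strongly simplified Monte Carlo study: a drive of hypothetical rings with sampled advance durations, exponentially distributed times between failures (TBF) and lognormal repair times (TTR). All distribution types and parameters are invented placeholders, not values from the reference projects.

```python
import numpy as np

rng = np.random.default_rng(0)

N_RINGS = 1000                   # rings to build (assumed project length)
ADVANCE = (30.0, 5.0)            # mean/std advance duration per ring [min] (assumed)
RING_BUILD = 40.0                # ring-build duration [min] (assumed constant)
MEAN_TBF = 120.0 * 60            # mean time between failures [min] (assumed)
TTR_LOG = (np.log(240.0), 0.5)   # lognormal TTR parameters [min] (assumed)

def simulate_project():
    """One Monte Carlo run: total project duration in days."""
    t = 0.0
    next_failure = rng.exponential(MEAN_TBF)
    for _ in range(N_RINGS):
        t += max(rng.normal(*ADVANCE), 1.0) + RING_BUILD
        # A disturbance occurred during this cycle: add the repair
        # time (TTR) and draw the time to the next failure (TBF).
        while t >= next_failure:
            t += rng.lognormal(*TTR_LOG)
            next_failure += rng.exponential(MEAN_TBF)
    return t / (60 * 24)

runs = np.array([simulate_project() for _ in range(2000)])
print(f"mean duration = {runs.mean():.1f} d, "
      f"95%-quantile = {np.quantile(runs, 0.95):.1f} d")
```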

7.2.1.4 Simulation Models for Mechanized Tunneling

For the analysis of the logistics and production processes, process simulation models were developed, taking into account the previously described tunnel construction-specific boundary conditions. Simulations are used when investigations on the real system are not possible or are associated with too much effort and there are no analytical procedures that adequately represent the problem. In particular, the dynamic behavior and uncertain, statistical influences that make a classical investigation difficult can be included in the investigation of logistics and production processes via process simulation.

Implementation

The presented simulation approach is implemented in the simulation framework AnyLogic [78]. This software allows a multi-method simulation (agent-based modeling, system dynamic modeling, discrete-event modeling). All simulation models created are hierarchically structured and configured according to a modular concept.

AnyLogic includes a native Java programming environment with which the complex interactions of the tunnel construction processes can be implemented in detail. The exchange of signals was carried out via an Observer-Observable design pattern (see [36]). Thus, signals are distributed to the corresponding elements via a central event manager instead of a direct exchange between the single elements.

Following the example in Sect. 7.2.1.1, the signal AdvanceFinished is sent by the cutting wheel after completion of the advance and received by the event manager (see Fig. 7.5). In parallel, the SegmentAvailable signal is sent by the segment feeder to indicate that a segment is ready for installation. The event manager forwards these signals to the erector, which starts ring building. The end of ring building is in turn transmitted to the cutting wheel via the event manager with the signal RingbuildFinished. The use of the event manager thus enables a flexible communication structure that allows the exchange of modular model elements without re-implementing the entire signal exchange [64].
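A minimal Python analogue of this Observer-Observable pattern is sketched below (the actual implementation lives in AnyLogic's Java environment): components register with a central event manager for named signals, and the manager forwards each emitted signal to the subscribed components. The signal names follow the text; the handlers are illustrative assumptions.

```python
from collections import defaultdict

class EventManager:
    """Central hub that decouples signal senders from receivers."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, signal, handler):
        self._subscribers[signal].append(handler)

    def emit(self, signal):
        # Forward the signal to every subscribed component.
        for handler in self._subscribers[signal]:
            handler(signal)

manager = EventManager()

# The erector reacts to AdvanceFinished and SegmentAvailable ...
manager.subscribe("AdvanceFinished", lambda s: print(f"Erector received {s}"))
manager.subscribe("SegmentAvailable", lambda s: print(f"Erector received {s}"))
# ... and the cutting wheel waits for the end of the ring build.
manager.subscribe("RingbuildFinished", lambda s: print(f"Cutting wheel received {s}"))

# Signals as in Fig. 7.5, sent by cutting wheel and segment feeder.
manager.emit("AdvanceFinished")
manager.emit("SegmentAvailable")
manager.emit("RingbuildFinished")
```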

Fig. 7.5 Conceptual drawing of the event manager to realize a flexible process interaction [64]

The individual state diagrams of the model components are visualized directly in the simulation model. This allows the behavior of the overall system to be viewed and checked at any time during the simulation [46].

The disturbance of an individual element can be passed on to the affected elements via the implemented process dependencies [64]. Analogous to the durations of the production processes for tunneling and ring building, the duration until the next occurrence of a disturbance, as well as the duration of the repair (cf. Sect. 7.2.1.3), are mapped via distribution functions. In addition, the well-known package Stochastic Simulation in Java (SSJ), developed at the University of Montreal, can be integrated via the Java environment. This allows random values to be generated from more specialised probability distributions in addition to the basic distributions available in AnyLogic [64].

Model design

The developed model represents the main elements of a mechanized tunneling project. The structure can vary depending on the main objective of the simulation study, the selected level of detail and the project-specific characteristics. The model consists, for example, of the main project elements TBM, backup system, underground logistics system and surface facilities. Each main element can consist of further sub-elements; for example, a TBM can be composed of individual elements such as a cutting wheel, erector, thrust cylinders and grout injection pump. As an example, the model structure of a project with a slurry shield machine is shown in Fig. 7.6.

Fig. 7.6 Block definition diagram of a mechanized tunneling project [64]

In addition to these elements, components for defining geotechnical aspects and for visualizing the results were also realised. The material flow between the elements can be defined interactively through special connections. The different components are implemented very flexibly and can be used for different machine configurations and supply chains. Similar to the SysML formalisation in Fig. 7.6, a second level of hierarchy can be modeled to allow alternative configurations by swapping elements. The behavior of the mechanized tunneling components is implemented directly in AnyLogic through state diagrams, following the formal description of the interactions in Sect. 7.2.1.1.

Based on the results developed, a brief summary of possible investigations is given in Table 7.1. For each investigation, the objectives are briefly summarised and a corresponding paper is referenced in case of further interest.

Tab. 7.1 Possible simulation investigations

7.2.2 Optimization of Maintenance Strategies

The overall efficiency of a project is highly influenced not only by the performance of the support processes, but also by the maintenance of cutting tools. Cutting tools, which are in direct contact with the ground during tunneling, are subject to a constant wear process. When tunneling in unstable ground conditions, however, the maintenance of the cutting tools is very costly and can cause long downtimes because of the required tunnel face support. In order to reach the tools, the support medium in the excavation chamber must be replaced by compressed air. The modification of the face support carries the risk of settlements at the surface as well as of blow-outs, which can lead to fractures and thus endanger the surface structures as well as the workers in the excavation chamber. The number of entries into the excavation chamber should therefore be kept as low as possible. However, condition-based maintenance of the cutting tools is not possible due to the inaccessibility of the tools during excavation. Also, there are only a few wear prediction models to determine the wear of the tools on the basis of soil and driving parameters. In addition, uncertainties and fuzziness, especially in the soil parameters, make it difficult to plan maintenance stops accurately. Furthermore, it is not possible to enter the excavation chamber at every point of the tunnel alignment, e.g. due to existing buildings above [19].

7.2.2.1 Formalisation of Tool Wear and Maintenance Strategies

Tool wear

In mechanized tunneling, the ground is excavated with a tool-equipped cutting wheel. The cutting tools, whose type varies depending on the ground conditions (e.g. discs, scrapers and buckets), are hence exposed to a constant wear process. The wear pattern depends mainly on soil properties, such as excavatability, consistency, transport behavior and ambient pressure [45], but also on the design of the cutting wheel, e.g. the type of cutting tools and their arrangement on the cutting wheel [47]. A more detailed description and investigation of tool wear can be found in Sect. 3.3.

In materials engineering, several types of material wear are defined. In mechanized tunneling in soft ground, mainly abrasion, adhesion, tribochemical reactions and surface disruption occur. The decisive factor in soft ground is abrasive wear and thus the abrasiveness of the soil [26].

As there is no sufficient possibility to monitor the tool condition during the tunneling process, wear prediction models have to be used to determine necessary inspection intervals and maintenance processes. Currently, prediction models of wear behavior are mostly based on input data obtained in laboratory index tests under idealised boundary conditions [45]. Plinninger and Restner [61] give an overview of the index tests developed over the years to investigate the abrasiveness of soils (e.g. LCPC test). However, these index tests do not consider TBM steering parameters (e.g. penetration or rotational speed of the cutting wheel) and their results can therefore only be transferred to the real system to a limited extent [26, 49].

In order to consider these boundary conditions, Köppl [47] developed an empirical model based on 18 evaluated hydroshield projects. His model takes into account the abrasiveness of the soil using a Soil Abrasivity Index (SAI) and additionally considers the type and arrangement of the tools on the cutting wheel as well as the penetration. The SAI takes into account the equivalent quartz content eQu, the stresses using the shear strength of the soil \(\tau_{\text{c}}\) and the grain size distribution of the ground using the grain size \(D_{60}\) [47],

$$\displaystyle\text{SAI}=\left(\frac{\text{eQu}}{100}\right)^{2}D_{60}\,\tau_{\text{c}}\;.$$
(7.1)

With the help of the SAI a maximum cutting path for each cutting tool can be calculated depending on the different tool types [47],

$$\displaystyle s_{\text{c,e(z)}}=\begin{cases}312.0+\exp(-0.0048(\text{SAI}_{\text{z}}-1398.2)),&\text{discs}\\ 280.9+\exp(-0.0050(\text{SAI}_{\text{z}}-1300.7)),&\text{scrapers/buckets.}\end{cases}$$
(7.2)

This enables the estimation of the lifespan for each cutting tool and thus also the maximum advance distance until the next maintenance stop.

During the advance process, each cutting tool follows a helix-shaped path, representing the maximum cutting path \(s_{\text{c,e(z),i}}\). When considering the cutting wheel geometry and tool position, the maximum cutting path can be translated into the maximum longitudinal length of an excavated tunnel section \(L_{\text{c(m)z,i}}\) in order to determine the next maintenance position. The wear level of a tool \(e_{\text{cd,e(m)z}}\),

$$\displaystyle e_{\text{cd,e(m)z}}=\frac{s_{\text{d,e(z)}}}{s_{\text{c,e(z)}}\gamma_{\text{cl}}}=\frac{e_{\text{c,e(z)}}}{\gamma_{\text{cl}}}\;,$$
(7.3)

is estimated by comparing the currently driven cutting path of a tool \(s_{\text{d,e(z)}}\) to the maximum cutting path \(s_{\text{c,e(z)}}\). The safety factor \(\gamma_{\text{cl}}\) reduces the maximum cutting path to ensure the workability of the cutting tools under worse boundary conditions (\(0<\gamma_{\text{cl}}\leq 1.0\)). This wear level, calculated in Eq. 7.3, is used to determine the tools that have to be replaced during a maintenance stop [19].
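The wear model of Eqs. 7.1-7.3 is compact enough to sketch directly. The Python fragment below computes the SAI, the maximum cutting path for a disc, and the resulting wear level; the soil parameters, the driven cutting path and the safety factor are illustrative assumptions, and units follow [47].

```python
import math

def sai(equ_percent, d60, tau_c):
    """Soil Abrasivity Index, Eq. 7.1."""
    return (equ_percent / 100.0) ** 2 * d60 * tau_c

def max_cutting_path(sai_z, tool="disc"):
    """Maximum cutting path s_c,e(z), Eq. 7.2."""
    if tool == "disc":
        return 312.0 + math.exp(-0.0048 * (sai_z - 1398.2))
    return 280.9 + math.exp(-0.0050 * (sai_z - 1300.7))  # scrapers/buckets

def wear_level(s_driven, s_max, gamma_cl=0.8):
    """Wear level e_cd,e(m)z, Eq. 7.3, with safety factor 0 < gamma_cl <= 1."""
    return s_driven / (s_max * gamma_cl)

# Illustrative values: eQu = 60 %, D60 = 0.8, tau_c = 150 (assumed).
sai_z = sai(60.0, 0.8, 150.0)
s_max = max_cutting_path(sai_z, tool="disc")
print(f"SAI = {sai_z:.1f}, max cutting path = {s_max:.1f}")
print(f"wear level after a driven path of 250: {wear_level(250.0, s_max):.2f}")
```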

If the wear limit of a tool is exceeded and the hard-metal parts at the top of the tool are worn out, the wear resistance of the tool decreases significantly. If the tool wears out to a certain extent, even the tool holder can be damaged, which can in turn lead to a significant damage of the cutting wheel and a long standstill (see Fig. 7.7).

Fig. 7.7 Schematic wear behavior of a cutting tool (left); wear limit of a scraper defined by the depth of the hard metal parts, and wear limit for damage to the tool holder given by the geometrical boundaries (here: \(2.0h_{\text{d;max;SM}}\)) (right), based on [19]

Maintenance strategies

The maintenance of cutting tools in soft-ground tunneling is very complex and subject to many uncertainties. Due to the face support, the tools are not freely accessible, which prevents condition-based maintenance. Therefore, maintenance is usually carried out periodically, with additional preventive stops before passing a critical tunnel section (e.g. high water pressure or sensitive surface structures). The maintenance interval is mainly determined by the predicted wear limit of the tools and the expected geotechnical conditions. Depending on the maximum travel distance (\(L_{\text{c(m)z,i}}\)) of the most quickly worn-out tool, the maintenance stops are planned, and the tools that have reached their wear limit (see Fig. 7.8, right, \(i=1,18\)) or will reach this limit in the near future are replaced [19].

Fig. 7.8 Schematic of the dependencies of the maintenance interval \(L_{\text{c(m)z,i}}\) and the lifespan of cutting tools \(s_{\text{c,e(z)}}\), based on [14, 19]

The maintenance interval must neither be too long, as otherwise there is a risk that tools will exceed their wear limit and thus increase the risk of massive damage to the cutting wheel, nor must the interval be too short, as maintenance stops are very time-consuming and cost-intensive. Therefore, their number must be limited to a reasonable minimum.

Before maintenance of the cutting tools, the required tools as well as the necessary materials must be transported to the TBM. In the case of planned maintenance, the materials can be transported into the tunnel without additional effort during a regular trip of the tunnel vehicle. In the case of unplanned, corrective maintenance, additional waiting time for material has to be added to the maintenance duration, as materials and tools are only transported to the machine after the shutdown has occurred.

The maintenance process itself is divided into three main processes: mobilisation, inspection and replacement of tools, and demobilisation.

The mobilisation process (\(t_{\text{mob}}\)) describes all preparatory work up to assessing the condition of the tools and replacing them if necessary. To enter the excavation chamber, the support medium in the excavation chamber must first be lowered and replaced by compressed air (\(t_{\text{low}}\)). However, this process is not advisable at all points along the alignment and carries the risk of an unstable face due to insufficient support pressure, or even of blow-outs [TTB]. Furthermore, the limited working time under pressure as well as the compression and decompression times of the workers (\(t_{\text{compress}}\)) have to be considered. The compression durations depend on the level of the prevailing pressure and the duration of the stay [41]. After the workers have been successfully compressed, the working platforms are installed (\(t_{\text{installation}}\)). Subsequently, the cutting wheel and tools can be accessed and cleaned (\(t_{\text{cleaning}}\)). Accordingly, the duration of the mobilisation process (\(t_{\text{mob}}\)) can be calculated as [19]

$$\displaystyle t_{\text{mob}}=t_{\text{low}}+t_{\text{compress}}+t_{\text{installation}}+t_{\text{cleaning}}\;.$$
(7.4)

The replacement work (\(t_{\text{replace}}\)) consists of three processes: a visual inspection (\(t_{\text{inspect}}\)) of all tools, the retightening of loose bolts (\(t_{\text{bolt}}\)) and the replacement of all worn tools (\(t_{\text{e}}\)) (Eq. 7.5). The duration of a tool change depends on the tool type and the current condition of the tools. In the case of severe damage, e.g. worn tool holders or a damaged cutting wheel structure, the maintenance process becomes more time-consuming as it requires welding. We have

$$\displaystyle t_{\text{replace}}=\sum_{i=1}^{n}t_{\text{inspect,i}}+\sum_{i=1}^{n_{\text{bolt}}}t_{\text{bolt,i}}+\sum_{i=1}^{n_{\text{e,d}}}t_{\text{e,d,i}}+\sum_{i=1}^{n_{\text{e,s}}}t_{\text{e,s,i}}+\sum_{i=1}^{n_{\text{e,b}}}t_{\text{e,b,i}}\;,$$
(7.5)

where \(t_{\text{replace}}\) is the duration of the replacement work (min), \(t_{\text{inspect}}\) is the duration of the tool inspection (min/tool), \(n\) is the number of cutting tools (pcs), \(t_{\text{bolt}}\) is the duration for re-tightening one bolt (min/bolt), \(n_{\text{bolt}}\) is the number of re-tightened bolts (pcs), \(t_{\mathrm{e,d,i}}\) is the duration for exchanging one tool (min/tool) and \(n_{\mathrm{e,d/s/b}}\) is the number of exchanged tools (pcs). Here, d, s and b represent discs, scrapers and buckets, respectively.

After the inspection is completed, the demobilisation process (\(t_{\text{demob}}\)) starts. This comprises the processes of demounting the working platforms (\(t_{\text{unmount}}\)), decompressing the workers (\(t_{\text{decompress}}\)) and refilling the excavation chamber with the support medium (\(t_{\text{refill}}\)). The total duration for the demobilisation process sums up according to [19]

$$\displaystyle t_{\text{demob}}=t_{\text{unmount}}+t_{\text{decompress}}+t_{\text{refill}}.$$
(7.6)

These processes are carried out separately one after the other to ensure the safety of the workers. The total duration of a maintenance stop can then be estimated according to [19] as

$$\displaystyle t_{\text{maint}}=t_{\text{mob}}+t_{\text{replace}}+t_{\text{demob}}.$$
(7.7)
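Since Eqs. 7.4-7.7 are plain sums, the total duration of a maintenance stop can be sketched in a few lines. The durations below are invented placeholders for values that would, in the simulation, be sampled from distribution functions.

```python
def maintenance_duration(t_mob_parts, t_inspect, n_tools,
                         t_bolt, n_bolt, t_exchange, t_demob_parts):
    """Total duration of a maintenance stop t_maint in minutes, Eqs. 7.4-7.7."""
    t_mob = sum(t_mob_parts)                   # Eq. 7.4: lowering, compression, ...
    t_replace = (n_tools * t_inspect           # Eq. 7.5: inspection of all tools,
                 + n_bolt * t_bolt             #   re-tightening of loose bolts,
                 + sum(t_exchange))            #   exchange of worn tools
    t_demob = sum(t_demob_parts)               # Eq. 7.6: unmounting, decompression, ...
    return t_mob + t_replace + t_demob         # Eq. 7.7

t_maint = maintenance_duration(
    t_mob_parts=(120, 60, 45, 90),       # t_low, t_compress, t_installation, t_cleaning
    t_inspect=2.0, n_tools=60,           # visual inspection of all tools
    t_bolt=10.0, n_bolt=8,               # re-tightening of loose bolts
    t_exchange=[45.0] * 6 + [30.0] * 4,  # e.g. 6 discs and 4 scrapers exchanged
    t_demob_parts=(45, 60, 120),         # t_unmount, t_decompress, t_refill
)
print(f"t_maint = {t_maint:.0f} min = {t_maint / 60:.1f} h")
```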

In addition to the replacement of worn tools, tools that are likely to exceed the wear limit before the next maintenance stop can also be replaced preventively during maintenance, to reduce the risk of severe damage to the cutting wheel. The limit chosen for this preventive replacement of tools is a key factor for the performance and efficiency of a project [19].

7.2.2.2 Modeling of Wear and Maintenance

For the modeling of wear and maintenance processes, a new agent is introduced into the simulation approach described in Sect. 7.2.1. The agent CuttingTool represents each cutting tool individually, including its condition and remaining lifespan. Since the rotational speed of the cutting wheel is set by the shield driver and can thus be simplistically assumed to be constant, the penetration of a tool depends mainly on the fluctuating advance speed. As a result, the penetration, and thus the wear, also fluctuates within a homogeneous soil section. For the soil, a new agent is introduced which contains all important parameters describing the abrasiveness of the soil (see Fig. 7.9, left).

Fig. 7.9 Formal model description (left), based on [19]; state chart of the TBM including the states maintenance and repair (right) [14]

In addition, the agents CuttingWheel and Erector were simplified and combined in the agent TBM to focus the analysis on wear and maintenance. The states maintenance and repair were added, as well as a list of all cutting tools. If the TBM reaches the planned maintenance position, the agent TBM changes its state to maintenance. After each advance, the wear status of each individual tool is determined using the wear model according to Köppl [47], as described above. If a tool reaches its wear limit before a scheduled maintenance takes place, the cutting wheel is stopped and changes to the technicalFailure state; in this case, a technical failure interrupts the current process. In both cases, the duration of the maintenance of the cutting tools is determined on the basis of the prevailing boundary conditions. After the determined duration has elapsed, the cutting wheel changes back to the operable state.

7.2.2.3 Analysis of Robustness Measures

To evaluate the influence of the uncertainties affecting the wear behavior of cutting tools on the maintenance scheduling and to find a robust and efficient maintenance strategy, a robustness analysis has to be performed.

Robustness is defined as the sensitivity of results to influencing parameters. In mechanized tunneling, influencing parameters are divided into controllable parameters, noisy parameters and uncertain parameters. With regard to maintenance and wear, controllable parameters are the maintenance interval and the safety factor for the wear limit. Uncertain, noisy parameters are soil properties and the penetration of the TBM [19].

The robustness of maintenance strategies can be characterised by three specific values. These are the total duration of maintenance, the number of replaced tools and worn tool holders, and the location of maintenance stops along the tunnel route in relation to the given boundary conditions. A robust maintenance strategy aims to optimize these three values and reduce the risk of unplanned maintenance stops. This is because unplanned maintenance stops not only cause higher costs, but also increase the risk of work accidents and damage to sensitive surface structures [19].

For the assessment of robustness, a comparative value must be used. In the presented research, cost functions for maintenance work are used as a comparable measure describing the robustness and efficiency of a maintenance strategy while considering uncertainties. For this, cost factors are needed, which are divided into the following time-dependent, material-dependent and fixed costs [19]:

  • Time-dependent costs:

    • general expenses of the jobsite [Euro/h],

    • planned/unplanned maintenance stops [Euro/h].

  • Material costs:

    • cutting discs [Euro/disc],

    • scrapers [Euro/scraper],

    • buckets [Euro/bucket].

  • Fixed costs:

    • compression/decompression operations [Euro/operation],

    • planned/unplanned maintenance stops [Euro/stop].

In order to find an optimal maintenance strategy, the characterising parameters maintenance interval \(L_{i}\) and safety factor \(\gamma_{\text{cl}}\) are varied using the crossed-array method of Taguchi [77]. Thus, all possible parameter combinations are considered and evaluated using the cost objective function. As mentioned before, unplanned downtimes should always be avoided; therefore, strategies that lead to unplanned maintenance stops are considered infeasible.
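A hedged sketch of this parameter study is given below: a crossed array of maintenance intervals and safety factors, with the Monte Carlo repetitions playing the role of the noise array, and combinations producing unplanned stops discarded as infeasible. The cost function, the risk model and all numbers are invented for illustration; they are not the cost factors of [19].

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

intervals = [60, 80, 100, 120]   # maintenance interval L_i [rings] (assumed)
gammas = [0.6, 0.7, 0.8, 0.9]    # safety factor gamma_cl (assumed)

def evaluate(interval, gamma, n_runs=500):
    """Placeholder Monte Carlo evaluation of one strategy.
    Returns (mean cost, share of runs with unplanned stops)."""
    costs, unplanned = [], 0
    for _ in range(n_runs):
        stops = 1800 // interval                 # planned stops over the drive
        # Larger gamma -> tools run closer to their limit -> higher risk
        # of an unplanned, corrective stop (toy risk model).
        if rng.random() < max(0.0, gamma - 0.75) * rng.random():
            unplanned += 1
            continue
        costs.append(stops * 50_000 + rng.normal(stops / gamma * 8_000, 5_000))
    return (np.mean(costs) if costs else np.inf), unplanned / n_runs

# Crossed array: every interval combined with every safety factor.
results = {(L, g): evaluate(L, g) for L, g in itertools.product(intervals, gammas)}
feasible = {k: v[0] for k, v in results.items() if v[1] == 0.0}
best = min(feasible, key=feasible.get)
print(f"best feasible strategy: L = {best[0]} rings, gamma_cl = {best[1]}")
```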

7.2.2.4 Optimization of Logistics and Maintenance Processes

The optimization of maintenance intervals and maintenance strategies always aims at maximising the availability of the system. However, contradictory effects play a decisive role here. For example, regular entries into the excavation chamber allow a better estimation of the wear condition and thus prevent unscheduled stops. Too short inspection intervals, in turn, lead to a reduction in availability. For mechanized tunneling, minimising the project duration by maximising availability and avoiding unscheduled stops, while reducing the number of cutting tools to be changed, is of particular importance.

Within the optimization framework, the developed process model was extended so that the influence of different maintenance strategies as well as different wear conditions of cutting tools can be determined (see Fig. 7.10). The variation of the wear condition of a cutting tool that is still classified as acceptable during an inspection process describes the use of a proactive maintenance strategy. The coupling of Monte Carlo simulations and parameter variations thus results in a multi-criteria optimization, which determines a Pareto optimum as a result. The simulation parameters length of the maintenance interval, averaged condition of all cutting tools as well as the limit state of serviceability of a single cutting tool can be used to compare the effects of different maintenance strategies on the project time and the number of cutting tools to be changed. A more detailed description of the optimization framework can be found in [15, 19].

Fig. 7.10 Procedure model for the evaluation of maintenance strategies with regard to their robustness by using process simulation [19]

Case study

To illustrate the developed simulation approach, a short case study is presented, which compares a corrective with a preventive maintenance strategy. The case study is based on a metro line project constructed in sand and clay. The tunnel alignment is 1800 m long in total and consists of four distinct geological sections of either sand or clay with varying support pressure. The TBM is equipped with compressed-air work facilities; a man lock and a material lock are used to transfer workers and tools into the excavation chamber. The cutterhead is equipped with disc cutters, scrapers and buckets to excavate the soil. The rotational speed of the cutting wheel is set to 2 rotations per minute, while the advance speed varies according to Weibull distributions obtained from processing data of finished tunneling projects, with an average advance speed of 15.5 mm/min in one geological section and 25.5 mm/min in the other. The time for the ring build is set to 40 minutes; since the ring build process has no influence on the wear or maintenance processes, it does not influence the results of the model [18].

Different maintenance strategies are analyzed by varying the maintenance interval. The first strategy uses only corrective maintenance. The second strategy considers a maintenance interval of 86 rings, based on a first rough estimation. In both cases, the wear limit for the tool exchange is set to 10%, so that tools which are nearly completely worn are exchanged preventively. For each strategy, a Monte Carlo analysis with 10,000 simulation runs has been performed. In this way, the uncertainty of the input parameters is taken into account. Consequently, the simulation result is not a single value but can be evaluated using a histogram [18].

First, the total downtime caused by the maintenance processes has been evaluated. The graphical comparison of the results for each strategy (see Fig. 7.11) shows that the total downtime of the corrective maintenance is much higher than for the periodic maintenance, even though the total number of maintenance stops is higher for periodic maintenance. In both cases, there is a small probability of significantly higher downtimes, up to approx. 148 days for corrective maintenance [18].

Fig. 7.11 Total downtime in days for 10,000 simulation runs

The number of replaced tools increases with the rising number of maintenance stops. While 406 tools are replaced on average with corrective maintenance, 551 tools are replaced on average with periodic maintenance. The utilization of the tools is higher with corrective maintenance, but severe damage to the cutterhead and tool holders occurs, which increases the required repair effort [18].

Regarding the position at which the excavation chamber is entered, periodic maintenance has the advantage that the positions are predefined and known beforehand, while in corrective maintenance they vary according to the position at which the damage is detected. By varying further parameters, such as the wear limit for the tool exchange or the amount of wear at which damage is detected, it became apparent that these parameters also influence the optimal maintenance interval at which the total downtime is at its minimum [18].

7.2.3 Real-Time Use of Process Simulation Models

The use of real-time data for online simulation is becoming increasingly important for an improved prediction of a project's outcome and for supporting the steering of processes.

In tunneling, many parameters deviate from their predicted values. The main reasons are that the assumptions made in the planning phase are subject to uncertainties and fuzziness. Furthermore, unforeseen events can occur during execution that were not or could not be considered during the planning phase. Therefore, logistics and production processes must be adapted and controlled continuously. The developed simulation models for the analysis and optimization of production and logistics processes can also be used for a real-time analysis and a simulation-based steering of these processes. For this purpose, a control concept in the form of a control loop has been specified, which includes the recording of the current status of the production and logistics processes, the evaluation of the current data and the planning of suitable countermeasures (see Fig. 7.12).

Fig. 7.12 Control loop for the holistic process simulation approach for tunneling projects

First, an adequate recording of real-time data reflecting the status of the construction site is important. In mechanized tunneling, a lot of data is already recorded during excavation. In order to evaluate the current construction progress and to enable an adjustment of the production processes, deviations between the planned and the recorded data are determined. This enables an adaptation of the logistics processes and an adjustment of the prognosis. If significant deviations are detected, suitable measures can be taken to accelerate the production progress or prevent possible incidents. The effect of deviations on the project flow can be evaluated, and countermeasures can be tested, with the help of simulation models.

Further, by identifying threshold values, an early detection of disturbances is possible. Simulation models can help to provide a real-time prediction of disturbance effects and an evaluation of adapted control variables.

This section summarizes the main results of the investigated real-time use of simulation models, which were also published in [43, 67].

7.2.3.1 Concept for a Real-Time Use of Simulation Models

Compared with the traditional offline simulation that uses stationary input parameters, the capability of real-time simulation to dynamically incorporate new project data and adapt to changes in the operating environment offers the promise of improving the accuracy of project forecasting [75].

To employ the simulation model for system predictions in parallel with the construction process, it is essential to enable a continuous comparison between the initial and the current values of the input data. If the deviation is significant, an update of the input data can improve the prognosis for the next phase of the project.

During the online update, real data and model-generated data are compared as construction progresses. Based on this comparison, an update of the simulation inputs is suggested to improve the model and its predictions and, where possible, to improve the system accordingly.

7.2.3.2 Integration of Real-Time Data

The most essential step in managing the online update of the simulation system is the effective management of the simulation inputs, i.e. being able to update the inputs of the model at any time and to set up a suitable data-gathering method for collecting real-time inputs. In agent-based models, each agent operates in a discrete manner, using the parameters and variables implemented in the agent. The initial values of these variables are set at the beginning of the simulation, and the simulation starts to operate from this state. It is important to note that the interaction between the agents is not disturbed by a change of the initial state; rather, it is driven by the changes of state of each variable during the execution of the simulation model.

The main challenge in the update process is to maintain the validity of the model after the update. Although the agents are isolated during execution, extra variables or time-related functions need to be added to implement the update, which may require a subsequent validation step. Additionally, updating the non-deterministic parameters, which are generally represented using probability distribution functions, may affect the model. Updating such data needs an external data analysis before being implemented in the agent-based models.

For offline runs, waiting for the model to complete a run is not critical. When the model provides real-time predictive analytics, however, the model run turn-around time, including multiple scenarios and Monte Carlo runs, becomes critical. Whether the implementation targets schedule adherence, predictive bottleneck detection, or other efficiency and improvement alerts, two key factors influence the overall solution: the simulation speed, and the model validity and accuracy [2].

To optimize speed, many approaches can be used to distribute and parallelize the model in order to accelerate model execution without affecting the validity of the system. Model accuracy, on the other hand, can be even more difficult to maintain, especially when real-time data is used. The simulation model should be designed or adjusted in a way that allows it to grow with the system; the user should be able to apply real-time constraint changes and, in some cases, expand the model in order to maintain the correct relationship with live data.

The main concept of the online update of the simulation model can be summarized in four basic steps [67], which are displayed in Fig. 7.13.

Fig. 7.13 Proposed methodology to update an agent-based simulation model, modified based on [67]

Step 1

First, the main input parameters to be updated should be chosen, namely those that affect the objectives of the simulation the most. In the simulation model, hundreds of input parameters are used to represent the real system. These parameters can be divided into two types:

  • Real system inputs: the input parameters of the main processes in the simulated system, such as velocities, soil types, the cutting speed, the flow rates of the pumps, etc.

  • Internal input parameters, which are set by the designer to regulate the flow of the process in the simulation model.

Updating all data can be time-consuming and might not be necessary if they do not affect, or have only a negligible effect on, the desired outcomes. It is advisable to perform a sensitivity analysis in order to identify the most essential input parameters that significantly affect the results [67].

Another important process for the update is the collection of real-time data. In mechanized tunneling, hundreds of sensors are used to monitor the production progress, usually with the help of process controlling tools. This data can be requested via API requests and used for the simulation update [43].

Step 2

For the second step, it is important to choose the integration method to implement the new input parameters, using data collected from the real-time configuration, in the simulation model.

In general, two types of data occur in real-time systems and, accordingly, in simulation models. The data can be deterministic, such as the amounts of supplied materials or the fluid level in the tanks, or non-deterministic, such as the duration of segment installation, the advance speed through the different types of soil, and the probability of failure occurrence during production.

Different adaptation methods are required to integrate these data into the simulation model. For deterministic input parameters, simple methods are used, such as converting units, computing mean values, finding maximum or minimum values of datasets, or excluding irregularities.

To update non-deterministic data, a different approach is used. Figure 7.14 illustrates the suggested concept to dynamically update the probability density function (PDF) that was used offline to evaluate the probability of certain parameters. In this figure, the blue curve displays the PDF assumed in planning, which was generated with a probabilistic approach. During the execution of the project, a new set of data, the real-time data, becomes available (green curve). At a given point in time, this data is gathered and analyzed to evaluate the validity of the proposed PDF.

Using both sets of data, an updated posterior PDF (grey curve) can be predicted that is more realistic and gives a better prediction of the project performance in the next phase of execution. This can be done using simple data-fitting methods or more sophisticated methods [43, 67]. In the literature, several approaches have been investigated (e.g. [75, 79]), mainly using a Bayesian approach.
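As one simple realization of such an update, the sketch below treats the planning-phase PDF as a normal prior on a mean process duration and combines it with real-time observations via a conjugate normal-normal Bayesian update. The sources [43, 67] leave the concrete method open, so this is only an assumed instance with invented numbers.

```python
import numpy as np

rng = np.random.default_rng(7)

# Prior from the planning phase: mean duration ~ N(mu0, tau0^2) (assumed).
mu0, tau0 = 40.0, 5.0      # prior mean and std of the mean duration [min]
sigma = 8.0                # assumed known scatter of single durations [min]

# Real-time data recorded during execution (synthetic placeholder).
observations = rng.normal(46.0, sigma, size=25)
n, xbar = len(observations), observations.mean()

# Conjugate normal-normal update: posterior PDF of the mean duration.
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mu = post_var * (mu0 / tau0**2 + n * xbar / sigma**2)

print(f"prior mean = {mu0:.1f} min, data mean = {xbar:.1f} min")
print(f"posterior mean = {post_mu:.1f} min, "
      f"posterior std = {post_var**0.5:.2f} min")
```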

Fig. 7.14 Updating probability distribution functions [67]

Step 3

If the model was originally designed for offline simulation in the planning phase, some modifications might be necessary in this step to adapt it to real-time data and online-update modeling.

In order to update the input parameters, they are connected to a dynamic database. In an agent-based simulation model, each agent has a set of input parameters. The initial values of these inputs are connected to database tables in order to read and write values during the simulation. Extra read/write functions are added to the model's agents to read or write inputs at different points during the execution of the simulation. The same concept is used to register the simulation outcomes before and after the update.
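A minimal sketch of such a database coupling is shown below, using SQLite as a stand-in for whatever database the project actually employs: agent parameters are initialized from a table and written back during the run. The table and column names are invented for the example.

```python
import sqlite3

con = sqlite3.connect("tunnel_sim.db")
con.execute("CREATE TABLE IF NOT EXISTS inputs (name TEXT PRIMARY KEY, value REAL)")
con.execute("INSERT OR REPLACE INTO inputs VALUES ('advance_speed_mm_min', 25.5)")
con.commit()

def read_input(name):
    """Read the current value of an agent input parameter."""
    row = con.execute("SELECT value FROM inputs WHERE name = ?", (name,)).fetchone()
    return row[0]

def write_input(name, value):
    """Write an updated value back, e.g. after a real-time data analysis."""
    con.execute("INSERT OR REPLACE INTO inputs VALUES (?, ?)", (name, value))
    con.commit()

speed = read_input("advance_speed_mm_min")   # agent initialization
write_input("advance_speed_mm_min", 22.0)    # online update during the run
print(f"old = {speed}, new = {read_input('advance_speed_mm_min')}")
```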

Step 4

The last step includes the validation of the concept taking into consideration the non-deterministic nature of the simulation variables.

Validation is required to prove that the model is a sufficient representation of the real system; a verification of the results is carried out after running the simulation by comparing them with real-time results from similar projects [62]. To ensure the validity of a model after the update, different techniques can be used, such as predictive validation, historical data validation, comparison with other models, and fixed-value tests.

For the application of the online-update concept, validation tests were carried out to verify the results. The scheme in Fig. 7.15 gives a rough overview of the suggested validation concept. The concept suggests running the simulation in a first run to get an initial prediction of the project duration using a fixed seed (reproducible simulation runs). At a specific point, the output parameters of the simulation are stored in a database output file and reinserted into a second, updated simulation using the updating concept. The results of these two simulation runs are then compared, expecting a 100 percent correlation. Using this principle, further validation tests have been conducted to also prove the update considering randomness.
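In pseudocode form, this fixed-seed test reduces to running the model twice and asserting identical outputs. The sketch below shows the principle with a placeholder model function; the real test operates on the full AnyLogic model and its database files.

```python
import numpy as np

def run_simulation(seed, inputs):
    """Placeholder for a full simulation run; returns predicted duration [d]."""
    rng = np.random.default_rng(seed)
    return inputs["rings"] * (inputs["cycle_min"] + rng.normal(0, 2)) / (60 * 24)

inputs = {"rings": 1000, "cycle_min": 70.0}

# First run with a fixed seed; the outputs would be stored in a database.
ref = run_simulation(seed=123, inputs=inputs)

# Reinsert the stored inputs into a second, "updated" run with the same
# seed: with an unchanged update, both results must coincide exactly.
updated = run_simulation(seed=123, inputs=dict(inputs))

assert np.isclose(ref, updated), "update mechanism changed the model behavior"
print(f"both runs predict {ref:.1f} days -> update mechanism validated")
```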

The conducted validation tests have shown, that the proposed update methodology produces valid results. Details about the validation results can be found in [67].

The proposed methodology can therefore be used to turn offline simulation models into a useful tool supporting the decision-making process during the execution of a project. First applications to a real reference project have shown that integrating real-time data and updating the simulation models noticeably improves the prognosis of the project duration [43].

Fig. 7.15 Validation concept for integrating real-time data into simulation models, based on [67]

7.2.3.3 Simulation-Based Support of an Incident Analysis

Based on the incident analyses and maintenance investigations carried out, a robust incident management can be supported with current actual information, recognizing problems at an early stage on the basis of recorded actual data and simulation-based updated forecasts.

On the one hand, specific countermeasures and suitable measures in the event of a downtime can be evaluated with the help of updated simulation models. With the help of a classification of different measures and the definition of key values, different scenarios can be simulated and evaluated on the basis of the simulation results before being implemented on site. This supports the steering of the processes and enables the best possible control under the given boundary conditions.

On the other hand, during the progress of the construction work, different tools are used to monitor the workflow and record the parameters of each process. These parameters include the most obvious values, such as the advance speed and the segment installation duration, in addition to hundreds of parameters monitoring the pressures and flow rates of fluids and materials.

A thorough analysis of these data can reveal correlations between these values, as well as correlations between changes in these values and changes in the workflow. Such analyses can explain unexpected delays in processes and tasks caused by unexpected disturbances in another section of a project. Good examples are the relation between blockages in the mortar lines and irregular flows and pressure peaks during annular gap grouting. In projects with high tool wear, a worn cutting wheel can also announce itself through an increased specific energy during excavation.

7.3 Real-Time Prediction of Tunneling-Induced Settlements

Tunneling-induced settlements can be computed by the process-oriented finite element simulation model presented in Sect. 7.2.1.4. To handle the real-time prediction of tunneling-induced settlements for large, numerically expensive models, surrogate models are needed to obtain numerical values quickly enough to make extensive predictions possible. In particular, so-called hybrid surrogate models have proven to be valuable tools in developing such real-time predictors. Also, aspects of prediction with uncertain data, interval data or fuzzy data have been investigated and assessed. Details are given in the following sections.

7.3.1 Hybrid Surrogate Modeling Concept

Mechanized tunneling simulations using the process-oriented FE model are time-consuming and therefore best employed in the design stage of a tunneling project. For the purpose of supporting the steering phase in tunnel construction in real-time, surrogate models are required. Various surrogate modeling approaches have been developed for a large number of engineering applications. In mechanized tunneling, different surrogate modeling approaches can be employed for different specific tasks. In [55], an Artificial Neural Network (ANN) with a feed-forward structure has been adopted as a surrogate model with low-dimensional outputs for deterministic real-time analyses in mechanized tunneling.

With the purpose of delivering real-time predictions of the expected surface settlements at multiple surface locations, the benefits of another type of ANN, called Recurrent Neural Network (RNN), and of the Proper Orthogonal Decomposition (POD) approach are combined within a hybrid RNN-POD surrogate model concept. The hybrid surrogate model, which has a low-dimensional input and a high-dimensional output, can be used for deterministic input-output mapping [12], interval input-output mapping [35] and fuzzy input-output mapping [11]. The concept of the hybrid surrogate modeling approach is explained below.

For predefined sections of a tunneling project, the simulation model needs to map the relationship between the process parameters, the geotechnical parameters and the corresponding time-variant surface settlements. The process parameters are taken to be deterministic values, whereas the geotechnical parameters may be represented as intervals or fuzzy numbers rather than specific values. The general concept of hybrid surrogate models is depicted in Fig. 7.16, with the bold italic texts indicating the additional steps required to handle uncertain input-output data relationships, depending on the type of intervals or fuzzy numbers.

Fig. 7.16
figure 16

Scheme of the hybrid surrogate modeling approach

First, in an offline stage, a representative numerical simulation model for a tunnel drive through a specific tunnel section from time step 1 to \(N\) is set up, see Fig. 7.16a. By varying the deterministic values of input parameters (both geotechnical and process parameters), deterministic output data (surface settlements) are collected. This deterministic input-output data set is utilized to establish a deterministic surrogate model. This surrogate model can be used directly to predict the complete surface settlement field in step \(N+1\) when there is no uncertainty from geotechnical parameters. When dealing with uncertainty of interval data or fuzzy data type, an additional step to compute the system outputs for uncertain geotechnical parameters is required.

Interval (fuzzy) analyses based on the just-built deterministic surrogate model together with an optimization approach, see [34], are performed for predefined intervals (fuzzy numbers) of soil parameters, which are generally retrieved from geotechnical reports. The result obtained from these analyses is used to create the hybrid surrogate model, which is capable of predicting interval (fuzzy) input-output relationships in the online stage for step \(N+1\).

Figure 7.16b shows how to apply the proposed surrogate model in the online stage, i.e. during tunnel construction. The prediction results depend on deterministically chosen values of the steering parameters in time step \(N+1\) and the recorded history from time step 1 to \(N\). The procedure is performed in three consecutive steps. In the first step, an RNN approach is employed to predict the settlement behavior at selected monitoring points for time step \(N+1\). Afterwards, the complete time-variant surface settlement field from step 1 to \(N\) is approximated by the POD-RBF approach, which is a combination of the POD method and Radial Basis Functions (RBF). Finally, a missing-data reconstruction technique (Gappy POD, GPOD) is applied to predict the complete settlement field in time step \(N+1\). For the prediction of interval or fuzzy settlement fields in case of uncertain input parameters, Non-Negative Matrix Factorization (NNMF) [59] is employed together with the POD method. More specifically, in case of deterministic data, a trained RNN and a POD-RBF network are required for steps 1 and 2, respectively, while the GPOD technique is applied in step 3. In case of interval data, adopting the midpoint-radius representation of interval data, two surrogate models are constructed in each of steps 1 and 2 to handle midpoints and radii of the interval input and output data separately. In step 3, the GPOD and NNMF techniques are employed for the reconstruction of midpoints and radii, respectively. Similarly, with the concept of the \(\Delta\) representation for fuzzy numbers, several surrogate models based on both the POD and NNMF methods are constructed for the prediction of fuzzy input-output data relationships. In the following sections, the methods used in the hybrid surrogate modeling approach are explained in more detail.

7.3.1.1 Recurrent Neural Networks

For the prediction of the time-variant surface settlements at selected monitoring points, a type of ANN called Recurrent Neural Network (RNN) is utilized. The method is capable of extrapolating time-variant processes by combining information from the hidden layer of previous time steps with the inputs of the current step to update the values of the hidden layer of the current time step via context neurons. To illustrate the concept of the RNN approach, a simple RNN structure with three layers is considered: an input layer with 2 neurons \({P}_{1}\) and \({P}_{2}\) (representing two common adjustable steering parameters, the tail void grouting pressure and the face support pressure), a hidden layer with \(H\) hidden neurons and an output layer \({S}_{T}\) with \(T\) output neurons (\(T\) being the number of selected points). Input signals of the RNN at each time step \(n\) (the steering parameters \({}^{\left[n\right]}P_{k}\), \(k=1,2\)) are processed layer by layer to get the network outputs (the settlements \({}^{\left[n\right]}{S}_{t}\), \(t=1,\dots,T\)). The signal value \(\nu\) of the hidden neuron \(h\), where \(h=1,\dots,H\), at time step \(n\) and the outputs \(S_{t}\) of the network are computed by

$$\begin{aligned}{}^{[n]}{\nu_{h}} & =\varphi^{1}\Big(\sum_{k=1}^{2}{}^{[n]}{P}_{k}\cdot\omega_{hk}+\sum_{d=1}^{D}{}^{[n-d]}{\nu_{h}}\cdot{}^{d}{c_{h}}+b_{h}\Big),\end{aligned}$$
(7.8)
$$\begin{aligned}{}^{\left[n\right]}{S}_{t} & =\varphi^{2}\Big(\sum_{h=1}^{H}{}^{[n]}\nu_{h}\cdot\omega_{th}+b_{t}\Big).\end{aligned}$$
(7.9)

Here, \(\omega_{hk}\) is the weight from input neuron \(k\) to hidden neuron \(h\), \(\omega_{th}\) is the weight from the hidden neuron \(h\) to the output neuron \(t\), \({}^{[n-d]}{}{\nu_{h}}\) is the output signal of hidden neuron \(h\) at time step \([n-d]\), where \(d=1,\dots,D\) are the time delays, \({}^{d}{}c_{h}\) is the context neuron weight of the delayed time step \(d\) and \(b_{h}\), \(b_{t}\) are additional bias values of hidden neuron \(h\) and output neuron \(t\).

Within the network training, the unknown network parameters, i.e. the synaptic weights \(\omega\), the context weights \(c\) for each delayed time step and the bias values \(b\), are adjusted by iteratively evaluating the error in each training step (epoch). The training process ends when the stopping criterion is met (e.g. training and validation errors smaller than a predefined tolerance, or reaching the maximum number of epochs). In this work, the Levenberg-Marquardt back-propagation algorithm, one of the most widely used methods for non-linear optimization, is adopted for the training of the RNNs.
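To make the signal flow of Eqs. (7.8) and (7.9) concrete, the forward pass of such a simple RNN can be sketched in a few lines of Python. The choice of \(\varphi^{1}=\tanh\) and a linear \(\varphi^{2}\) as well as the zero-initialized hidden history are assumptions for illustration; the weights are taken as given, i.e. already trained:

```python
import numpy as np

def rnn_forward(P, W_in, C, b_h, W_out, b_out, D=1):
    """Forward pass of the simple RNN of Eqs. (7.8)-(7.9).

    P     : (N, 2) steering parameters per time step (grouting, support pressure)
    W_in  : (H, 2) input weights;  C : (D, H) context weights, one row per delay d
    b_h   : (H,)  hidden biases;   W_out : (T, H) output weights;  b_out : (T,)
    Returns the settlements S with shape (N, T)."""
    N, H = P.shape[0], W_in.shape[0]
    nu = np.zeros((N + D, H))              # hidden signals, zero-padded history
    S = np.zeros((N, W_out.shape[0]))
    for n in range(N):
        # context contribution: delayed hidden signals weighted per delay d
        context = sum(C[d - 1] * nu[n + D - d] for d in range(1, D + 1))
        nu[n + D] = np.tanh(W_in @ P[n] + context + b_h)   # Eq. (7.8)
        S[n] = W_out @ nu[n + D] + b_out                   # Eq. (7.9), linear phi^2
    return S
```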

7.3.1.2 Proper Orthogonal Decomposition and Radial Basis Functions

Basically, a high-dimensional matrix \(\mathbf{S}\) can be approximated as a linear combination of the truncated basis vectors \(\hat{\boldsymbol{\Upphi}}\) as

$$\displaystyle\mathbf{S}\approx\hat{\boldsymbol{\Upphi}}\cdot\hat{\mathbf{A}}.$$
(7.10)

The truncated basis vectors are obtained from solving an eigenvalue problem of the covariance matrix

$$\displaystyle\mathbf{C}=\mathbf{S}^{\top}\cdot\mathbf{S}.$$
(7.11)

At this step, the truncated amplitude matrix \(\hat{\mathbf{A}}\) contains constant values associated with the given matrix \(\mathbf{S}\). Hence, it only approximates snapshots that are contained in the original high-dimensional snapshot matrix \(\mathbf{S}\).

To obtain a continuous approximation, each amplitude vector \(\hat{\mathbf{A}}_{i}\) is expressed as a nonlinear function of the input parameters on which the system depends. The amplitudes \(\hat{\mathbf{A}}\) can be related to this function by an unknown matrix of constant coefficients \(\mathbf{B}\) as

$$\displaystyle\hat{\mathbf{A}}_{i}=\mathbf{B}\cdot\mathbf{F}_{i},$$
(7.12)

with \(\mathbf{F}_{i}\) being a set of predefined interpolation functions \(f_{j}({z})\) of input parameters \({z}\). In this work, an inverse multiquadric radial function, a type of RBF (see [40] for a description), is selected as the interpolation function. The output system response corresponding to an arbitrary set of input parameters is thus approximated by

$$\displaystyle\mathbf{S}^{a}\approx\hat{{\boldsymbol{\Upphi}}}\cdot\mathbf{B}\cdot\mathbf{F}^{a}.$$
(7.13)
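The offline construction of the POD-RBF surrogate described by Eqs. (7.10)-(7.13) can be condensed into a short sketch. The eigen-decomposition via the method of snapshots, the shape parameter \(c\) of the inverse multiquadric RBF and the pseudo-inverse used to solve for \(\mathbf{B}\) are illustrative choices, not prescriptions of the reference implementation:

```python
import numpy as np

def build_pod_rbf(S, Z, K, c=1.0):
    """S: (m, n) snapshot matrix, Z: (p, n) inputs of the n snapshots.
    Returns the truncated basis Phi_hat (Eq. 7.10) and coefficients B (Eq. 7.12)."""
    lam, V = np.linalg.eigh(S.T @ S)               # covariance matrix, Eq. (7.11)
    idx = np.argsort(lam)[::-1][:K]                # keep the K largest modes
    Phi_hat = (S @ V[:, idx]) / np.sqrt(lam[idx])  # truncated POD basis
    A_hat = Phi_hat.T @ S                          # amplitudes of Eq. (7.10)
    r2 = ((Z[:, :, None] - Z[:, None, :]) ** 2).sum(axis=0)
    F = 1.0 / np.sqrt(r2 + c**2)                   # inverse multiquadric RBF
    B = A_hat @ np.linalg.pinv(F)                  # solve A = B.F for B
    return Phi_hat, B

def pod_rbf_predict(Phi_hat, B, Z, z_new, c=1.0):
    """Online evaluation of Eq. (7.13) for an arbitrary parameter set z_new."""
    F_a = 1.0 / np.sqrt(((Z - z_new[:, None]) ** 2).sum(axis=0) + c**2)
    return Phi_hat @ (B @ F_a)
```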

7.3.1.3 Gappy Proper Orthogonal Decomposition and Non-Negative Matrix Factorization

To predict the complete settlement field, two dimensionality reduction techniques, namely Gappy Proper Orthogonal Decomposition (GPOD) and Non-Negative Matrix Factorization (NNMF), are adopted in the context of missing-data reconstruction problems. The GPOD method, a combination of the basic POD method with a linear regression [30], is applied in case of an unconstrained output prediction, i.e. in case of a deterministic input-output relationship. In contrast, in case of uncertainties quantified as intervals or fuzzy data, the NNMF method [59] is utilized to guarantee the non-negativity constraints of the reconstructed results.

A complete snapshot \(\mathbf{S}_{j}\), which belongs to a set of snapshots, can be approximated as a linear combination of the first \(K\) POD basis vectors \(\hat{\boldsymbol{\Upphi}}\) and an amplitude vector \(\hat{\mathbf{A}}_{j}\). The amplitude vector is calculated by minimizing the error norm

$$\displaystyle\min\|\mathbf{S}_{j}-\hat{\boldsymbol{\Upphi}}\cdot\hat{\mathbf{A}}_{j}\|^{2}_{L_{2}}.$$
(7.14)

In case of an incomplete data snapshot \(\mathbf{S}^{*}\), the same least squares approach can be effectively used to restore the missing data by

$$\displaystyle\min\|\mathbf{S}^{*}-\hat{\boldsymbol{\Upphi}}\cdot\hat{\mathbf{A}}^{*}\|^{2}_{L_{2}}.$$
(7.15)

However, due to the missing elements, the \(L_{2}\) norm cannot be evaluated correctly. The GPOD procedure therefore works with a gappy norm based on the available data only. The missing data problem is solved by computing the intermediate repaired vector \(\widetilde{\mathbf{S}}^{*}\) in terms of the truncated POD basis vectors \(\hat{\boldsymbol{\Upphi}}\) and the associated amplitude vector \(\hat{\mathbf{A}}^{*}\) as

$$\displaystyle\widetilde{\mathbf{S}}^{*}\approx\hat{\boldsymbol{\Upphi}}\cdot\hat{\mathbf{A}}^{*}.$$
(7.16)

The coefficient vector \(\hat{\mathbf{A}}^{*}\) can be computed by minimizing the error between the intermediate vector \(\widetilde{\mathbf{S}}^{*}\) and the available vector \(\mathbf{S}^{*}\) using the solution of a least squares problem given by a linear system of equations

$$\begin{aligned}\mathbf{M}\cdot\hat{\mathbf{A}}^{*} & =\mathbf{R},\end{aligned}$$
(7.17)
$$\begin{aligned}\mathbf{M} & =({\hat{\boldsymbol{\Upphi}}}^{\top},\hat{\boldsymbol{\Upphi}}),\end{aligned}$$
(7.18)
$$\begin{aligned}\mathbf{R} & =({\hat{\boldsymbol{\Upphi}}}^{\top},\mathbf{S}^{*}).\end{aligned}$$
(7.19)

Given a non-negative matrix \(\mathbf{S}^{+}\), the NNMF algorithm searches for two non-negative matrices \(\mathbf{W}\) and \(\mathbf{H}^{+}\) that satisfy the following optimization problem

$$\displaystyle\min\;\frac{1}{2}\|\mathbf{S}^{+}-\mathbf{W}\cdot\mathbf{H}^{+}\|^{2}_{L_{2}}\quad\textrm{subject to}\quad\mathbf{W},\mathbf{H}^{+}\geq 0.$$
(7.20)

The alternating non-negative least squares algorithm proposed in [44], which ensures the convergence of the minimization problem, is implemented to find \(\mathbf{W}\) and \(\mathbf{H}^{+}\). The reconstruction procedure for a non-negative vector \(\mathbf{S}^{+}\) can now follow the steps of the GPOD method. Similar to the POD approach, the objective function containing the distances between the available incomplete data vector and the predicted vector is minimized. The amplitude vector \(\mathbf{H}^{+}\) is obtained under the non-negativity constraint by solving the non-negative least squares problem

$$\begin{aligned}\mathbf{M}^{+}\cdot\mathbf{H}^{+} & =\mathbf{R}^{+},\end{aligned}$$
(7.21)
$$\begin{aligned}\mathbf{M}^{+} & =(\mathbf{W}^{\top},\mathbf{W}),\end{aligned}$$
(7.22)
$$\begin{aligned}\mathbf{R}^{+} & =(\mathbf{W}^{\top},\mathbf{S}^{+}),\end{aligned}$$
(7.23)

where the non-negative basis matrix \(\mathbf{W}\) is extracted from the available non-negative data matrix \(\mathbf{S}^{+}\). Finally, by replacing the missing elements in \({\mathbf{S}}^{*}\) and \(\mathbf{S}^{+}\) by those in the corresponding reconstructed vectors, the complete unconstrained and non-negative constrained vectors of the system response are reconstructed.
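The reconstruction step of Eqs. (7.16)-(7.23) may be clarified by the following sketch. The truncated basis \(\hat{\boldsymbol{\Upphi}}\) and the non-negative basis \(\mathbf{W}\) are assumed to be precomputed; using scipy.optimize.nnls as the non-negative least squares solver is an assumption standing in for the alternating scheme of [44]:

```python
import numpy as np
from scipy.optimize import nnls

def gpod_reconstruct(Phi_hat, s_star, mask):
    """GPOD: restore a snapshot with missing entries (mask is True where data exists)."""
    Phi_a = Phi_hat[mask]                    # rows belonging to available entries
    M = Phi_a.T @ Phi_a                      # Eq. (7.18)
    R = Phi_a.T @ s_star[mask]               # Eq. (7.19)
    A_star = np.linalg.solve(M, R)           # Eq. (7.17)
    s_full = Phi_hat @ A_star                # Eq. (7.16)
    s_full[mask] = s_star[mask]              # keep the measured entries
    return s_full

def nnmf_reconstruct(W, s_plus, mask):
    """Non-negative counterpart of Eqs. (7.21)-(7.23), e.g. for interval radii."""
    H_plus, _ = nnls(W[mask], s_plus[mask])  # non-negative least squares
    s_full = W @ H_plus
    s_full[mask] = s_plus[mask]
    return s_full
```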

7.3.2 Surrogate Models for Real-Time Prediction with Deterministic Data

In this section, the performance of the proposed hybrid surrogate model for deterministic data is demonstrated by means of an application concerned with the numerical simulation of the advancement process of a TBM driven tunnel. The main goal is to demonstrate the capability of the surrogate model to provide reliable predictions of the expected settlements induced by mechanized tunneling. To this end, the model predictions will be evaluated by comparing the predictions with reference results obtained from the original process-oriented finite element model ekate. Considering \({S_{i}}\) as the settlement at point \(i\) of the settlement field and \(M\) as the number of outputs of the surrogate model, the error \(E\) between the prediction and the FE result is calculated using the \(L_{2}\) norm error equation,

$$\displaystyle E=\sqrt{\frac{\sum_{i=1}^{M}\left(\mathbf{S}_{i}^{\text{FE}}-\mathbf{S}_{i}^{\text{pred}}\right)^{2}}{\sum_{i=1}^{M}\left(\mathbf{S}_{i}^{\text{FE}}\right)^{2}}}\times 100\%\;.$$
(7.24)
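Expressed in code, Eq. (7.24) is a small helper; the following minimal sketch assumes both settlement fields are given as NumPy arrays:

```python
import numpy as np

def l2_error(s_fe, s_pred):
    """Relative L2 norm error of Eq. (7.24) in percent."""
    s_fe, s_pred = np.asarray(s_fe), np.asarray(s_pred)
    return 100.0 * np.sqrt(np.sum((s_fe - s_pred) ** 2) / np.sum(s_fe ** 2))
```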
Fig. 7.17
figure 17

Simulation model of a tunnel section with deterministic input parameters. a Model geometry, b investigated surface area for settlement prediction

A numerical model representing a tunnel section of 48 meters length, constructed with an overburden of 8.5 meters by a TBM, is depicted in Fig. 7.17a. The model is discretized with 11,072 quadrilateral two-field finite elements with quadratic approximations for the displacements and linear approximations for the water pressure. The tunnel lining has a thickness of 0.3 meters and each lining ring has a length of 1.5 meters. The simulated tunnel has a diameter of 8.5 meters and is excavated completely within the first soil layer of a ground model comprising two layers of soft cohesive soils. The tunnel alignment is assumed to follow an existing street on the ground surface of an urban area, with two existing buildings on one side of the street. To consider these buildings, rectangular plate-like substitute models with an equivalent thickness of 5 meters and a stiffness of 50 GPa are adopted at the top surface, see Fig. 7.17. A step-by-step procedure of the mechanized tunneling process is considered in the simulation, which consists of different phases: soil excavation at the tunnel face, application of the face support pressure, advancement of the TBM, installation of a tunnel ring and application of the tail void grouting pressure.

The groundwater table is assumed to be at the ground surface. A Drucker-Prager plasticity model is selected for modeling the constitutive behavior of the soil, while the material behavior of the tunnel lining and the TBM shield is assumed to be linear elastic. During the tunnel advance, the support pressure, which plays an important role in avoiding tunnel face collapse, is kept constant at a value of 180 kPa at the tunnel axis. In contrast, the tail void grouting pressure \(GP\), which fills the annular gap and is necessary to prevent large deformations of the surrounding soil and large settlements at the ground surface, is simulated as a time-variant process parameter. In this example, this controllable operational parameter \(GP\) is regarded as one of the input parameters of the surrogate model. Another input parameter is the elastic modulus \(E_{1}\), which defines the stiffness of the first soil layer. The outputs of the hybrid RNN-GPOD model are the vertical displacements (Z-direction) of surface points. Instead of taking the settlements of all surface points as model outputs, only the settlements of 105 points within an effective area, as depicted in Fig. 7.17b, are considered for the generation of the surrogate model, since surface settlements beyond a distance of 42 meters in Y-direction from both sides of the tunnel axis are almost zero.

To demonstrate the steering support concept, the proposed hybrid surrogate model is adopted to predict the complete surface displacement field in the subsequent excavation steps under the assumption that the history of the TBM advance operation and the settlement evolution are available. The prediction is made based on the known history of the excavation process together with possible future values of the steering parameters. In this example, the prediction is performed for step 23, assuming that the TBM has currently advanced to the \(22^{\text{nd}}\) step of the excavation process. Outputs of the employed RNN are the settlements of 11 selected monitoring points. The positions of the 11 points are selected based on the usual positions of settlement measurement sensors on the surface in a real tunnel project. Regarding the number of monitoring points, the prediction quality of the GPOD would theoretically improve with a larger number of available data points; however, the training and prediction of the RNN might become more complicated. Therefore, the number of monitoring points in the hybrid modeling approach is determined considering both conditions, such that the RNN is capable of providing good predictions while the GPOD retains an appropriate accuracy.

Within the range of the two investigated parameters \(E_{1}\) and \({}^{[n]}GP\), ten particular values of \(E_{1}\) and six scenarios of the time-varying \({}^{[n]}GP\) are defined, which constitutes an input space of 60 sampling points. Each sampling point is a combination of one value of \(E_{1}\) and an applied scenario of the grouting pressure \({}^{[n]}GP\). In total, 60 simulations are carried out using the FE model described in Sect. 6.4.3 to create the surrogate model. The specific variation ranges of the two input parameters are 20 to 110 MPa for \(E_{1}\) and 130 to 230 kPa for \({}^{[n]}GP\), respectively. To validate the prediction capability of the surrogate model, the FE simulation data set is divided into training and testing data sets. The data from 54 randomly selected simulations is used for the training and the data from the remaining 6 simulations is employed for the testing of the proposed surrogate model. Figure 7.18 presents a comparison between the prediction results from the components of the hybrid surrogate model and the reference FE solutions for a representative validation case with \(E_{1}=90\) MPa. The settlement prediction accuracy for the 105 points over all previous excavation steps (steps 1 to 22), i.e. a total of 2310 values, using the POD-RBF model, is shown in Fig. 7.18a. The prediction quality for the 11 selected points by the RNN and for the 105 surface points at excavation step 23 by the hybrid RNN-GPOD technique is presented in Figs. 7.18b and 7.18c, respectively. It can be concluded that the settlement prediction obtained from the hybrid surrogate model agrees very well with the reference solutions from the FE simulations, as indicated by the values of the \(R^{2}\) coefficients; the closer \(R^{2}\) is to 1, the better the prediction. The \(L_{2}\) norm error is only \(5\%\), whereas the computation time is reduced significantly from 6 hours for a FE simulation to less than one second with the surrogate model. This enables the real-time application of the presented approach for the selection of appropriate steering parameters during tunnel construction to keep the surface settlement within a tolerable range.

Fig. 7.18
figure 18

Regression plot between predicted settlements from surrogate models and reference settlements from FE simulations. a POD-RBF model, b RNN model, c RNN-GPOD model

7.3.3 Surrogate Models for Real-Time Prediction with Uncertain Data

The concept of performing reliability analyses of mechanized tunneling processes during the construction stage, i.e. to support decision making in the steering phase, is presented by means of a synthetic example.

7.3.3.1 Real-Time Prediction with Interval Data

Reliability analyses can be carried out taking into account polymorphic uncertain data using different approaches: stochastic, interval and combined interval-stochastic approaches using the hybrid surrogate model with a deterministic input-output data relationship presented in Sect. 7.3.2 [34]. In this section, similar analyses are re-performed with the proposed surrogate modeling strategy for interval data. More details about the proposed strategy can be found in [35], while a detailed description of the representative numerical model can be found in Sect. 7.3.2. The prediction of interval surface settlement fields resulting from interval geotechnical data in a mechanized tunneling process is illustrated by extending the analysis of the synthetic example in Sect. 7.3.2. The direct interval results obtained from the proposed strategy are compared, in terms of prediction accuracy and computation time, with the reference solution based upon the deterministic surrogate model and an optimization approach within an interval analysis. Similar to the example in Sect. 7.3.2, data obtained from FE simulations is utilized for the generation of a deterministic surrogate model. The interval settlement results computed from the deterministic surrogate model and an optimization approach are then employed to train the proposed hybrid RNN-GPOD surrogate model for interval data. As a result, by adjusting the controllable steering parameters, the corresponding interval bounds of the surface settlement field for further time steps of the mechanized tunneling process are quickly predicted.

Instead of the deterministic value used in the example in Sect. 7.3.2, the modulus of elasticity of the first soil layer \(E_{1}\) is assumed to be quantified by an interval \(\bar{E}_{1}=[45;52]\) MPa in this example. To construct a data set of interval settlements, which can be used for the generation of a surrogate model for an interval input-output data relationship, 100 interval analyses using the deterministic surrogate model together with the Particle Swarm Optimization (PSO) approach are executed. Using the midpoint-radius representation of interval data, the interval data set is divided into two sub data sets for midpoints and radii. Subsequently, the midpoints and the radii of the settlements at the selected monitoring points are predicted by two individual deterministic RNNs. Regression values between the target values and the predicted values from the trained RNNs for midpoint and radius are \({}^{\text{mid}}R=0.9999\) and \({}^{\text{rad}}R=0.99761\), which shows a very good prediction capability of the two trained RNNs. For the reconstruction of the complete interval settlement field in the next time step, i.e. step 23, the GPOD method is employed to predict the midpoints of the interval settlements, whereas the NNMF is utilized for the prediction of the radii of the interval settlements to satisfy the non-negativity constraint.
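The midpoint-radius treatment of the interval data can be outlined as follows. The callables rnn_mid and rnn_rad stand for the two hypothetical trained deterministic RNNs mentioned above; clipping the radii at zero mirrors the non-negativity constraint later enforced by the NNMF:

```python
import numpy as np

def to_mid_rad(lower, upper):
    """Midpoint-radius representation of an interval data set."""
    return 0.5 * (lower + upper), 0.5 * (upper - lower)

def predict_interval(rnn_mid, rnn_rad, x):
    """Interval settlement bounds from two separately trained surrogates."""
    mid = rnn_mid(x)                      # midpoint prediction
    rad = np.maximum(rnn_rad(x), 0.0)     # radii must remain non-negative
    return mid - rad, mid + rad           # lower and upper bounds
```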

Fig. 7.19
figure 19

Interval settlement predictions. a Interval settlement field with \(\bar{E}_{1}=[45;52]\) MPa, b P-boxes of the settlement at a monitoring point with \(E_{1}\) as an interval stochastic number \(\mu_{1}=[40;50]\) MPa and \(\sigma=5\) MPa

The computed interval settlement field is represented by its lower and upper bounds as shown in Fig. 7.19a. The proposed prediction strategy shows a good accuracy with respect to the reference solution for both the upper and lower bounds. The \(L_{2}\) norm errors are 6.2% and 8.9% for the lower and upper bounds, respectively. The interval settlement field is predicted by the proposed approach in only one step with a computation time of less than a second, whereas it requires around 1.5 hours to obtain the settlement bounds for all surface points of the field using the optimization approach. The considerable reduction in computation time is thus the most important and attractive benefit of the proposed approach. In addition, with a largest absolute prediction error among all monitoring points of just 1.9 mm, the proposed surrogate model shows a promising capability for practical applications.

Another comparison is carried out for a reliability analysis based on the p-box approach. In this approach, the elastic modulus \(E_{1}\) is treated as a normally distributed interval stochastic number with interval mean value \(\mu_{1}=[40;50]\) MPa and deterministic standard deviation \(\sigma=5\) MPa. A reliability analysis is performed using an interval Monte Carlo simulation with 1,000 interval samples. For comparison purposes, the input samples are used for both approaches: directly within the surrogate model for interval data (midpoint-radius representation) and within the traditional approach combining the PSO optimization approach and the deterministic surrogate model to predict the bounding distributions of the settlements.

The bounds of the cumulative distribution functions (p-box) of the settlement at one representative monitoring point obtained by both approaches are shown in Fig. 7.19b. As compared to the classical optimization approach, the p-box of the settlements obtained by the new surrogate model shows an appropriate performance with relative \(L_{2}\) norm errors of 5% and 3% for the lower and upper bounds of the cumulative distribution functions, respectively. It should be noted that a computation time of 12 hours is required to obtain the p-box with the optimization approach, while only 10 minutes are required employing the proposed interval surrogate model. This significant time reduction leads to a much greater efficiency in case of reliability analyses with high dimensional outputs, where optimization runs would be required to obtain the interval bounds of each output.
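A condensed sketch of the interval Monte Carlo procedure behind the p-box comparison is shown below. Here, interval_surrogate is a hypothetical trained interval surrogate that maps an interval sample of \(E_{1}\) to the settlement bounds at a monitoring point; the distribution parameters follow the example above:

```python
import numpy as np

rng = np.random.default_rng(0)

def settlement_pbox(interval_surrogate, mu=(40.0, 50.0), sigma=5.0, n=1000):
    """Interval Monte Carlo: each standard-normal draw xi yields an interval
    sample of E1 (interval mean, deterministic sigma); the interval surrogate
    returns per-sample settlement bounds, whose sorted values form the p-box."""
    lows, highs = [], []
    for xi in rng.standard_normal(n):
        e1 = (mu[0] + sigma * xi, mu[1] + sigma * xi)   # interval sample in MPa
        s_lo, s_hi = interval_surrogate(e1)
        lows.append(s_lo)
        highs.append(s_hi)
    prob = np.arange(1, n + 1) / n                      # empirical CDF levels
    return np.sort(lows), np.sort(highs), prob
```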

7.3.3.2 Real-Time Prediction with Fuzzy Data

In case of uncertainties quantified as fuzzy data, the hybrid surrogate model can also be used to predict time-variant fuzzy settlement fields of a mechanized tunneling process. A simulation model for a tunnel section of 144 m length constructed by a TBM is shown in Fig. 7.20a. In this example, the TBM is assumed to advance under an existing railway system. Two rail tracks, which are embedded on a compacted ballast layer, are situated on the ground surface. When the machine advances under the railway system, it is essential to minimize the effects of the tunneling process on the existing surface infrastructure. Therefore, the simulation-based real-time surface settlement predictions and reliability analyses performed in this section can support the TBM driver in selecting appropriate steering parameters for the advancement of the TBM.

Fig. 7.20
figure 20

Simulation model of a tunnel section with soil parameters as fuzzy data. a Model geometry, b modulus of elasticity \(E_{1}\) as a fuzzy number with 2 \(\alpha\)-cuts \(\tilde{E}_{1}=\langle 52,60,70,75\rangle\) MPa

The elastic modulus of the top soil layer (the low terrace gravel) \(E_{1}\) is considered as an uncertain parameter defined by a fuzzy number with 2 \(\alpha\)-cuts, i.e. \(\tilde{E}_{1}=\langle 52,60,70,75\rangle\) MPa, as shown in Fig. 7.20b. A set of 154 selected nodes from the FE mesh constitutes the investigated surface area. The respective settlements of these nodes are considered as outputs of the surrogate model. Inputs of the surrogate model are the two operational steering parameters, the grouting pressure \(GP\) and the face support pressure \(SP\). With the assumption that the TBM is currently preparing to underpass the railway, which corresponds to the \(36^{\text{th}}\) step of the excavation process, different steering scenarios can be investigated to support the advancement. The surrogate model is thus employed to quickly predict the propagated fuzzy settlements of the 154 surface nodes at time step 37 for changes of the support and grouting pressures, considering the uncertainty of the modulus of elasticity of the top soil layer \(E_{1}\) represented as fuzzy data. More details about the surrogate model generation, data split and model validation results can be found in [11].

Four surfaces representing the predicted fuzzy settlements of the complete surface field at time step 37 are depicted in Fig. 7.21a. Considering the settlement predictions from all validation cases, it can be seen that the surrogate model provides prediction results in good agreement with the reference solutions from pre-performed fuzzy analyses. Specifically, the average relative \(L_{2}\) errors are around 6% for the inner \(\alpha\)-cut 2 and 7.5% for the outer \(\alpha\)-cut 1. Similar to the conclusions from the example with interval data, the biggest advantage of the proposed approach is the considerable reduction in computation time. For a fuzzy input number with two \(\alpha\)-cuts, the resulting fuzzy settlement field is predicted in only 2 seconds, instead of the around 8 hours required by the fuzzy analysis with the \(\alpha\)-cut optimization approach for the 154 outputs.

Fig. 7.21
figure 21

Fuzzy settlement predictions. a Fuzzy settlement field with \(\tilde{E}_{1}=\langle 52,60,70,75\rangle\,\mathrm{MPa}\), b P-boxes of the settlement at a monitoring point with \(E_{1}\) as a fuzzy number and \({}^{[n]}GP\), \({}^{[n]}SP\) as stochastic processes following a Gaussian distribution with mean values of \({}^{[n]}GP=170\) kPa and \({}^{[n]}SP=150\) kPa

Using the proposed fuzzy surrogate model, reliability analyses are now performed considering polymorphic uncertain data with the p-box approach. In this approach, the grouting pressure \({}^{[n]}GP\) and the support pressure \({}^{[n]}SP\) are treated as stochastic processes assumed to follow a Gaussian distribution, while the modulus of elasticity \(E_{1}\) is regarded as a fuzzy number with two \(\alpha\)-cuts (\(\tilde{E}_{1}=\langle 52,60,70,75\rangle\) MPa). Both pressure distributions are assumed to have the same standard deviation of \(\sigma=30\) kPa, whereas the mean values of \({}^{[n]}GP\) and \({}^{[n]}SP\) are 170 kPa and 150 kPa, respectively. The reliability analysis is carried out using a Monte Carlo simulation with 1000 samples.

For illustration, the potential behavior of a surface point lying ahead of the TBM, which belongs to the intersection line between the tunnel alignment and the railway, is investigated based on the reliability results. The minimum and maximum cumulative distribution functions of the settlement of the investigated point at time step 37 are depicted as four curves in Fig. 7.21b. The curves correspond to the two nested intervals of \(\tilde{E}_{1}\) obtained from the classical optimization approach and from the proposed surrogate model. As compared to the probability boxes obtained from the optimization approach, the boxes produced by the surrogate model are in very good agreement, with an average relative error of around 2.5% over all curves. Moreover, even without the help of parallelization techniques, the computation time for an analysis considering a fuzzy input number with two \(\alpha\)-cuts drops dramatically to just 20 minutes, instead of around one day in case of the optimization approach. For reliability analyses with high dimensional outputs, where a fuzzy analysis is required to produce the fuzzy bounds for each output, the proposed surrogate model approach leads to even more impressive efficiency gains. The method thus makes it possible to quickly investigate the consequences of certain process parameters on the expected settlements in the subsequent excavation stages, which opens up an opportunity for real-time predictions to support the machine driver in steering the TBM.

7.4 Real-Time Prediction of Building Damage

To provide adequate real-time predictions of building damage, a variety of models to assess building damage have been investigated. In particular, models based on the finite element method, which are replaced by artificial neural networks (ANNs) for real-time applications, have proven to be very useful.

The tunneling-induced building damage risk can be quantified by comparing the maximum of the calculated structural strains with limiting strains corresponding to different kinds of damage (micro-cracking or macro-cracking).

7.4.1 Models for Building Damage Assessment

The theory of sensitivity analysis is a key concept in damage prediction and, together with ANNs, specifically feed-forward neural networks, forms the basis of the research in this project, as described in the following sections.

7.4.1.1 Finite Element Model of the Considered Building

To determine the structural damage of the building, the maximum strains are used regardless of their location within the building. Basically, the building is idealized by means of shell elements for the masonry and slab elements for the reinforced concrete (Fig. 7.22, left).

Fig. 7.22
figure 22

Building model (left); material model in compression and tension for concrete (top right) and masonry (center right); consideration of settlements (bottom right) [13]

The behavior of concrete under compression as well as in tension is modeled according to Eurocode II [22] and the Model Code [32] (Fig. 7.22, top right). In compression, \(f_{c}\) defines the maximum strength at the corresponding strain \(\varepsilon_{c}\), and \(\varepsilon_{c,\text{lim}}\) defines the maximum strain. The biaxial material behavior of masonry and concrete is modeled using the failure curves of Kupfer et al. [48, 81]. Under tension, the behavior of the reinforced concrete is assumed to be uncracked up to the stress \(\sigma_{sr}\). This is followed by cracking, in which the concrete contributes up to a stress of approximately \(1.3\sigma_{sr}\). Beyond this point, only the reinforcement bears loads, up to the yield stress \(f_{y}\). If \(f_{y}\) is exceeded, yielding occurs until the ultimate stress \(f_{t}\) is reached.

An isotropic damage model is used to account for the nonlinear material behavior of masonry. This model is parabolic in compression and linear in tension (Fig. 7.22, center right). The maximum compressive strength \(f_{m}\) and the corresponding strain \(\varepsilon_{f}\) as well as the failure strain \(\varepsilon_{u}\) vary for different masonry types. An overview of different masonry types can be found in [42]. The tensile behavior of masonry is assumed to be linear-elastic up to the tensile strength \(f_{mt}\) and can be described by Hooke’s law. The gradient of the tensile branch can be estimated using the elastic modulus under compression \(E_{0m}\) [4]. Due to the brittle character of masonry, usually no tensile softening is applied [37]. Numerous recalculations have shown that the model is capable of predicting the load-bearing behavior in good agreement with experiments [71].

The support type at the footing depends on the settlements determined beforehand. In case the settlements were estimated using simple analytical models, i.e. when no soil-structure interaction (SSI) is considered for the settlement prediction, the bearing should be idealized using nonlinear springs [3, 52]. These springs consider the calculated settlement as an initial gap and provide a realistic representation of the soil [71]. To estimate the spring stiffness of the soil, for example, Pasternak’s model [60] can be used.

In case the settlements were estimated via numerical models, taking an existing SSI into account, the supports should be idealized as fixed bearings (Fig. 7.22, bottom right). In this case, the settlements \(s_{i}\) act as constrained variables on the building by displacing the supports. Here, the use of springs would result in a further load redistribution in the building and thus cause lower strains.

Altogether, the model defines 21 independent parameters, listed in Table 7.2; 12 capture the geometry, 8 the material properties and one the loading. Due to the variable window sizes in the facade, the variations of the widths and heights are documented in rows 5 and 6 instead of the true dimensions. Moreover, the table introduces abbreviations and summarizes sources and corresponding distribution functions for all parameters. Characteristic intervals or distribution properties are documented, too. Additionally, the material model accounts for the Young’s moduli of concrete and brickwork; both are modeled as fully correlated to the material’s compressive strength and thus excluded from the list.

Tab. 7.2 Stochastic characteristics of input parameters

7.4.1.2 Fundamentals of Sensitivity Analysis

By means of sensitivity analysis (SA), the impact of the input parameters’ variance on the total variance of the result can be highlighted. It often serves to eliminate irrelevant scattering input from complex models beforehand and improves the efficiency of computation. This global approach accounts for variation in all input parameters at once and thus captures potential parameter interactions, too. Another advantage is model independence: elementary effects can be applied to arbitrary models. By contrast, alternative approaches such as regression analysis or sigma-normalized derivatives work with linear models only [68]. Elementary effects require fewer simulations to estimate the sensitivities of computationally intensive models, but deliver qualitative results only, whereas Sobol’ indices quantitatively evaluate the relative impact of the input variance on the output variance [10, 73]. A comparison between the elementary effects and the Sobol’ indices was performed in [57].

7.4.1.3 Basics of the Elementary Effects Method

The method of elementary effects [53] is based on successive variation of the input parameters and quantifies their impact on the result. Basically, a model \(Y\) with \(k\) independent input parameters \(X_{i},i=1,\ldots,k\) is considered. The input is normalized and spans a \(k\)-dimensional unit hypercube \(\Omega\), which is discretized into a \(p\)-level grid. The discretization determines the step size \(\Delta=p/\left(2p-2\right)\) by which each parameter is varied in random order [68]. The variation of each parameter delivers a trajectory that might be imagined as a virtual diagonal between start and end point. For a total of \(r\) trajectories, the elementary effect \(E\!E_{i}\) is

$$\displaystyle E\!E_{i}=\dfrac{Y\left(X_{1},\dots,X_{i}+\Delta,\dots,X_{k}\right)-Y\left(X_{1},\dots,X_{i},\dots,X_{k}\right)}{\Delta}.$$
(7.25)

Next, the sensitivity measures are obtained from Eqs. 7.26 to 7.28. The mean \(\mu\) and the absolute mean \(\mu^{*}\) reflect a parameter’s mean impact, where the absolute mean excludes potential misinterpretation due to signs [9]. The variance \(\sigma^{2}\) covers nonlinear effects and interactions with other parameters, so we have

$$\begin{aligned}\mu_{i} & =\dfrac{1}{r}\sum_{j=1}^{r}E\!E_{i}^{j},\end{aligned}$$
(7.26)
$$\begin{aligned}\sigma_{i}^{2} & =\dfrac{1}{r-1}\sum_{j=1}^{r}\left(E\!E_{i}^{j}-\mu_{i}\right)^{2},\end{aligned}$$
(7.27)
$$\begin{aligned}\mu_{i}^{*} & =\dfrac{1}{r}\sum_{j=1}^{r}|E\!E_{i}^{j}|.\end{aligned}$$
(7.28)

Eq. 7.29 describes the procedure to generate randomly distributed parameters within the associated limits. Therein, \(\mathbf{J}_{k+1,k}\) and \(\mathbf{J}_{k+1,1}\) denote a matrix and a vector of ones, respectively. The input to generate trajectories is summarized in the vector \(\mathbf{x}^{*}\); its entries are picked at random from the set \(\left\{0,1/(p-1),2/(p-1),\ldots,1-\Delta\right\}\). \(\mathbf{B}\) symbolizes a lower triangular matrix of ones. The diagonal matrix \(\mathbf{D}^{*}\) is an identity matrix with random signs \((+,\,-)\), each occurring with equal probability. The matrix \(\mathbf{P}^{*}\) permutes the order in which the parameters are augmented by \(\Delta\). Thus, the sample matrix \(\mathbf{B^{*}}\) can be calculated as

$$\displaystyle\mathbf{B^{*}}=\left(\mathbf{J}_{k+1,1}\cdot\mathbf{x}^{*}+\left(\Delta/2\right)\left[\left(2\mathbf{B}_{k+1,k}-\mathbf{J}_{k+1,k}\right)\mathbf{D}_{k,k}^{*}+\mathbf{J}_{k+1,k}\right]\right)\mathbf{P}_{k,k}^{*}.$$
(7.29)

Eq. 7.29 can be extended to grouped input, too. Frequently, several input parameters are grouped when suspected to be irrelevant. The initial procedure of obtaining the elementary effect according to Eq. 7.25 by subtracting the functional value at \(X\) from the functional value at \(X+\Delta\) cannot be applied to grouped parameters, since they would be altered in different directions. Thus, [9] recommends using the absolute mean \(\mu_{i}^{*}\) instead. Doing so, it immediately becomes impossible to identify nonlinear relationships or interactions among input parameters. However, investigations in [68] show that the differences between \(\mu_{i}^{*}\) and \(\sigma_{i}\) are negligible and nevertheless document sufficiently precise statements. Only the sample matrix \(\mathbf{B^{*}}\) must be adjusted to a grouped one (index \(gr\)) according to Eq. 7.30 to get the elementary effects. Therein, \(\mathbf{G}\) is a group matrix with \(k\) rows and \(g\) columns, where \(g\) denotes the number of groups. If an input parameter \(X_{i}\) belongs to group \(j\), the element \(G_{i,j}\) of the group matrix is 1 and 0 otherwise. If \(g=k\), each parameter has its own group and the sampling delivers the same results as the original approach according to Eq. 7.29. We have

$$\displaystyle\mathbf{B^{*}_{gr}}=\mathbf{J}_{g+1,1}\cdot\mathbf{x}^{*}+\left(\Delta/2\right)\left[\left(2\mathbf{B}_{g+1,g}\left(\mathbf{G}_{k,g}\mathbf{P}_{g,g}^{*}\right)^{T}-\mathbf{J}_{g+1,k}\right)\mathbf{D}_{k,k}^{*}+\mathbf{J}_{g+1,k}\right].$$
(7.30)

In case of unbounded distribution functions, such as the Gaussian distribution, the tails must be truncated. The \(\pm\,\infty\) unit space is mapped onto the new limits by means of Eq. 7.31. Here, \(\mathbf{B}^{*}\) may be inserted according to Eq. 7.29 or Eq. 7.30, respectively. The upper quantiles \(Q_{u}\) and the lower quantiles \(Q_{l}\) can be chosen arbitrarily; most frequently, the 0.5% and 99.5% quantiles are picked. Subsequent evaluation of the inverse cumulative distribution function \(F^{-1}\) delivers appropriate values for \(\mathbf{B}^{*}_{\text{new}}\),

$$\displaystyle\mathbf{B}^{*}_{\text{new}}=F^{-1}\left(\mathbf{B}^{*}\cdot\left[Q_{u}-Q_{l}\right]+Q_{l}\right).$$
(7.31)
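The trajectory generation of Eq. 7.29 and the measures of Eqs. 7.25-7.28 can be condensed into the following sketch. The grid level \(p=4\), the number of trajectories \(r=4\) and the model callable are illustrative assumptions; group handling and tail truncation (Eqs. 7.30 and 7.31) are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

def morris_trajectory(k, p=4):
    """One random trajectory through the p-level grid, Eq. (7.29)."""
    delta = p / (2 * (p - 1))
    levels = np.arange(0.0, 1.0 - delta + 1e-9, 1.0 / (p - 1))
    x_star = rng.choice(levels, size=k)            # random start point
    B = np.tril(np.ones((k + 1, k)), -1)           # lower triangular ones
    J = np.ones((k + 1, k))
    D = np.diag(rng.choice([-1.0, 1.0], size=k))   # random signs
    P = np.eye(k)[:, rng.permutation(k)]           # random permutation matrix
    return (x_star + (delta / 2) * ((2 * B - J) @ D + J)) @ P

def elementary_effects(model, k, r=4, p=4):
    """Sensitivity measures mu, mu* and sigma^2 of Eqs. (7.25)-(7.28)."""
    EE = np.zeros((r, k))
    for j in range(r):
        T = morris_trajectory(k, p)
        y = np.array([model(x) for x in T])        # r*(k+1) model runs in total
        for s in range(k):
            i = int(np.argmax(T[s + 1] != T[s]))   # parameter varied in step s
            EE[j, i] = (y[s + 1] - y[s]) / (T[s + 1, i] - T[s, i])
    return EE.mean(axis=0), np.abs(EE).mean(axis=0), EE.var(axis=0, ddof=1)
```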

7.4.1.4 Surrogate Models

To obtain reliable variance-based sensitivities, a great number \((n>10^{4})\) of finite element simulations is necessary [68]. In risk analysis, the number of required simulations equals the inverse probability of occurrence of the limit state of interest [6]. Evaluating such a number of simulations is time-demanding. Thus, surrogate models approximating the model response of interest (e.g. the maximum strains induced by settlements) are beneficial. Here, artificial neural networks (ANNs) were used. The basis for the ANN is the input data, which is generated by means of Latin hypercube sampling.
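As an illustration of the sampling step, the following sketch draws a Latin hypercube design with SciPy’s qmc module. The parameter names, ranges and distribution parameters are purely illustrative placeholders and do not reproduce the values of Table 7.2:

```python
import numpy as np
from scipy.stats import qmc, lognorm

sampler = qmc.LatinHypercube(d=6, seed=0)     # 6 relevant input parameters
u = sampler.random(n=250)                     # 250 points in the unit hypercube

# map the uniform samples to physical ranges or distributions per parameter,
# e.g. a uniform settlement range and a log-normal compressive strength:
settlement = qmc.scale(u[:, :1], l_bounds=[0.0], u_bounds=[30.0])  # mm
f_c = lognorm(s=0.15, scale=38.0).ppf(u[:, 1])                     # MPa
```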

7.4.1.5 Brief Overview on Feed-Forward Neural Networks

In the present problem of structural damage, only the final condition with maximum cracking is simulated; time-dependent crack growth as a result of different settlement states is not investigated. Therefore, feed-forward neural networks (FFNNs) provide sufficiently accurate approximations.

Fig. 7.23
figure 23

Structure of the employed FFNN for the damage prediction [13]

Figure 7.23 shows the structure of the employed three-layer FFNN with \(k=4\) input parameters \(x_{k},\,k=1\dots 4\). The hidden layer has 20 neurons and the output layer 1 neuron. Each neuron also has a bias neuron \(b\) to achieve better approximations [38]. The output is the maximum strain \(\varepsilon_{\text{max}}\) that occurs in the building. In principle, a FFNN can be used to map several response parameters, such as displacements or strains, according to the applied loads. The procedure to create the network is basically the same; only the number of neurons in the output layer increases.

The input for the \(h\)-th neuron \(\nu_{h}\) in the hidden layer is determined by adding the bias \(b_{h}\) to the sum of the products of the input values \(x_{k}\) with the corresponding weights \(w_{hk}\) (Eq. 7.32). Therefore,

$$\begin{aligned}\nu_{h}=\varphi^{1}\left(\sum_{k=1}^{4}x_{k}\cdot w_{hk}+b_{h}\right),\quad\text{with}\quad\begin{array}[]{@{}l@{}}k=1,2,\dots,4\\ h=1,2,\dots,20.\end{array}\end{aligned}$$
(7.32)

In Eq. 7.32, the hyperbolic tangent activation function \(\varphi^{1}\) is used. In contrast to the signal processing of the recurrent neural network (see Eq. 7.8), the FFNN has no time-delayed context signals. The FFNN output \(\varepsilon_{\text{max}}\),

$$\displaystyle\varepsilon_{\text{max}}=\varphi^{2}\left(\sum_{h=1}^{20}\nu_{h}\cdot w_{\varepsilon h}+b_{\varepsilon}\right),$$
(7.33)

is computed by using the linear activation function.

The unknown variables, i.e. the weights \(w_{hk}\), \(w_{\varepsilon h}\) and the bias values \(b_{h}\), \(b_{\varepsilon}\), are determined during the training on the basis of the available data, using optimization algorithms that minimize the quadratic error with respect to the exact solution. Basically, a large number of algorithms can be used to determine the weights; a detailed overview is given in [21]. Analogously to the RNN, the Levenberg-Marquardt algorithm is used here.
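For completeness, the forward pass of Eqs. 7.32 and 7.33 reads as follows in a minimal sketch, with the trained weights assumed to be given:

```python
import numpy as np

def ffnn_strain(x, W1, b1, w2, b2):
    """Three-layer FFNN of Eqs. (7.32)-(7.33).
    x: (4,) inputs, W1: (20, 4), b1: (20,), w2: (20,), b2: scalar."""
    nu = np.tanh(W1 @ x + b1)       # Eq. (7.32), hyperbolic tangent activation
    return float(w2 @ nu + b2)      # Eq. (7.33), linear activation
```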

7.4.2 Application

Damage assessment of buildings is usually based on the category of damage, see e.g. [31, 54, 80]. In this section, the tunneling-induced building damage risk is quantified by comparing the maximum of the calculated structural strains with limiting strains, which, in the case of brittle materials such as concrete or masonry, lead to either no damage, micro-cracking or macro-cracking [51].

Table 7.3 shows a common assignment of limiting tensile strains to the corresponding categories of damage. While strains in category 0 lead to no damage, strains in categories 1–2 usually only cause aesthetic or optical damage, strains in category 3 impair a structure’s serviceability, and strains in categories 4–5 even affect the structure’s ultimate load-bearing capacity [7].

Tab. 7.3 Limiting tensile strains of masonry and related categories of damage

7.4.2.1 Sensitivity Analysis

In a first step, irrelevant parameters with no impact on the variation of the result are identified by the method of elementary effects. For this, the elementary effects of grouped parameters are contrasted with those of the \(k=21\) single parameters. The two reinforcement parameters \(A_{s}\) and the remaining 10 geometry parameters \(G\) listed in Table 7.2 are grouped, which leads to a reduced number of input parameters, \(g=11\). The computation utilizes \(r=4\) trajectories; thus, in the grouped case only \(r\left(g+1\right)=48\) instead of \(r\left(k+1\right)=88\) finite element simulations are necessary. Whether a parameter is relevant or not is decided by comparison to a threshold value of 5%. The threshold follows from Eq. 7.34 regarding the absolute mean of the elementary effects \(\mu^{*}\). In case of non-uniformly distributed parameters, the limits are chosen as the 0.0135% quantiles to ensure meaningful sensitivities that are not prone to false prognoses due to overly narrow limits,

$$\displaystyle E_{i}=\dfrac{\mu^{*}_{i}}{\sum_{i=1}^{n}\mu^{*}_{i}}\quad\text{for}\quad i=1,\dots,n\,.$$
(7.34)

The grouped and ungrouped results are qualitatively equivalent (Fig. 7.24). For instance, the impact of the geometry input turns out to be negligible in both cases. Thus, in the following, only 6 relevant material parameters remain to set up the surrogate model. Three of them determine the material properties of the facade (the compressive strengths of concrete \((f_{c})\) and brickwork \((f_{m})\) as well as the tensile strength of masonry \(\left(f_{mt}\right)\)), while the rest reflect the settlements \(s\) of the soil.

Fig. 7.24
figure 24

Elementary effects of the nonlinear FE-simulation for 11 (upper) and 21 input parameters (lower) [56]

7.4.2.2 Feed-Forward Neural Network

The total data basis for the feed-forward neural networks (FFNNs) consists of 500 FE calculations. The data is divided into two sets of 250 values each, obtained using Latin hypercube sampling. The same values of the material parameters are used for both sets. The settlement values are derived from the interval analysis, taking into account interval-valued soil parameters (Sect. 7.3.3.1): one set is determined via the lower and the other set via the upper limit values of the settlements. Consequently, the difference between these two sets lies in the settlement values. The 10 settlement values from the soil model are not used directly for the generation of the FFNN inputs; only the difference between the settlement at the building’s edge and in the center is used. As a result, the size of the FFNN input (\(\mathbf{SB}\)) is reduced to 4 inputs instead of 10. The reason for this is that the parameter domain can be reduced and less data is needed to generate a suitable FFNN. Based on the 250 data points of the log-normal distributions, about 96% of all possible values of the material parameters are considered in the FFNN.

To approximate the maximum strains in the building, a neural network architecture with one hidden layer and 20 neurons was used. Tests with this architecture showed the smallest deviation between the training and test data and achieved a coefficient of determination of \(R^{2}\approx 0.90\) (Fig. 7.25). The data was randomly split into three subsets for training, validation and testing with ratios of 70%, 15% and 15%, respectively.

Fig. 7.25
figure 25

Regression plots of the FFNN; training and testing results for the upper bound settlement values (left) and training and testing results for the lower bound settlements values (right) [13]

Contrary to expectations, more results fall into the higher categories of damage when using the lower bound of the settlements than when using the upper bound (cf. Fig. 7.25). This can be explained by the fact that it is not the absolute settlement value that is decisive, but rather the settlement difference between the building’s edge and the center.

7.5 Real-Time TBM Steering Support Minimising Building Damage Risks

During the tunneling process, surface settlements can lead to damage in adjacent buildings through subsidence and tilting of these structures and should therefore be controlled during the tunneling process. In this context, it is essential to predict the surface deformations and then to evaluate the associated risk of damage to existing buildings in the construction phase. To answer the question whether the damage of buildings can be assessed in real-time during the tunnel construction process and whether TBM process parameters can be changed to reduce the damage risk, the hybrid RNN-GPOD surrogate model is coupled with a feed-forward neural network (FFNN), which is capable of quickly delivering the strain state in adjacent buildings. More details about the employed FFNN are given in Sect. 7.4.2.2. The model-based TBM steering support strategy is depicted in Fig. 7.26.

Fig. 7.26
figure 26

Concept of TBM steering support using the predicted level of building damage as objective for adjusting the operational parameters

7.5.1 TBM Steering Support with Deterministic Data

The TBM steering support scheme, using the expected level of damage of buildings located in the vicinity of the tunnel axis as a target for the adjustment of the operational parameters of the machine, is illustrated by means of a synthetic example, characterized by a tunnel section constructed by a TBM in an urban area. The tunnel is assumed to directly underpass a multi-storey building (highlighted in red in Fig. 7.27a). For the analysis of the expected damage caused by tunneling-induced settlements, a structural model of a facade is established, see Fig. 7.27b. For the classification of the building in terms of the category of damage (cod), the maximum expected strain at the facade is used as the damage indicator. Since damage can emerge in the building due to tunneling-induced settlements during construction, a surface settlement prediction and an assessment of the damage category for the building are performed in real-time to support the TBM driver in selecting appropriate steering parameters that minimize the influence of the tunneling process on the building. The two-dimensional damage analysis is performed based on the facade model together with the boundary conditions obtained from the settlement prediction at the building baseline.

Fig. 7.27
figure 27

FE simulation model of a tunnel section underpassing a building. a Tunnel model geometry, b Facade model of the investigated building

It is assumed that the TBM has proceeded to the \(25^{\text{th}}\) step of the tunneling process and is directly below the investigated building as shown in Fig. 7.28a. A constant level of 120 kPa at the tunnel axis is adopted for the face support pressure from time step 1 to time step 25. Three possible pressure scenarios are investigated to drive the TBM underneath the building. In the first scenario (scenario (1)), the face support pressure is kept unchanged for the next 12 meters, i.e., 6 excavation steps, as illustrated by the black line in Fig. 7.28b. The corresponding settlement trough along the baseline of the building, denoted by the black line in Fig. 7.28c, and the corresponding building damage category are quickly predicted and classified by the RNN-GPOD surrogate model and the FFNN surrogate model, respectively. Five building damage categories, representing the damage degrees negligible, very small, slight, moderate and severe damage, are depicted in different colors in Fig. 7.28. The criteria for the damage classification are given in detail in Table 7.3. For scenario (1) with an unchanged face support pressure, the building is predicted to suffer severe damage and is thus classified into the fifth category. With this information provided before underpassing the building, the TBM driver has enough time to adjust the face pressure in order to avoid the critical situation. Alternative scenarios for applying the face pressure when advancing underneath the building can be investigated in real-time using the proposed approach.

Fig. 7.28
figure 28

TBM steering support example with deterministic data. a Face support pressure history, b three investigated pressure scenarios, c resulting settlement trough and building damage category for the scenarios (1) and (2), d resulting settlement trough and building damage category for scenario (3)

The blue and green colors in Fig. 7.28 represent the two alternative pressure scenarios (2) and (3). In case the face pressure is increased immediately at the next excavation step (time step 26) to a value of 180 kPa (scenario (3), green line), the ground settlements along the facade are predicted to be reduced significantly. The category of damage of the building can thus be reduced from ‘‘severe damage’’ to ‘‘moderate damage,’’ see Fig. 7.28d. According to scenario (2), where the face support pressure is not increased immediately, but only when the TBM is already below the building at time step 29, the surface settlement and the resulting maximum strain in the building are reduced, but the building is still estimated to suffer ‘‘severe damage,’’ as shown in Fig. 7.28c. Table 7.4 summarizes several possible scenarios, including different steps at which the pressure is increased and the applied values of the face pressure. It can be seen that it is necessary to increase the face pressure to 180 kPa at step 26 at the latest, or to 150 kPa already at step 23, in order to reach damage category 3. In addition, even if the face support pressure is increased earlier, e.g. already in step 20, the damage category remains unchanged. This example demonstrates the potential of the proposed concept to use the building damage risk categorisation as a steering target for selecting appropriate TBM operational parameters in real-time during the construction process in mechanized tunneling.
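The scenario comparison described above can be summarized as a simple ranking loop. In the following sketch, rnn_gpod, ffnn and classify stand for the pre-trained surrogate models and the classification according to Table 7.3; the scenario definitions are illustrative assumptions:

```python
def rank_scenarios(scenarios, rnn_gpod, ffnn, classify):
    """Evaluate candidate face pressure scenarios in real-time:
    settlement trough -> maximum facade strain -> category of damage (cod)."""
    results = []
    for name, pressures in scenarios.items():
        trough = rnn_gpod(pressures)     # settlements along the building baseline
        eps_max = ffnn(trough)           # maximum strain in the facade
        results.append((classify(eps_max), name))
    return sorted(results)               # lowest damage category first

# e.g. rank_scenarios({"(1)": [120] * 6, "(3)": [180] * 6}, ...) with kPa values
```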

Table 7.4 Different scenarios of increasing the face pressure and the resulting damage category of the investigated building
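The scenario screening summarized in Table 7.4 can be sketched as a simple loop over candidate (step, pressure) pairs. The function handle below is a toy stand-in for the chained RNN-GPOD and FFNN surrogates that merely mimics the trend reported in the table; in the real workflow each pair would be evaluated by the surrogate models themselves.

% Toy stand-in for the surrogate chain; reproduces only the trend of Table 7.4.
predict_cod = @(step, p) 5 - 2*((p >= 180 & step <= 26) | (p >= 150 & step <= 23));
steps     = [20 23 26 29];     % candidate steps at which the pressure is raised
pressures = [150 180];         % candidate face pressures [kPa]
for s = steps
    for p = pressures
        fprintf('raise to %3d kPa at step %2d -> damage category %d\n', ...
                p, s, predict_cod(s, p));
    end
end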

7.5.2 TBM Steering Support with Polymorphic Uncertainty

Polymorphic uncertainties in the soil parameters and building characteristics, described by different types of uncertain data, are also taken into account within the prediction. In this context, the soil parameters are quantified by intervals, while the building parameters are represented as random variables. The interval settlement field propagated from the interval soil parameters can be predicted using the hybrid surrogate model with interval data. Stochastic analyses for the building damage assessment are then carried out using the interval settlements as boundary conditions. The proposed prediction scheme is applied to a synthetic example, which adopts the building damage risk assessment as a criterion for steering the TBM.

A tunnel section, which is assumed to be constructed by a TBM in an urban area, is investigated. Figure 7.29a depicts a symmetric view of the simulated tunnel section. Among a number of existing buildings on the ground surface, the tunnel is excavated directly below a multi-storey building, which is considered a critical infrastructure. The selected building is modeled in detail as illustrated in Fig. 7.29b. In this example, the uncertain elasticity modulus of the second soil layer (the sandy clay), through which the tunnel is excavated, is given as the interval \(E_{2}=[30;55]\) MPa. The prediction accuracy of all surrogate models involved in the prediction of the interval settlements is in general very high, with values of the coefficient of determination \(R^{2}\) larger than 0.9. For more detailed explanations and descriptions of the surrogate model generation, validation and comparison, readers are referred to [13].
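Under the common assumption that the settlement response varies monotonically with the soil stiffness, the interval settlement bounds can be obtained from two surrogate evaluations at the interval endpoints. The following MATLAB sketch illustrates this with a deliberately simple toy surrogate; in the actual scheme, the hybrid surrogate of [13] takes the place of the function handle.

settlement_surrogate = @(E2) 60 ./ E2;          % toy surrogate: s_max [mm] from E2 [MPa]
E2_int   = [30 55];                             % interval elasticity modulus [MPa]
s_bounds = sort(settlement_surrogate(E2_int));  % [lower, upper] bound of s_max [mm]
fprintf('settlement bounds: [%.2f, %.2f] mm\n', s_bounds(1), s_bounds(2));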

Fig. 7.29
figure 29

Simulation model of a tunnel section with polymorphic uncertain soil parameters and building properties. a Model geometry, b Detailed 3D model of the investigated building

With the aim to support the selection of appropriate steering parameters in real-time during the TBM advance, a practice-oriented investigation is carried out using the established surrogate models to demonstrate the applicability of the proposed scheme. The current TBM position is assumed to be directly in front of the investigated building, corresponding to time step 15. The history of the applied grouting pressure at the tunnel axis, recorded from time step 1 to time step 15, has a constant value of 130 kPa, as shown in Fig. 7.30a. For the investigation of different advancement scenarios underneath the multi-storey building, five possible pressure scenarios from time step 16 onwards are of particular interest, see Fig. 7.30b. The first scenario, i.e., scenario (1), considers the situation with no modification of the grouting pressure over the next 24 excavation steps (i.e., 48 m). In scenarios (2) and (3), the grouting pressure is increased to 180 and 230 kPa, respectively, starting from step 16. Scenarios (4) and (5) increase the pressure to 180 kPa from step 21 and from step 26, respectively. In each steering scenario, the lower and the upper bound of the settlement trough affecting the building are obtained from the settlement surrogate model. The maximum strain in the building is then computed for each bound, considering the building parameters as random variables. More specifically, following Table 7.3, the probabilities of the maximum strain can be transformed into probabilities of the categories of damage (cod) from 0 to 5, as visualized in Fig. 7.31. The interval bounds of the imprecise classification probabilities (relative frequencies and accumulated probabilities) can also be seen in Fig. 7.31. Considering the five steering scenarios, the evaluation of changing the grouting pressure when excavating the tunnel in the vicinity of the building can follow two possible strategies: either different magnitudes of the applied pressure are compared (scenarios (1), (2) and (3)), or different steps at which the same pressure magnitude is first applied (scenarios (1), (2), (4) and (5)).
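The transformation from sampled maximum strains to the relative frequencies and accumulated probabilities shown in Fig. 7.31 can be sketched as follows. The log-normal building stiffness, the toy strain model and the category boundaries are illustrative assumptions; in the actual scheme the strains come from the building model evaluated at one settlement bound, and the classification follows Table 7.3.

rng(1);                                          % reproducible sampling
n       = 1e4;                                   % Monte-Carlo sample size
E_bldg  = exp(log(10) + 0.2*randn(n, 1));        % hypothetical log-normal stiffness [GPa]
eps_max = 0.02 ./ E_bldg;                        % toy strain model for one settlement bound [-]
bounds  = [0.0005 0.00075 0.0015 0.003 0.006];   % assumed cod boundaries (cod 0..5)
cod     = sum(eps_max > bounds, 2);              % damage category per sample
freq    = histcounts(cod, -0.5:1:5.5) / n;       % relative frequency of each cod
P_acc   = cumsum(freq);                          % accumulated probability P(cod <= k)
fprintf('P(cod <= 3) = %.3f\n', P_acc(4));

Repeating the sampling for the lower and the upper settlement bound yields the interval-valued relative frequencies and accumulated probabilities discussed below.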

Fig. 7.30
figure 30

TBM steering support example with polymorphic uncertainty. a Grouting pressure history, b Five investigated pressure scenarios

Fig. 7.31
figure 31

Individual and accumulated building damage probabilities of the five investigated steering scenarios

Regarding the first evaluation, if the pressure is kept unchanged according to scenario (1), the building would mainly belong to cod 3 and cod 4, with respective relative frequencies of 0.4027 and 0.4736 for the damage induced by the lower settlement bound. For the upper settlement bound, the corresponding relative frequencies in this scenario are 0.4181 and 0.5098. If the grouting pressure is increased to 180 kPa as planned in scenario (2), the relative frequencies of cod 3 increase slightly from 0.4027 and 0.4181 (lower and upper settlement bounds) to 0.4969 and 0.4287, and the categories with the highest relative frequencies remain cod 3 and cod 4. In this scenario, i.e. scenario (2), a clear reduction of the relative frequency of cod 4 can be observed, from 0.4736 to 0.2486 for the lower settlement bound and from 0.5098 to 0.3562 for the upper settlement bound. When the pressure is further increased to a value of 230 kPa in scenario (3), very slight damage (cod 1) can be achieved for the lower bound of the settlement trough. The relative frequency of cod 1 in this scenario is 0.4593, which is also the highest frequency, while the frequencies of cod 0, cod 2, cod 3, cod 4 and cod 5 are 0.1234, 0.2383, 0.1257, 0.0441 and 0.0092, respectively. Although the lower settlement bound in scenario (3) leads to a redistribution of the relative cod frequencies compared to scenario (2), the relative frequencies of the damage categories resulting from the upper settlement bound change only slightly compared to the corresponding distribution in scenario (2).

In the second evaluation, the effect of the time step at which the pressure is applied is investigated. Comparing scenario (1) and scenario (5), in which the pressure remains constant and the pressure adjustment is performed too late, respectively, shows that the probabilities are almost identical. If the pressure is increased from time step 16, i.e. before entering the building area, following scenario (2), or from time step 21, i.e. later, when the TBM is already under the building, as in scenario (4), the differences between the resulting relative frequencies with respect to the lower settlement bound are negligible. However, the different time steps of applying the pressure lead to a redistribution of the relative frequencies associated with the upper settlement bound. More specifically, if scenario (4) is applied, the probability of building damage in cod 2 (slight damage) is 0.0221 instead of 0.1004 as in scenario (2). Similarly, an earlier adjustment of the steering parameters (scenario (2)) reduces the relative frequency of the severe damage category (cod 4) to 0.3562, compared to 0.4813 when the steering parameters are adjusted late as planned in steering scenario (4).

The accumulated imprecise probabilities of the building damage categories resulting from the polymorphic uncertain input information are presented in the stacked plot in Fig. 7.31. For instance, if the TBM is driven with the target of limiting the building damage to cod 3 (moderate damage), the probability of satisfying this condition lies in the narrow interval [0.467; 0.512] if steering scenario (1) is applied. Higher magnitudes and wider probability intervals, [0.613; 0.725] and [0.488; 0.727], are obtained when the pressure is adjusted earlier and with higher values, as in steering scenarios (2) and (4), respectively. Adopting a high pressure as drafted in scenario (3) even leads to a probability interval of [0.713; 0.947] for the moderate damage group (cod 3). From the presented building damage analyses it can be concluded that both the magnitude of the applied grouting pressure (the first evaluation) and the time step at which the grouting pressure is adjusted (the second evaluation) are essential for controlling the damage risk of existing buildings in mechanized tunneling.

7.6 Application Development for TBM Steering Support

With the aim to support the steering of TBMs during mechanized tunneling, a real-time simulation application is developed based on the algorithms presented in Sects. 7.3 and 7.4. The application Smart (Simulation-and-Monitoring-based Assistant for Real-time steering in mechanized Tunneling) is capable of providing a very quick prediction of the system response for user-defined inputs. The goal of Smart is to predict, in real-time, the system response resulting from the TBM-soil interaction, i.e. the surface settlements and the risk of damage to existing buildings (and/or the tunnel lining forces, etc.), as a reaction to changes of operational parameters such as the face pressure or the grouting pressure.

The main flowchart of the application is based on the online algorithm described in Sect. 7.3. Depending on the type of input data, the system responses are predicted for deterministic, interval or fuzzy data. Additionally, reliability analyses can be performed in real-time to support the selection of the TBM operational parameters. Thus, the application Smart provides an assistance tool for decision-making with regard to adjusting the support and grouting pressures during tunnel construction.
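For fuzzy inputs, the prediction can be reduced to a sequence of interval analyses via alpha-cuts. The sketch below computes the cuts of a triangular fuzzy grouting pressure; the pressure values are hypothetical, and each resulting interval would then be propagated through the interval branch of the algorithm.

p_fuzzy = [120 130 145];    % hypothetical triangular fuzzy grouting pressure [kPa]
for a = 0:0.25:1            % alpha levels from the support (a = 0) to the core (a = 1)
    lo = p_fuzzy(1) + a*(p_fuzzy(2) - p_fuzzy(1));   % left bound of the alpha-cut
    hi = p_fuzzy(3) - a*(p_fuzzy(3) - p_fuzzy(2));   % right bound of the alpha-cut
    fprintf('alpha = %.2f -> pressure interval [%.1f, %.1f] kPa\n', a, lo, hi);
end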

Fig. 7.32
figure 32

Screenshot of the software application Smart for real-time building damage assessment during tunnel construction

Figure 7.32 presents a screenshot of the application Smart with the latest features related to real-time building damage assessment. All historic data of the tunnel drive are stored. Given the current position of the TBM, the user can move the sliders on the right-hand side to change the values of the operational parameters for the forthcoming advancement step; the software then computes and visualizes the corresponding surface settlement field, considering the soil-structure interactions, and the expected building damage category within just one second.

The application is implemented in MATLAB and is executed on a standard computer. The MATLAB Compiler is used to create a standalone version of Smart, which allows the application to run on machines without a complete MATLAB installation. The standalone version, together with the freely available MATLAB Component Runtime library, can easily be distributed to end users for demonstration purposes. Before running Smart, the user needs to install the provided Runtime library that is compatible with the operating system of the target machine. The Smart application has been successfully installed and tested on Windows, Linux and Mac OS. Since the installation and execution of Smart do not require any expensive hardware, the application can run on almost every standard laptop or even on a tablet.
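As a hedged illustration of this deployment step, assuming the application's main function is stored in a file named smart_app.m (a hypothetical placeholder), the standalone executable could be created from the MATLAB prompt as follows:

% Compile the (hypothetical) main function smart_app.m into a standalone
% executable named Smart; requires a MATLAB Compiler license.
mcc -m smart_app.m -o Smart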