1 Introduction

The frequency and scale of natural hazards have been increasing since the 2000s, bringing huge losses and casualties worldwide (UNDRR 2020; Goff 2021). Natural hazards not only pose great challenges to the disaster prevention and mitigation capacity of infrastructure but also place major demands on rescue and recovery responses (Novelo-Casanova et al. 2021). Many countries have set up emergency management agencies to cope with the impact of natural hazards, such as the Federal Emergency Management Agency (FEMA) of the United States and the Ministry of Emergency Management of China, alongside the United Nations Office for Disaster Risk Reduction (UNDRR). China's 14th Five-Year Plan (General Office of the State Council 2021), Japan's Climate Change and Disaster Prevention Strategy in the Era of Climate Crisis (Cabinet Office, Government of Japan 2020), and the United States National Disaster Preparedness Goals (FEMA 2015) all propose new requirements for building a systematic and intelligent system of disaster prevention, mitigation, relief, post-disaster recovery, and risk awareness training (Lu and Li 2020). However, plenty of obstacles remain for disaster management, such as cross-technology integration, resource sharing, and functional integration (Taubenböck et al. 2013). Thus, the construction of a comprehensive platform has become a key problem in the field of disaster prevention and mitigation (Cheng et al. 2017; Abdel-Basset et al. 2020). The digital twin (DT) is one of the most promising technologies for better managing complex environments, and it facilitates the required connectivity through many self-operative functionalities (Fuller et al. 2020). On this basis, this paper discusses DT-driven intelligent disaster prevention and mitigation for infrastructure (IDPMI), reviews the state of the art of each field, and discusses future directions.

Infrastructure, including buildings, bridges, roads, railways, and ancillary pipe network facilities, is an essential element of daily production and life. Once built, infrastructure is difficult to improve effectively because of its high cost, large stock, long service life, and individualized nature. Therefore, multi-stage factors, such as design, disaster prevention, disaster mitigation, disaster relief, and post-disaster recovery, need to be considered comprehensively to effectively reduce the damage and loss that natural hazards cause to infrastructure. The DT has been defined in many ways but is commonly described as the effortless integration of data between a physical and a virtual object in either direction (Grieves 2014). DT grew out of simulation technology; the interoperability, timeliness, and predictability between the physical and virtual objects have deepened in recent years (Ladj et al. 2021), and simulation models, digital shadows, and DT have gradually been distinguished from one another (Sepasgozar 2021). DT has excellent management ability and offers the clear advantages of function systematization, data integration, and process automation (Zhang et al. 2019).

With the development of theoretical models (Park and Ang 1985; Richard 2007; Lai et al. 2020), simulation technology (Lu et al. 2020b), remote sensing (Eguchi et al. 2008), and sensor networks (Cayirci and Coplu 2007), infrastructure-related fields have been producing large amounts of daily data, which can be roughly divided into real data and calculated data and which can be used to support infrastructure disaster management (Sun et al. 2020). Common approaches to data analysis are gradually shifting toward intelligent technologies (Yang et al. 2002; Jiang 2009), with many applications in infrastructure design, disaster prevention, disaster reduction, and recovery (Yang et al. 2017; Bao and Li 2019; Mariappan et al. 2015). Learning systems, extended reality (XR), the Internet of Things (IoT), and other technologies or software have also been introduced to support the virtual recurrence of catastrophes, achieving multidimensional visualization of performance and behavior through reverse modeling (Ma et al. 2015; Lin et al. 2020; Kawai et al. 2016). These applications provide a novel perspective for the development of disaster prevention and mitigation for infrastructure (DPMI). However, infrastructure-related industries have been among the slowest to adopt new technologies (Manyika et al. 2017).

Natural hazards are dynamic and complex events that threaten the economy, the environment, and human life (Wettenhall 2009). Owing to the complex interdependence of society, infrastructure, and the natural environment, disaster processes and outcomes are extremely difficult to predict (Sun et al. 2020). Meanwhile, disaster management involves not only predicting disaster processes and outcomes but also avoiding adverse processes and mitigating their consequences, which is undoubtedly a very challenging task, especially during emergencies (Ostadtaghizadeh et al. 2015). After many years of development, DT has been proven to successfully support management and decision-making processes in many complex fields, such as aerospace (Ye et al. 2020), manufacturing (Lu et al. 2020c), industry (Tao et al. 2019c), and the military (Li et al. 2020). In this sense, DT should also be feasible for the management of IDPMI.

This paper proposes a scientific concept of the DT-driven systematic construction of IDPMI, which is intended to clarify the potential of DT in disaster management. Moreover, the challenges that may hinder the full utilization of DT technology are emphasized, and some possible ideas for the future are given. The rest of the paper is organized as follows. Section 2 presents the review methodology. Section 3 reviews the history of DT and discusses its scientific scope over the life cycle of infrastructure. Section 4 reviews the maturity of, and demands on, the key technologies applied in the IDPMI process from the perspective of DT requirements. Section 5 summarizes the application of DT in the related stages of infrastructure. Section 6 combines the development needs of DT and IDPMI and formulates a development agenda for the future, and Sect. 7 gives the conclusions and outlook.

2 Review methodology

To present the status of academic publications on DT and IDPMI, this paper adopts the following review methodology, which consists of three main parts. Figure 1 summarizes the methodology in terms of the search strings, criteria, and paper selection procedure.

Fig. 1 Methodology on screening papers

The first step was the selection of databases and search strings. This study selected a host of mainstream databases, including Web of Science, Scopus, ScienceDirect, ProQuest, IEEE Xplore, Google Scholar, and CNKI. To ensure the integrity of the retrieved data, the search strings must fully cover the development of each field as well as their points of combination. Another core factor is the time frame. Although intelligent technology has been developing for many years, its vigorous growth mainly stems from the deep learning model proposed by Hinton and Salakhutdinov (2006). The infrastructure industry has always been among the slowest to adopt new technologies. Moreover, the initial paper on DT was published in 2011. Therefore, the time frame was set as 2010–2021 in this study.

The advanced search function of each website (the module name differs across websites) was adopted in this study; topic strings and refined strings were set up, and the main search scope included the title and abstract. The topic string for DT was set as digital twin. To fit the complex process of IDPMI and reflect the management ability of DT during an emergency, the refined strings were set as design, disaster management, construction, and maintenance. The topic strings for IDPMI were set as intelligence technology and disaster, and the refined strings were set as disaster management, prevention, mitigation, and design. As a result, more than 6000 papers were initially found by the topic retrieval, and more than 500 papers were selected through the refined retrieval.

Subsequently, all selected papers were filtered manually. The main filtering scope, in order, included the abstract, conclusion, and introduction, to ensure that the relevant content was not confined to the title and abstract but appeared in the body of the paper. The filtering process also involved removing irrelevant articles that matched only fragments of the search strings, such as digital, twin, and intelligence. Because DT is mainly suited to complex event management, articles related only to disaster reporting and mechanism analysis under the separate topic string "disaster" were also excluded. To identify repeated search results from different databases, the filtering process also involved sorting and deduplication. Because the topic retrieval and the refined retrieval produced many duplicates, exact paper counts are not reported for these steps. After the filtering, a total of 234 key articles were retrieved. The authors read through these papers and summarized their common ground and unique propositions. To provide an integrated introduction to intelligent technology, conventional design approaches, and other related content, some additional related papers are also included.

3 Development of DT

3.1 History of DT

The initial concept of DT can be dated back to the Apollo program of NASA. In this program, NASA built two identical spacecraft; the one left on Earth, called the twin, was used to mirror the status of the spacecraft performing the lunar mission (Rosen et al. 2015). The modern concept of DT was first proposed by Grieves in 2003 under the name "mirrored spaces model." Although the concept was not clearly defined at that time, necessary elements such as physical objects, virtual objects, and their connections were illustrated (Grieves 2005). With the development of data transmission approaches and computer modeling technologies, NASA introduced the concept of DT to diagnose and predict the function of aircraft systems (Piascik et al. 2010). In 2011, the US Air Force Research Laboratory applied DT to life cycle management for aircraft and proposed a conceptual model to predict structural life (Tuegel et al. 2011). In 2012, the concept was redefined by NASA: a DT is an integrated multi-physics, multi-scale, probabilistic simulation of an as-built vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its corresponding flying twin (Glaessgen and Stargel 2012). According to the level of data integration between the physical and digital counterparts, three subcategories are considered: the digital model, the digital shadow, and the digital twin (Dahmen and Rossmann 2018). In addition, characteristics such as bidirectional data exchange and real-time self-management distinguish a DT from other digital systems (Ladj et al. 2021; Sepasgozar 2021), which makes the concept of DT more concrete. Gartner, one of the leading consultancy firms in the world, listed DT among the top 10 strategic technology trends for three consecutive years (2017, 2018, and 2019) (Panetta 2016, 2017, 2018). This paper summarizes the Google Trends search interest in DT over the past decade, as shown in Fig. 2. The development of DT can be roughly divided into two phases, the incubation stage (2011–2017) and the growth stage (2017–present), and the representative event nodes are also marked in Fig. 2. In view of the current momentum, the authors believe that attention to and application of DT will continue to grow over the next 3–5 years.

Fig. 2 The search trend of DT

Recently, DT has gradually been applied in many fields. Siemens built a DT-based production system to integrate the manufacturing process and established a virtual enterprise that can effectively promote digital transformation (Siemens 2015). Tao et al. (2017b; 2019a) proposed the concept of the DT workshop and expounded its system composition in terms of operation mechanisms, characteristics, and key technologies. Tao and Zhang (2017a) first listed the service dimension as a key component of the DT frame and proposed the 5D model, which consists of data, physical entities, virtual models, services, and connections. The history of the DT methodology, the transformation from 3D to 5D, is shown in Fig. 3.

Fig. 3 Transformation from 3D (Grieves 2014) to 5D (Tao et al. 2018c) for DT

According to the literature search, more than 2000 articles have been published on DT, making a complete review impractical. Instead, this paper lists a number of review papers on DT, as shown in Table 1.

Table 1 Related surveys on DT in recent years

DT, as a digital and intelligent application framework, has shown promise in many industries. To standardize the construction process of DT, the first requirement is a theoretical explanation of DT, covering the selection and integration of data, the accuracy of the virtual model, and the service/control/generation algorithms. There is also a lack of paradigm planning that satisfies the life cycle of each industry and its product requirements. Customized deployment is a further problem that must be considered.

3.2 Possible scope of DT for infrastructure

According to the origin and development of DT, its application scope has mainly focused on monitoring the target object. The rapid development and popularization of big data, the Internet of Things (IoT), 5G, cloud computing, intelligent algorithms, etc., provide a strong impetus for the architecture and timeliness of DT. With the proposal of advanced manufacturing strategies such as Made in China 2025, the U.S. Industrial Internet strategy, and Germany's Industry 4.0, the application scope of DT is gradually expanding. On the basis of current DT applications in various fields, the authors redefine its possible scope for infrastructure as five stages: design and optimization (D and O), manufacturing and installation (M and I), usage and maintenance (U and M), emergency management (EM), and recycling and dismantling (R and D), as shown in Fig. 4.

Fig. 4 The possible scope of DT for infrastructure

The D and O stage mainly covers the creation of an object from scratch. Each industry has established design codes and processes; in infrastructure-related fields, theoretical codes, simulations, or a small number of tests are typically used to complete this work. A DT design framework takes the theoretical specification as the initial basis and the simulation analyses and tests as data sources, and relies on the virtual model to realize 3D generation, optimization, and evaluation at the function and process levels of the infrastructure. Compared with the normal design process, DT can give designers more accurate feedback, enabling iterative virtual testing and optimization of operations and functions (Wang et al. 2020).

The M and I stage is the construction and installation process of the components and accessories of an infrastructure. Based on the virtual model, this process can be managed, diagnosed, and corrected scientifically. The virtual and real installation processes are connected seamlessly, which not only realizes the systematic storage of data but also ensures the positional coding and scientific measurement of components. This also facilitates subsequent tracking and diagnosis (Zhou 2019).

The U and M stage mainly covers operation, during which the usage state of the infrastructure can be tracked dynamically. Real-time data collection can be realized by sensors or an IoT network. Multi-source data are gathered through a scientific process under the DT framework, and virtual monitoring of the physical part can be realized. Based on an anomaly recognition algorithm and preset thresholds, operation instructions are issued for the infrastructure. If data-driven DT models are combined with intelligent systems, they can even achieve timely predictive maintenance, diagnosis, and decision-making (Luo et al. 2020).
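As a concrete illustration of the anomaly recognition step described above, the following minimal Python sketch checks incoming readings against preset thresholds and issues an operation instruction. The sensor names, monitored quantities, and threshold values are illustrative assumptions, not values from any cited DT deployment.

```python
# Minimal sketch of the preset-threshold anomaly check described above.
# Sensor names, thresholds, and readings are illustrative assumptions,
# not values from any cited DT deployment.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    quantity: str   # e.g., "displacement_mm", "strain_ue"
    value: float

# Hypothetical preset thresholds for a monitored bridge component.
THRESHOLDS = {"displacement_mm": 25.0, "strain_ue": 1200.0}

def check_reading(r: Reading) -> str:
    """Return an operation instruction for one incoming reading."""
    limit = THRESHOLDS.get(r.quantity)
    if limit is None:
        return "ignore: no threshold preset for this quantity"
    if r.value > limit:
        return f"alert: {r.sensor_id} exceeded {r.quantity} limit ({r.value} > {limit})"
    return "normal: continue monitoring"

if __name__ == "__main__":
    for r in [Reading("S01", "displacement_mm", 8.2),
              Reading("S02", "strain_ue", 1350.0)]:
        print(check_reading(r))
```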

The EM stage is mainly aimed at emergencies caused by natural hazards, which differ considerably from the U and M stage in terms of causality. The DT framework not only can realize dynamic monitoring and pre-rehearsal for infrastructure objects in hazard environments but also can evaluate damage, loss, and casualties in a timely manner (Doğan et al. 2021). Combining statistical algorithms or intelligent technologies can even enable estimation of the residual functions of infrastructure objects. DT can also guide the immediate formulation of emergency strategies and evacuation routes.

The R and D stage is the final stage of the infrastructure object. Retirement characteristics can be accurately acquired from the U and M stage. A DT-driven R and D stage can realize reasonable waste recovery, establish efficient planning for the demolition process, and predict waste quantities and hazards a priori.

As shown above, this paper reassigns the possible scope of DT in the life cycle management of infrastructure and illustrates that DT has great application potential in related fields. At present, research on DT is still in the growth stage. Although there are some application cases in fine-grained production, problems such as a narrow application range and internal development limitations still hamper its deep application.

4 New technologies applied in IDPMI

Compared with other industries, infrastructure can also be seen as a product of industrialization, but its levels of mechanization, automation, intelligence, and informatization are lower. Judging from history and the experience of other industries, new technologies will permeate all aspects of human social and productive activities, thereby improving working conditions in labor-intensive and harsh environments (Bao and Li 2019). In DPMI, the integrated application of new technologies will likewise drive profound development.

4.1 Acquisition and fusion of multi-source data

Infrastructure is one of the necessary resources for maintaining human survival and development. The multi-source data generated over its whole life cycle provide unique evidence for evaluating its disaster prevention and mitigation capabilities (Sun et al. 2020). The life cycle of an infrastructure produces a host of design data, normal monitoring data, and disaster data. Among these three categories, the acquisition of disaster data is the most challenging, and the acquisition and fusion methods for disaster data largely cover the other two categories; the following discussion therefore focuses on disaster data.

The acquisition of disaster data is the primary task in reasonably evaluating the catastrophe situation and residual functions of an infrastructure (Ahammed et al. 2014). Different types of infrastructure have slightly different requirements for disaster data, which mainly comprise displacement, deformation, modal characteristics, point clouds, images, videos, and catastrophe data. Previously, disaster data acquisition usually depended on manual collection by professionals, with disadvantages such as long acquisition cycles, strong human factors, and high risk. With the development of theory and technology, structural simulation, sensors, 3D laser scanning, satellites, and unmanned aerial vehicles (UAVs) are now widely used in infrastructure disaster data acquisition (Liu et al. 2016; Akyildiz et al. 2002; Peter et al. 2019; Rotta et al. 2020; Erdelj et al. 2017). These technologies greatly simplify the acquisition process and, in particular, keep people out of complex hazardous environments. However, each technology has its own advantages and limitations.

Simulation technology can establish a model of an infrastructure in a virtual environment and reproduce or predict complex deformation, which can guide infrastructure design, disaster prevention, disaster mitigation, and normal monitoring. However, current simulation technology still cannot capture the complex materials and multi-dimensional details of an infrastructure, nor can it describe complex external environmental factors.

Sensors are widely used in infrastructure (Hodge et al. 2015). They help professionals avoid stepping into hazard areas when collecting monitoring data, and the measured data also cover invisible interior parts. However, sensor data often have a low signal-to-noise ratio, and sensor installation can introduce initial defects into the target member. In hazardous situations, sensors may return erroneous data or fail outright, as sensor networks are usually installed in hazardous locations to monitor structures and infrastructure.

Three-dimensional laser scanning samples catastrophe-affected infrastructure directly and can quickly obtain massive, irregular 3D point clouds expressed as 3D coordinates (X, Y, Z) with certain attributes (reflection intensity, rendering, etc.). It has become an important approach to characterizing the 3D space of complex real-world objects in the digital era. Furthermore, 3D laser scanners come in various types, such as onsite-installed, handheld, vehicle-mounted, airborne, and satellite-carried, which can meet the requirements of data acquisition in various complex environments (Yang et al. 2017). Although the technology has obvious advantages in catastrophe data acquisition, it still faces great challenges in the splicing, classification, recognition, and accurate reproduction of complex scenes. Moreover, a 3D laser scanner is high-precision equipment that is difficult to maintain.
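Before splicing and recognition, raw scanner output is commonly thinned by voxel-grid downsampling, which replaces all points in a cubic cell with their centroid. The NumPy-only sketch below illustrates the idea under an assumed voxel size; production pipelines typically rely on dedicated point cloud libraries.

```python
# Voxel-grid downsampling of a raw (X, Y, Z) point cloud, a common
# preprocessing step before registration and recognition. A minimal
# NumPy-only sketch; the voxel size is an assumed example value.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Average all points that fall into the same cubic voxel of edge `voxel`."""
    keys = np.floor(points / voxel).astype(np.int64)           # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)  # group points by voxel
    inverse = inverse.reshape(-1)
    counts = np.bincount(inverse)
    out = np.zeros((counts.size, 3))
    for dim in range(3):                                       # centroid per voxel
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

if __name__ == "__main__":
    cloud = np.random.rand(100000, 3) * 10.0   # synthetic 10 m x 10 m x 10 m scene
    print(voxel_downsample(cloud, voxel=0.5).shape)
```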

Satellite remote sensing, including visible light, infrared, and synthetic aperture radar, has developed rapidly. Satellite image data play an important role in many fields, such as geographical mapping, disaster monitoring, and urban planning (Cao et al. 2018). However, this collection method has several disadvantages, such as a single viewing angle, long revisit intervals, and a low signal-to-noise ratio. Moreover, the data are difficult to share owing to the confidential nature of satellite systems.

UAV remote sensing is a new technology that integrates UAVs, remote sensing sensors, differential positioning, communication, and other technologies to realize fast acquisition of target information. It has unparalleled advantages, such as low cost, strong mobility, flexible data acquisition, timeliness, repeatability, and high resolution (Yao et al. 2019). The remaining challenges for UAV remote sensing mainly concern data acquisition and accuracy.

When dealing with massive data of different sources and formats, quickly and efficiently extracting valuable knowledge becomes a key factor in the decision-making process. Many data types are heterogeneous in source and format, fuzzy, and random, and the time dimension must also be considered (Yang et al. 2020). The general steps toward information utilization can be divided into two aspects: multi-source data fusion and decision-making.

Common statistical decision-making methods mainly include probability and statistics theory (Vapnik 1995) and fuzzy mathematics theory (Zimmermann 1983). These methods require prior knowledge or additional information, such as a distribution function or a membership function; such information is usually not easy to obtain, which limits the accuracy and efficiency of data fusion. In 1982, Pawlak (1982) proposed rough set theory, an effective method for handling incomplete and uncertain data. It has since been widely used in multiple attribute decision analysis (Kadziński et al. 2016), data mining (Wei et al. 2012), artificial intelligence (AI) (Hassan et al. 2017), etc. Its most significant feature in data analysis is to "let the data speak" directly; the description of uncertainty is therefore relatively objective, but the handling of randomness is not satisfactory. With the development of AI in natural language processing, Li et al. (2009) proposed the cloud model for transformation between qualitative and quantitative expression, which jointly captures randomness and fuzziness and is more effective and comprehensive than a single random or fuzzy model in decision-making (Wang et al. 2017). The above theories have laid the foundation for the broad development and application of intelligent methods, such as machine learning and deep learning. Subsequently, many decision-making cases based on multi-source data have emerged. Using rough set theory and an integral operator information model, Xie et al. (2012) accurately estimated flood risk probability by analyzing the characteristics of multi-source information. Liu et al. (2021a) proposed a deep multilayer fusion network that can fuse high-resolution RGB images, hyperspectral images, point cloud data, etc.

Many methods address the fusion of multi-source heterogeneous information, such as weighted fusion, the Bayesian method (Cooper and Herskovits 1992), Kalman filtering (Welch 2001), neural networks (Marshall 1995), and Dempster-Shafer theory (Sentz and Ferson 2002). Weighted fusion is a simple method for processing data directly; however, it loses much of the original information and is inapplicable to uncertain information. The Bayesian method is relatively common; however, it requires a prior probability. Kalman filtering is mainly based on system dynamics; however, its application range is mostly limited to linear problems. A neural network fuses information through complex connections by adjusting the weights among internal nodes, but its training requires a large number of samples, and disaster data are scarce. Subsequently, various other algorithms were developed. Dempster-Shafer theory, proposed by Dempster and improved by Shafer (1992), can effectively handle conflicting information without prior probabilities or training samples. Many research cases of multi-source data fusion have likewise emerged. Wu et al. (2003) proposed an improved Iterative Closest Point algorithm to achieve accurate image registration and fusion. Wu et al. (2019) proposed a road pothole detection method based on a deep learning algorithm that achieves high prediction accuracy for satellite images, simulation data, and mobile-acquired images. Nawari (2019) developed a theoretical background to support a neutral data standard by integrating and transforming semantic rules into a computable model.
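To make Dempster's rule of combination concrete, the following sketch fuses two basic probability assignments over a two-state frame of discernment (intact vs. damaged). The damage-state labels and the mass values are hypothetical; the example only illustrates how conflicting evidence is normalized away without priors or training samples.

```python
# Dempster's rule of combination for two mass functions over a small
# frame of discernment. Labels and mass values are hypothetical.
def combine(m1: dict, m2: dict) -> dict:
    """Fuse two basic probability assignments (keys are frozensets)."""
    fused, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb           # mass assigned to contradictions
    k = 1.0 - conflict                         # normalization constant
    return {s: v / k for s, v in fused.items()}

if __name__ == "__main__":
    # Two evidence sources rating damage states: Intact (I), Damaged (D);
    # frozenset("ID") is the "unknown" mass covering both states.
    m_sensor = {frozenset("I"): 0.6, frozenset("D"): 0.3, frozenset("ID"): 0.1}
    m_image  = {frozenset("I"): 0.2, frozenset("D"): 0.7, frozenset("ID"): 0.1}
    for s, v in combine(m_sensor, m_image).items():
        print(set(s), round(v, 3))
```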

Certainly, there are numerous cases of multi-source information fusion and decision-making across industries, and most derive from similar theories and methods; this section has presented only some representative results. Figure 5 illustrates the evolution of multi-source data fusion, which can be divided into three stages: result fusion, feature fusion, and semantic fusion.

Fig. 5 The evolution of multi-source data fusion

4.2 Computer vision and learning system

With the improvement of databases and computing power, computer vision and AI have become widely used for high-speed data interpretation (Dwivedi et al. 2021). Computer vision is mainly oriented toward the analysis of images and videos. As discussed in Sect. 4.1, the disaster data of an infrastructure contain a large amount of high-dimensional data, such as images, videos, sensor data, and point clouds, which can be used to reveal the underlying state accurately (Ko and Kwak 2012). The origin of computer vision can be traced back to 1959, when Hubel and Wiesel (1959) carried out experiments on cats to study the working mode of vision. In 1966, the Summer Vision project introduced computer vision technology; its main goal was to build a program system that divides images into possible objects, background, and chaotic regions (Papert 1966). Subsequent milestones include the first application of neural networks in computer vision (LeCun et al. 1990), the first proposed application for face recognition (Viola and Jones 2001), the proposal of ImageNet and AlexNet (Deng et al. 2009; Krizhevsky et al. 2012), and animation generation combined with convolutional neural networks (Tesfaldet et al. 2018). Computer vision technology has since been widely used in optics, medicine, computer science, etc.

Machine learning and deep learning are collectively referred to as learning systems in this paper; both belong to the category of AI. The concepts of AI and machine learning were proposed at the Dartmouth Conference in 1956 and aim at enabling machines to learn rules from historical data and then apply them in the future (Simon 1983). After years of development and improvement, many machine learning algorithms have emerged, including linear regression, nearest neighbor (Hart 1968), logistic regression (Menard 2004), decision trees (Franco-Árcega et al. 2012), random forests (Cutler et al. 2012), Bayesian methods (Feng 2010), clustering algorithms (Havens et al. 2012), and support vector machines (Saunders et al. 2002). Deep learning is a branch of machine learning that incorporates the idea of artificial neural networks (McCulloch and Pitts 1943) and aims to reduce human factors and stimulate the intellectual ability of the system. This paper mainly focuses on the practices of computer vision and learning systems in IDPMI and provides only a brief introduction to the algorithms themselves.

Computer vision and learning systems form an organic whole, each with its own purpose and methods, and are currently used in many fields. Likewise, there is a host of research practice based on computer vision and learning systems in the field of DPMI. To fundamentally reduce the impact of hazards, researchers have conducted studies on structural components (Feng et al. 2020), structural systems (Wang et al. 2009), optimized designs (Tu et al. 2020; Yang et al. 2019), etc. In particular, not only the health monitoring and damage assessment of infrastructure objects at the global level (Gong et al. 2012; Ram et al. 2017; Chen et al. 2017; Gao et al. 2018) but also the identification and quantification of surface cracks at the component level (Yang et al. 2018b; Dorafshan et al. 2018) can be realized in this way. With increasing network depth, the number of identifiable catastrophe objects is growing and gradually covers many catastrophe types, such as steel corrosion, bolt loosening, concrete cavities, and steel peeling (Cha et al. 2018). Regardless of the type of intelligent technology adopted, sufficient datasets are a prerequisite for generalization ability, even though disaster data acquisition is relatively difficult. To promote the development of IDPMI, Maxar (2021) collected over 850,000 high-definition satellite images of buildings classified into six different types under natural disasters, PEER created the open-source Φ-Net dataset covering eight types of catastrophe-affected infrastructure objects (Gao and Mosalam 2019), and Kaggle (2010) also provides open-source datasets covering multiple disciplines. These datasets provide strong support for the development of computer vision in IDPMI.
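As an illustration of the classifier family used in the surface-damage studies cited above, the following minimal PyTorch sketch defines a small convolutional network that maps image patches to damage classes. The architecture, input size, and the four class labels are illustrative assumptions rather than any cited model.

```python
# Minimal PyTorch sketch of a surface-damage image classifier of the
# kind used in the crack-identification studies cited above. The
# architecture and the four damage classes are illustrative assumptions.
import torch
import torch.nn as nn

class DamageNet(nn.Module):
    def __init__(self, num_classes: int = 4):  # e.g., crack/corrosion/spalling/intact
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = DamageNet()
    batch = torch.randn(8, 3, 224, 224)   # stand-in for labeled image patches
    logits = model(batch)
    print(logits.shape)                   # torch.Size([8, 4])
```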

In terms of infrastructure damage assessment, the identification and quantification of surface catastrophe features fall far short of an accurate assessment of global damage (Khan et al. 2019). Analyzing the catastrophe process, and even reproducing the scenario, would be very helpful in accurately evaluating the damage state and mechanism of an infrastructure (Shafieezadeh and Burden 2014). With the development of computer vision technology, Sun et al. (2018) were able to reproduce the 3D geometry and structure of objects from images without a complex calibration process. Although such methods have emerged only recently, they have shown satisfactory results on various tasks in computer vision and graphics (Yang et al. 2018a). Ma et al. (2015) and Zeibak-Shini et al. (2015) used as-built BIM and 3D point clouds to reproduce damaged components and structures over time; however, these works can only reproduce surface catastrophe features at the data acquisition time and do not handle component overlap, out-of-plane deformation, cracks, or the catastrophic process. 3D reconstruction plays an important role in many fields, such as digital entertainment, social media, emotional analysis, and personal identification (Zhang et al. 2021). Texler et al. (2020) built a U-Net-based framework to realize human body and facial action scene visualization, which requires only a small number of training samples, supporting tags, and frame-by-frame extraction. Research on 3D reconstruction technology is still in its infancy, and no such study on IDPMI yet exists; this remains an urgent open problem. The introduction of 3D time reproduction and even scenario reproduction technology may therefore play a significant role in the development of this field.

4.3 Sensors and IoT

Sensor networks and the IoT are the emerging monitoring and control methods of the twenty-first century. It is generally recognized that the concept of the IoT was proposed by Ashton in 1999 (Kopetz 2011). In 2005, the International Telecommunication Union issued a report of the same name, and the scope and content of the IoT were redefined (ITU 2005). The widely adopted current definition is that the IoT is a system that includes sensors, actuators, or both and is directly or indirectly connected to the Internet (Mostefa and Abdelkader 2017). The sensor, the basic component of the IoT, is widely used in fields such as environment monitoring, climate control, military surveillance, structural health monitoring, medical diagnostic monitoring, and air pollution monitoring (Dawood and Athisha 2013). In recent years, sensor networks have become a basic monitoring tool, especially in infrastructure disaster management systems. Common sensors include motion detection sensors, cameras, inclinometers, temperature sensors, and ultrasonic sensors (Aziz and Aziz 2011). A sensor network plays two roles in disaster management for infrastructure. First, it provides a more efficient disaster warning system. Second, it can monitor multiple parameters, thereby helping infrastructure perceive different hazards (Priyadarshinee et al. 2015), such as earthquakes (Zou et al. 2019), landslides (Qiao et al. 2013), floods (Du et al. 2019), and volcanic eruptions (Werner-Allen et al. 2005). Moreover, it provides data support for search and rescue (Wang et al. 2010).
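Because raw sensor channels often have a low signal-to-noise ratio (Sect. 4.1), a recursive filter is commonly applied before a warning rule acts on the data. The following scalar Kalman filter sketch denoises one channel under assumed process and measurement noise variances; the synthetic displacement signal is for illustration only.

```python
# Scalar Kalman filter for denoising a single sensor channel before it
# feeds a warning rule. Noise variances and the synthetic signal are
# assumed for illustration only.
import random

def kalman_1d(measurements, q=1e-4, r=0.25):
    """q: process noise variance, r: measurement noise variance."""
    x, p = measurements[0], 1.0        # state estimate and its variance
    out = []
    for z in measurements:
        p = p + q                      # predict (static-state model)
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with measurement z
        p = (1 - k) * p
        out.append(x)
    return out

if __name__ == "__main__":
    truth = 5.0                                        # constant displacement, mm
    noisy = [truth + random.gauss(0, 0.5) for _ in range(200)]
    print(round(kalman_1d(noisy)[-1], 2))              # converges near 5.0
```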

The IoT builds on the advantages of sensor networks, enhancing their perception and transmission capabilities and giving them higher application value in disaster management. The number of devices connected via the Internet has increased significantly in the past few years, and the IoT has inspired new ways to connect and associate various devices. To date, about 30 billion users connect to each other through the Internet, and about 5 billion IoT devices have been deployed or connected (Sharma et al. 2021). A complete IoT system may also include a controller, a data transceiver, a protocol stack, and a data analysis system. The main goal of the IoT is to maximize profit and improve production efficiency through the intelligent interconnection of objects or machines under Industry 4.0. In the simplest data stream, terminal sensors transmit data to predefined cloud servers; new computing paradigms such as cloud computing and edge computing can effectively improve the efficiency of data analysis (Taleb et al. 2017). Ali et al. (2016) divided the IoT into four layers, the perception layer, network layer, support layer, and application layer, and discussed network security problems layer by layer. Using multi-layer IoT architectures, many researchers have studied disaster monitoring with IoT systems. Choi et al. (2016) developed an IoT system with intelligent image and communication capabilities for infrastructure fire monitoring. Sharma (2020) proposed a highly integrated seismic management system based on the IoT and a deep learning model, which helps link disaster response, emergency management, and disaster relief. The IoT depends on the high-speed shared transmission mechanism of the Internet to realize the interconnection of objects; therefore, Internet security must also be considered in any IoT system. The ability of an IoT system to control an actual object can become a security vulnerability when the network is attacked. Especially in disaster management, the involvement of confidential and private data makes the security challenges more arduous (Allouch et al. 2019). The primary reason is that the security protection of physical objects is weak. Furthermore, the IoT is large in scale and its paradigm is scattered, making targeted security countermeasures difficult to propose. Moreover, most IoT components have limited memory and do not support complex security schemes. Such security issues drove the design and development of the VIRTUS middleware: Conzon et al. (2012) built VIRTUS on the XMPP protocol, providing secure communication in the IoT and ensuring the exchange of data in a dedicated network. Ferrag et al. (2021) examined the application of the IoT in fighting the COVID-19 epidemic and divided its security problems into five categories: authentication and access control solutions, key management and encryption solutions, blockchain-based solutions, intrusion detection systems, and privacy protection solutions. They also provided suggestions for future development.

The wide application of sensor networks and IoT technology has greatly improved efficiency in IDPMI and has considerable economic benefits and application value. However, their sparse measurements make it difficult to characterize the overall state from local states accurately. Future research should pay more attention to high-speed data transmission, intelligent data mining, and decision-making, which can push the deep application of sensor networks and IoT technology.

4.4 Extended reality

Extended reality (XR) is the general term for immersive interactive technologies, including virtual reality (VR), augmented reality (AR), and mixed reality (MR). These technologies provide experiences that blur the boundaries between the real world and virtual environments. VR uses equipment to simulate a virtual world, providing users with an immersive visual and auditory simulation. AR focuses on the seamless integration of real-world and virtual-world information. MR focuses on mixing the real and virtual worlds to generate a new visual environment (Milgram and Kishino 1994; Vukelic et al. 2021). Although the biggest demand for these immersive technologies comes from creative industries (such as video entertainment and games), XR has great potential for improving infrastructure emergency management. Current research on disaster management has tended to focus on enhancing the capacity of the infrastructure itself, with relatively little research on human behavior and emergency response. Most people lack experience in disaster response, which may lead them to adopt unjustified disaster avoidance actions (Bernardini et al. 2016).

Researchers have tried to apply XR to infrastructure disaster management, striving to develop it into a training platform for scenario emergencies and an assessment tool for disaster capability (Ronchi et al. 2015). The core of XR implementation is a detailed 3D model, which also requires information on the surrounding environment (Sampaio and Martins 2014). BIM is a multi-dimensional (3D space, 4D time, 5D cost, and ND applications) model information integration technology; therefore, combining BIM and XR will help promote the application of XR in infrastructure-related fields (Wang et al. 2015). Because of the high fidelity and implantable characteristics of XR, it is likely to be applied to the planning of infrastructure disaster avoidance systems. Nagao et al. (2019) generated an internal model of a building from point cloud data and deep learning, combined it with VR components to give personnel an immersive experience inside the building, and emphasized the advantages of this technology in disaster simulation. Jing et al. (2021), taking an underground mine as the setting, used 3ds Max to build the overall mine roadway model to scale and used the Floyd algorithm to plan optimal disaster-avoidance paths.
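The Floyd (Floyd-Warshall) algorithm used by Jing et al. (2021) computes shortest paths between all node pairs of a roadway graph in O(n³) time. The sketch below runs it on a hypothetical four-node graph; the edge costs and the refuge-chamber layout are assumptions, not data from the cited study.

```python
# Floyd-Warshall all-pairs shortest paths, the algorithm used for
# disaster-avoidance routing in the study cited above. The 4-node
# roadway graph is a toy example, not data from that study.
INF = float("inf")

def floyd_warshall(w):
    """w: adjacency matrix of edge costs; returns shortest-path costs."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):                 # allow node k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

if __name__ == "__main__":
    # Nodes: 0 refuge chamber, 1-3 roadway junctions (hypothetical costs).
    graph = [[0, 4, INF, 1],
             [4, 0, 2, INF],
             [INF, 2, 0, 5],
             [1, INF, 5, 0]]
    print(floyd_warshall(graph)[2][0])   # cheapest route from node 2 to refuge
```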

However, with the increasing complexity of design schemes and catastrophe processes, building sufficiently fine models becomes impractical. Behzadan et al. (2015) introduced the application of AR in infrastructure systems, offsetting the huge cost of 3D model engineering by using a real-world background.

5 Development of DT-related technology in the infrastructure field

Based on the results of the literature review, word frequency maps of DT- and IDPMI-related search keywords were drawn, as shown in Fig. 6. Comparison shows a large overlap in the high-frequency words of DT and IDPMI, which indicates that DT-driven IDPMI is feasible from the perspectives of technology and development.

Fig. 6 Word frequency maps of DT and IDPMI

5.1 Intelligent design

At the beginning of product design, the design processes and methods must be considered. For infrastructure, many codes and databases, including those of ISO, CEN, FEMA, and GB, are dedicated to standardizing the design process, which lays the foundation for safe use throughout the life cycle. In most cases, these codes are only basic guidelines and rarely address benchmarks, performance levels, or detailed information; the process must be used in conjunction with specific databases or evaluation systems (Coldstream Consulting 2013). Moreover, many codes were established on the basis of small batches of data, such as experiments or practices combined with mathematical models, and their application value has been verified over many years of construction practice. However, as design schemes grow more complex and infrastructure requirements more varied, the limitations of conventional design specifications have become increasingly evident in data utilization, designer expertise, and application scope (Chakrabarti et al. 2011). When intelligent systems are applied to these industries, their advantages of high efficiency, stability, simplicity, and minimal human factors will bring profound change, and the field of infrastructure design will certainly be no exception.

The purpose of design is to develop a scheme that satisfies the intent. The process consists of three main tasks: generation, optimization, and evaluation. The generation process introduces the expected solutions, whereas the optimization and evaluation processes maximize the trade-off between solution benefits and performance (Chandrasekaran and Josephson 2000). In infrastructure, intelligent design methods can be roughly divided into three categories: function-based design methods (Pahl et al. 1996), semantic/parameter-based design methods (Krishnamurti and Stouffs 1993), and analogy-based design methods (Falkenhainer et al. 1989). The function-based design method focuses on designing objects on the basis of functions, generating design schemes from function models, and weighing design decisions among optional components with the same function (Pahl et al. 1996). A function model can be developed from single-function requirements during the early stage of research; as function requirements increase, however, the complexity of the function model and the computational burden become obstacles to such methods. Functions have multiple definitions and fusion methods; for example, Chandrasekaran and Josephson (2000) used causality to define functional integration, and Chakrabarti (1998) established the relationship between potential solutions and expected behaviors, which can achieve the integration of similar functions. These methods provide theoretical support for function integration, but only for highly correlated function requirements. As related research has deepened, a host of function fusion models has been established, including the structure-behavior-function model (Chen et al. 2013), the function-behavior-structure model (Al-Fedaghi 2016), and the conditional autoregressive model (Zhang et al. 2016), and specific design cases have demonstrated the wide applicability of each model. To improve the convenience of product function design, many institutions have developed retrieval knowledge bases for different classes (Bryant et al. 2005; Christensen and Schunn 2007; Regli et al. 2009), and the classification of knowledge bases keeps growing. Bohm et al. (2008) and Zhao and Enrico (2019) classified precise knowledge bases of design information, covering artifacts, functions, faults, physics, performance, perception, and media. However, owing to the limitations of information storage formats, accessibility and availability across multiple design environments need further consideration. Computer generation technology not only can make full use of rich design knowledge bases but also covers knowledge representation and reasoning; it can generate potential design schemes on the basis of existing products, prior knowledge, and functional requirements (Chakrabarti et al. 2011). This type of design process is similar to the working principle of an expert system; therefore, intelligent technology can also be applied to function-based design. Huang and Zheng (2018) and Chaillou (2019) realized the recognition and generation of architectural drawings using generative adversarial networks. Based on the functional requirements of DPMI, some intelligent design practices have also emerged in this field, including fire prevention (Naser 2019) and earthquake resistance (Mirrashid and Naderpour 2021).

The parametric/semantic-based design method focuses on developing computer-interactive or automatic design libraries and parametric/semantic rules, which are used to generate various types of new designs (Jabi 2013). Each parameter controls or indicates an important property of the design result, and changing the value of a parameter changes the design result (Xu 2012). The rise of parametric design provides technical support for flexible and changeable design methods with deep professional rationality (He and Lai 2019). In the parametric design process, the design requirements are regarded as parameters, and specific rules serve as instructions. A computer programming language can be used to build a parametric model that describes the relationship between the parameters and the generated design; the design can then be regenerated by changing the parameters, as illustrated in the sketch below. The distribution and relative proportions of forces are directly determined by the shape of the structure (Larsen and Tyas 2003); thus, the rational design of the structural shape is the first step in improving its disaster prevention and mitigation capabilities. BIM is integrated software for the life cycle of buildings and structures that can improve their functional design, and it shows a clear trend toward parametric/semantic development. Here, semantic refers to the automatic or semi-automatic design of a parametric model through semantic rules, using software that docks with a computer programming language (Belsky et al. 2016). The Industry Foundation Classes (IFC) is a modeling language for enhancing the interoperability of BIM systems (Building Smart 2013). After years of development, it has not only been applied to parametric/semantic models (Zhao 2017) but has also made great progress in compiling existing BIM models from 3D point clouds (Brilakis et al. 2010; Zeibak-Shini et al. 2016), a process that coincides with the data-driven model updating of DT. Many similar studies have been published on this topic. Zhao et al. (2017) and Xu et al. (2016) proposed methods combining IFC and WebGL technology to create 3D visualizations in a web environment. Lu et al. (2020a) extracted building data from computer-aided design (CAD) drawings using optical recognition technology, extended and complemented the building information using a neuro-fuzzy system and an image processing program, and developed an IFC-based semantic/parametric model that can be applied to the creation of a DT virtual model. Provided that modeling accuracy is ensured, semantic/parametric models can achieve high-precision dynamic updates and visualization and can therefore meet the modeling requirements of DT (Boje et al. 2020). Although the BIM-based parametric/semantic model resembles a prototype DT system, large gaps remain in data transmission, physical objects, and services.
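The parametric workflow described above can be illustrated with a toy example: design requirements enter as parameters, and a rule set regenerates the geometry whenever a parameter changes. The sizing rule below (depth roughly span/15) is a rough rule of thumb used purely for illustration and is not a code-compliant design check.

```python
# A toy parametric design in the sense described above: design
# requirements enter as parameters, rules regenerate the result when a
# parameter changes. The sizing rule is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class BeamParams:
    span_m: float               # design requirement
    load_kn_per_m: float        # design requirement
    depth_ratio: float = 1 / 15 # assumed rule of thumb: depth ~ span / 15

def generate_beam(p: BeamParams) -> dict:
    """Regenerate the beam geometry from the current parameter values."""
    depth = p.span_m * p.depth_ratio
    width = depth / 2
    max_moment = p.load_kn_per_m * p.span_m ** 2 / 8   # simply supported beam
    return {"depth_m": round(depth, 3),
            "width_m": round(width, 3),
            "midspan_moment_knm": round(max_moment, 1)}

if __name__ == "__main__":
    params = BeamParams(span_m=6.0, load_kn_per_m=20.0)
    print(generate_beam(params))
    params.span_m = 9.0              # change one parameter...
    print(generate_beam(params))     # ...and the design regenerates
```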

The analogy-based design method focuses on solving design problems through analogical knowledge, with special emphasis on case-based design and biologically inspired design. Whether designing formally or informally, designers often draw inspiration from previous design knowledge (Christensen and Schunn 2007). A case-based design method can be developed from a knowledge system. Its goal is to solve similar design problems through knowledge acquisition, storage, and maintenance (Paek et al. 1996), and the common approach is to extract and index design elements through a searchable case library (Goel et al. 2015). Commonly used retrieval algorithms include analogy (Liu et al. 2019a), serial/parallel search (Huan et al. 2008), and hierarchical search (Lim et al. 2012). Adaptive design can also adopt a case-based method: starting from a determined case that needs modification, a solution is retrieved from the stored cases. Avramenko and Kraslawski (2008) summarized various adaptation techniques, such as structural adaptive design and derived adaptive design, and clarified the conversion from the perspective of inadaptability to case design. However, fully automatic case-based design remains difficult.

Biological systems effectively complete diverse tasks under various environments and constraints, many of which resemble engineering design problems; thus, they provide a rich source of design inspiration (Romero et al. 2014). Bio-inspired design studies include the development of bionic processes, databases, and tools (Hill et al. 2005; Hu et al. 2017); the main steps can be summed up as finding similar biological inspirations, problem analysis and transfer, and design proposals. In building design, biologically inspired design is mostly used for structural optimization. Common algorithms include the genetic algorithm (Wan 1993), the harmony search algorithm (Mahdavi et al. 2007), the simulated annealing algorithm (Hearn 1986), the ant colony algorithm (Blum 2005), and artificial neural networks (Judith and Deleo 2001). Compared with traditional mathematical solution algorithms, biological heuristic algorithms have obvious advantages in complex irregular spaces and discrete-variable optimization problems. They are more widely applicable: not only do they not depend on the selection of initial points, but they also have a higher probability of converging to the global optimum in complex structural optimization problems. With their help, rapid progress has recently been made on engineering optimization problems, and these algorithms show good stability in the design process. Paya et al. (2010) used a multi-objective simulated annealing algorithm to optimize the design of reinforced concrete frame structures; based on 77 optimization objectives, such as economic cost, constructability, environmental impact, and overall safety, the structural optimization design could be completed. Esfandiari et al. (2018) proposed a multi-criteria decision-making particle swarm optimization algorithm for the optimization of 3D reinforced concrete frames. Most biological heuristic optimization algorithms have been developed over the past few decades, are mostly in their second or later generations, and have good applicability to nonlinear and non-directed optimization problems (Zavala et al. 2014). Extensive research and testing have shown that these algorithms have satisfactory computational stability (Babaei and Mollayi 2016). For improving product design in a DT system, the authors consider these optimization algorithms well suited for integration; however, there is currently no clear research case.
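The following sketch shows the skeleton shared by many of the biological heuristic algorithms above, using a genetic algorithm to minimize the cross-sectional area of an axially loaded member under a stress limit. The objective, penalty form, and all parameter values are assumptions for illustration; the optimum of this toy problem is simply the smallest feasible area (about 50 cm²).

```python
# Minimal genetic-algorithm sketch for the kind of sizing optimization
# discussed above: minimize member cross-section area subject to a
# stress limit. Objective, penalty, and parameters are all assumed.
import random

LIMIT, LOAD = 200.0, 1000.0             # allowable stress (MPa), axial load (kN)

def fitness(area_cm2: float) -> float:
    stress = LOAD * 10 / area_cm2       # kN/cm^2 -> MPa (1 kN/cm^2 = 10 MPa)
    penalty = 1e3 * max(0.0, stress - LIMIT)
    return area_cm2 + penalty           # smaller is better

def evolve(pop_size=30, gens=100, lo=10.0, hi=200.0):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]                     # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                           # crossover
            child += random.gauss(0, (hi - lo) * 0.02)    # mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=fitness)

if __name__ == "__main__":
    print(f"optimal area ~ {evolve():.1f} cm^2")   # expect near 50 cm^2
```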

The progress of design methods has always accompanied the development of technology. Before the advent of computer-aided technology, designers could express the design intent and specific details of a design plan only through hand-drawn drawings, which prolonged the design cycle and made drawings difficult to modify and reuse. With the emergence of computer-aided technology and BIM, a large number of basic elements and related information can be organized efficiently to check for contradictions and errors in the design process, which improves efficiency to a certain extent (Soust-Verdaguer et al. 2017). Early computer-aided design systems were driven by computers rather than by design, forcing designers to learn programming skills systematically to use various computing tools, which often hindered their conceptual process and creativity. Future computer-aided design tools need to be driven more by design than by computer. As engineering complexity and design requirements increase, it has become increasingly difficult to solve problems such as the prediction, visualization, and design optimization of the various complex scenarios in the life cycle of an infrastructure. Most of the technologies and stages (design, construction, operation, disaster management, and dismantling and recycling) exhibit decentralized development and compatibility difficulties. Thus, an efficient and intelligent infrastructure design process not only requires much deeper integration of multiple technologies but also must treat multi-stage integration as indispensable.

DT has great potential in the life cycle management of physical objects and in the integration of multiple technologies. In the design stage of an infrastructure, there are few data and little real-time information, which inevitably leads to differences between the virtual model and the physical situation. Thus, to meet the demands for hyper-realistic visualization and experience of the proposed object from design, construction, and customers, designers must treat the accuracy and fidelity of the virtual model as the core elements of DT in the design stage (Schluse and Rossmann 2016). Building a virtual model of an infrastructure object first requires a natural, accurate, and efficient mathematical expression that supports data definition and transmission at all stages of its life cycle (Zhuang et al. 2017). Common virtual modeling methods in DT practice mainly include the following. The first utilizes 3D modeling software, including BIM, 3ds Max, and simulation software (Boje et al. 2020; Lin et al. 2021); many researchers have also independently developed modeling programs (Lai et al. 2021). The second performs modeling using measuring instruments and equipment, such as laser scanners and infrared imaging systems (Dollner 2020). The third performs modeling through multi-depth image fusion (Zheng et al. 2020). Very few papers address DT-related technology supporting product design, and a complete DT design framework has not been presented in them. The authors found a total of seven related papers in the literature summary. Piascik et al. (2010) applied the DT concept to the life cycle management of aircraft, which also involves the early design stage, but did not provide a definite framework or ideas. Canedo (2016) regarded DT as a new way to manage the industrial IoT that can significantly coordinate design and production processes. Zhuang et al. (2017) explored the application of DT in product design and proposed a simple design-oriented DT framework. Yu et al. (2017) argued that applying DT can strengthen the collaboration between design and manufacturing. Zhang et al. (2017) used a glass production line as an example to verify the effectiveness of DT-supported product design. Tao et al. (2018a; 2018b) expanded the application of DT in product design, including product planning and conceptual and detailed design, and carried out a design case study on bicycles. Evidently, DT is still at an early stage in product design and lacks practical support for complex products.

5.2 Intelligent construction and maintenance

The transformation from traditional construction to intelligent, digitized, and information-based construction is an inevitable trend in the new era (Liu et al. 2019d). With the integration of emerging information technologies such as cyber-physical systems, BIM, IoT, cloud computing, and AI, intelligent construction can provide new construction approaches for infrastructure to achieve information integration and a comprehensive IoT in the construction process (Lu 2019; Yu and Hao 2020; Liu et al. 2020a, b). A high-fidelity virtual model can simulate and portray the state and behavior of physical entities under DT, and it can preview or simulate all activities of the physical entities in a virtual space in advance. It is also an important bridge between the physical space, the virtual space, and various other technologies (Negri et al. 2017). The introduction of DT can improve construction quality, reduce the incidence of errors, and effectively increase the intelligence and informatization of the infrastructure construction process (Sepasgozar et al. 2020). It also promotes the transformation and upgrading of intelligent construction and helps in the healthy operation and maintenance of construction systems.

Traditionally, the inspection of infrastructure construction status mainly relies on contact measuring equipment, such as tape measures and calipers. Several simple non-contact solutions, such as levels, theodolites, total stations, and GPS, are also widely used and can generate higher-precision data (Fathi and Brilakis 2013). However, these methods usually require manual single-point measurements, which are very time-consuming. Combined with the development of the data acquisition technologies introduced in Sect. 3.1, including remote sensing and 3D laser scanning, more efficient non-contact construction inspections can be promoted (Arashpour et al. 2021). These technologies can quickly capture the target data and transform them into a 3D point cloud with an accuracy of a few millimeters to several centimeters (Khoshelham 2018). The normal use of 3D point clouds usually requires a reverse reconstruction process to convert measurement data into corresponding 3D semantic digital counterparts. This is also a prerequisite for applying 3D point clouds to automated quality assessment and detection in many actual construction processes. As a key bridge for digital and intelligent development, DT can realize the integration of data and reverse modeling and realize the interaction between a virtual space and a physical space (Liu et al. 2021b, c). Based on the DT 5D model for industry (Tao et al. 2018c), a 5D model was redefined in this study for intelligent construction and maintenance under DT, as shown in Eq. 1.

$$M_{\text{BDT}} = (B_{\text{PE}}, B_{\text{VE}}, B_{\text{SS}}, B_{\text{DD}}, B_{\text{CN}}) \tag{1}$$

where BPE represents the physical entity, BVE is the virtual model, BSS is the intelligent construction service for the whole life cycle of the building, BDD is the whole life cycle data of the building object, and BCN is the connection between the modules.

BPE can be roughly divided into two parts: the infrastructure and its structural components, and the key elements of construction and maintenance, including personnel, machinery, materials, the environment, and the service and control center.

BVE is conventionally established using a reverse modeling process. In the design stage, construction simulation and maintenance planning can be carried out in the virtual space based on the high-fidelity BVE, and even catastrophe prediction under various disaster conditions can be realized. In the construction stage, BVE provides real-time feedback and regulation to the entire construction process through continuous updating with real-time construction data and accumulated historical data. In the maintenance stage, BVE can predict conflicts or faults in real time and feed decision commands back to the physical space.

BSS refers to the actual needs of the physical space. It relies on the support of the virtual space's algorithm base, model base, and knowledge base (including expert knowledge, industry standards, rule constraints, inference, and other data processing methods) to make decisions on the problems encountered over the life cycle of the infrastructure, as well as of its equipment and components, to meet the different requirements of different participants.

BDD represents the multidimensional data of the entire life cycle of the infrastructure. It can be roughly divided into two parts: real data and virtual data. Real data, including measurement data, sensor data, remote sensing data, and point cloud data, represent the real state of the infrastructure. Virtual data, including architectural and structural drawings, models, simulation data and generated data, represent the predicted state in a specific stage of the infrastructure life. Multi-source data fusion is a challenge that must be faced to realize the life cycle management of an infrastructure.

BCN is used to realize data sharing and decision transmission among BPE, BVE, BSS, and BDD, and its content includes both the data transmission technology and the approach used to build the network.
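
To make the composition of Eq. 1 concrete, the following minimal Python sketch represents the five modules as data structures. All class and field names are illustrative assumptions, not constructs defined in the cited works.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class PhysicalEntity:       # B_PE: infrastructure plus construction/maintenance elements
    components: List[str] = field(default_factory=list)
    site_elements: List[str] = field(default_factory=list)  # personnel, machinery, ...

@dataclass
class VirtualModel:         # B_VE: reverse-modeled digital counterpart
    geometry: Any = None
    last_update: float = 0.0  # timestamp of the most recent data-driven update

@dataclass
class ServiceLayer:         # B_SS: decision services backed by model/knowledge bases
    services: Dict[str, Callable[..., Any]] = field(default_factory=dict)

@dataclass
class TwinData:             # B_DD: life cycle data, split as in the text
    real: Dict[str, Any] = field(default_factory=dict)     # sensing, point clouds, ...
    virtual: Dict[str, Any] = field(default_factory=dict)  # drawings, simulation output

@dataclass
class Connection:           # B_CN: links moving data and decisions between modules
    channels: Dict[str, str] = field(default_factory=dict)  # e.g. {"sensing": "5G"}

@dataclass
class BuildingDigitalTwin:  # M_BDT = (B_PE, B_VE, B_SS, B_DD, B_CN)
    pe: PhysicalEntity
    ve: VirtualModel
    ss: ServiceLayer
    dd: TwinData
    cn: Connection
```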

Currently, the development of DT-related technologies is in a growth period, and the overall level of technology application is insufficient. One purpose of DT is to ensure that construction is precise during the construction stage (Maalek et al. 2019). Tan et al. (2020) used BIM and remote sensing data to develop a virtual model of precast building units; however, the recognition accuracy for curved lines and surfaces is low. Bosché et al. (2015) constructed a DT of cylindrical components using the Hough transform and a scanning-BIM reconstruction method, so that deviations of the actual object from the designed BIM model, as well as its correctness and completeness, can be automatically screened. However, this method requires a preset direction and is limited to simple components. At the structure level, Liu et al. (2021b, c) elaborated a DT framework for the construction process based on a 5D model and found that DT can be applied to the construction of precast infrastructure. However, this study only provided a rough construction concept of DT and did not mention the key data processing algorithms, construction quality diagnosis, or control process. Tran et al. (2021) proposed an evaluation framework for the geometric quality of precast infrastructure based on the DT concept and proposed a control method for the accuracy, completeness, and correctness of the components under semantic rules. However, this complete DT framework has not yet been developed.
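
As an illustration of the automated deviation screening described above, the following Python sketch compares an as-built scan against points sampled from a design model using a nearest-neighbour query. It is a minimal sketch only: the 5 mm tolerance and the toy flat-surface geometry are assumptions, not values or methods from the cited studies.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_report(as_built: np.ndarray, as_designed: np.ndarray,
                     tol: float = 0.005) -> dict:
    """Nearest-neighbour deviation of an as-built scan (N x 3, metres)
    from points sampled on the design model (M x 3). The 5 mm tolerance
    is an illustrative assumption, not a standard value."""
    tree = cKDTree(as_designed)
    dist, _ = tree.query(as_built, k=1)
    return {
        "mean_dev_m": float(dist.mean()),
        "p95_dev_m": float(np.percentile(dist, 95)),
        "share_within_tol": float((dist <= tol).mean()),
    }

# Toy usage: a scan offset 3 mm from a flat design surface.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                            np.linspace(0, 1, 50)), -1).reshape(-1, 2)
design = np.column_stack([grid, np.zeros(len(grid))])
scan = design + np.array([0.0, 0.0, 0.003])
print(deviation_report(scan, design))
```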

Maintenance represents the longest stage in the entire life cycle of an infrastructure and is one of the most expensive industrial processes. The maintenance of an infrastructure using a DT is concerned not only with the infrastructure itself but also with the related equipment. Peng et al. (2020) developed a DT framework for a hospital system that can visualize the medical infrastructure itself, equipment, and room occupancy. A DT system can also centralize all maintenance data and select meaningful data according to different needs (Varé and Morilhat 2020). During the entire life cycle, different participants can adopt the same DT system to realize their various needs, allowing all participants to provide data and professional knowledge and to make effective use of the information they care about (Autiosalo et al. 2021). Thus, one of the key factors in DT development is the establishment of paradigm standards. Varé and Morilhat (2020) developed a DT model for nuclear reactors that can support data exchange and use among different participants. This approach can help optimize energy consumption, rationally plan maintenance strategies, and reduce energy costs for the target infrastructure (Antonino et al. 2019). A DT system can evaluate the performance of infrastructure maintenance using rich data analysis algorithms (Shim et al. 2019a, b; Tahmasebinia et al. 2019). According to the evaluation results for the existing state, Shim et al. (2019a, b) visually assessed the degradation of identified bridges using coding systems. A DT system can also support residual life prediction (Yu et al. 2020). Tahmasebinia et al. (2019) estimated the effects of long-term load and shrinkage creep on the Sydney Opera House using DT technology. There are also ideas and practices for DT-supported urban development and maintenance (Tao and Qi 2019). Clearly, DT technology can be used to provide more valuable operation and maintenance services.
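
Residual life prediction, mentioned above, can be sketched in its simplest form as trend extrapolation of a monitored health index. The following Python example is a minimal illustration under an assumed linear degradation model and an assumed failure threshold; it is not the method used in the cited works.

```python
import numpy as np

def remaining_life_years(t_years: np.ndarray, health_index: np.ndarray,
                         failure_threshold: float = 0.4) -> float:
    """Extrapolate a monitored health index (1.0 = as-built) with a
    linear fit and report time until it crosses an assumed failure
    threshold. Linearity and the threshold are illustrative assumptions."""
    slope, intercept = np.polyfit(t_years, health_index, 1)
    if slope >= 0:
        return float("inf")  # no measured degradation trend
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - t_years[-1], 0.0)

# Toy usage: ten years of slow, noisy degradation.
t = np.arange(10, dtype=float)
h = 1.0 - 0.02 * t + np.random.default_rng(0).normal(0, 0.005, 10)
print(f"estimated remaining life: {remaining_life_years(t, h):.1f} years")
```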

In summary, there have been some research attempts to use DT-related technologies to promote the construction, operation, and maintenance of infrastructure. However, most current attempts focus mainly on BVE, and the development of the other dimensions of DT is unbalanced. Thus, to effectively realize intelligent construction and maintenance, significant research effort still needs to be invested in the integration of intelligent technologies. BIM and IFC have huge potential for the development of BVE, BSS, and BDD. Therefore, the combination of DT and BIM is expected to be a hot topic in this field.

5.3 Disaster management

There are a total of six DT-related papers on disaster management, which mainly focus on urban disaster management. Park et al. (2018) proposed an AR-based smart building and town disaster management system to visualize and track occupants during fire disasters in buildings. However, this study did not provide a model update mechanism. Ham and Kim (2020) proposed a DT framework that can be used for urban management under extreme weather conditions and conducted a case study in Houston. Ford and Charles (2020) proposed a DT framework for community disaster management and analyzed the key role of multisource data. Zhu et al. (2020) used the Sichuan-Tibet Railway as an example, applying the idea of multi-element classification and coding to classify the environment, facilities, and disasters, and discussed the construction method of its corresponding DT. Fan et al. (2021) recognized that most existing studies on disaster management are fragmented without a common paradigm and proposed a DT paradigm that includes multi-data sensing for data collection, data integration and analytics, multi-actor game-theoretic decision-making, and dynamic network analysis. Multiple studies have shown that a DT framework can not only support disaster management for infrastructure and urban areas but also guide the scientific management of large-scale epidemics (Ivanov and Das 2020).

The infrastructure object is one of the basic elements of a city, and only an accurate grasp of the disaster mechanisms of infrastructure can effectively promote the development of urban disaster management. However, research on the disaster management of DT-driven infrastructure objects is still essentially blank. Thus, it is recommended that future research efforts be directed toward this aspect.

6 Opportunities of DT-driven intelligence disaster prevention and mitigation for infrastructure

According to the literature review, there are already some practical applications of intelligent technologies to solve the problems of DPMI. Although reasonable results have been achieved for specific disasters, objects, and scenarios, the overall development remains fragmented. The low level of information utilization and technology integration provides little guidance for DPMI and post-disaster recovery. To overcome these obstacles, the authors find the DT system to be a feasible route from both technical and practical perspectives. The following section briefly describes the future development framework of DT-IDPMI from five aspects: the data, object, technology, connection, and service layers, as illustrated in Fig. 7.

Fig. 7 The future development framework of DT-IDPMI

6.1 Data layer

The disaster data of an infrastructure can be divided into two categories: design and construction data, and maintenance and catastrophe data. The design and construction data mainly include CAD drawings, BIM models, and preliminary simulation models. These data have comprehensive characteristics and are an indispensable source of information for evaluating DPMI capabilities and formulating disaster strategies. However, as the service time of the infrastructure extends, differences emerge between these data and the real state of the object. Thus, maintenance and disaster data are also an important part; they mainly include data collected by personnel; data collected by sensors, UAVs, laser scanners, satellites, and robots; catastrophe data; and data generated by corresponding simulation models, all of which have strong timeliness.
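
A minimal Python sketch of this two-category data organization follows; the record fields and the one-hour staleness threshold are illustrative assumptions, not requirements from the text.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Category(Enum):
    DESIGN_CONSTRUCTION = auto()      # CAD drawings, BIM models, early simulations
    MAINTENANCE_CATASTROPHE = auto()  # sensing, UAV, scan, satellite, robot data

@dataclass
class DataRecord:
    source: str        # e.g. "UAV", "laser_scanner", "BIM"
    category: Category
    timestamp: float   # seconds since epoch; timeliness matters for category 2
    payload: object

def is_stale(rec: DataRecord, now: float, max_age_s: float = 3600.0) -> bool:
    """Maintenance/catastrophe data are strongly time-sensitive; design data
    are treated as long-lived. The 1 h threshold is an illustrative choice."""
    if rec.category is Category.DESIGN_CONSTRUCTION:
        return False
    return (now - rec.timestamp) > max_age_s
```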

6.2 Object layer

The object layer is mainly divided into physical and virtual objects. Physical objects include the infrastructure itself, its equipment, and the surrounding environment, whereas virtual objects are virtual representations that correspond to each part one to one. The difference between virtual objects and conventional modeling lies in the use of data-driven reverse modeling technology, which avoids relying on simulation results as the sole evidence when reproducing a catastrophe.

6.3 Technology layer

Information fusion technology is essential for decision-making with multisource data. The available methods can be roughly divided into two types: methods based on feature recognition algorithms for feature fusion, which are the more commonly used methods, and those based on format conversion to achieve semantic integration, which are not only less practical but also applicable only to a small number of issues.
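
The following Python sketch illustrates the more common feature-level fusion route: per-source feature vectors are concatenated, and a single decision model is trained on the fused representation. The synthetic features and the choice of classifier are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
vision_feat = rng.normal(size=(n, 8))     # e.g. crack descriptors from images
vibration_feat = rng.normal(size=(n, 4))  # e.g. modal features from accelerometers
labels = (vision_feat[:, 0] + vibration_feat[:, 0] > 0).astype(int)

# Feature-level fusion: concatenate per-source feature vectors, then learn
# one joint decision model on the fused representation.
fused = np.hstack([vision_feat, vibration_feat])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("fused-feature CV accuracy:", cross_val_score(clf, fused, labels, cv=5).mean())
```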

A DT system has strict requirements for data analysis and decision-making; thus, the algorithms for data analysis and generation are very important. After years of development, many algorithms have emerged with strong analysis and generalization ability and high precision. There are also many algorithms that can be used to build intelligent disaster prevention and mitigation systems. Furthermore, the applicability, accuracy, and efficiency of these algorithms should be analyzed in the selection process.
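
As a minimal illustration of weighing accuracy against efficiency during algorithm selection, the sketch below cross-validates two candidate models on synthetic data and records wall time; the candidates and the data are arbitrary assumptions, not a recommended pairing.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in candidates.items():
    t0 = time.perf_counter()
    acc = cross_val_score(model, X, y, cv=5).mean()  # accuracy criterion
    dt = time.perf_counter() - t0                    # efficiency criterion
    print(f"{name}: accuracy={acc:.3f}, wall_time={dt:.2f}s")
```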

Reverse modeling is an approach that utilizes measured deformation data for virtual modeling. Common reverse modeling technologies include BIM-based modeling methods, the physical path method, and Agisoft Metashape. Intelligent technologies have also made important progress in fitting human behavior, enabling the reverse modeling of complex facial or limb behaviors and providing a new idea for reverse modeling.

Owing to model and technology constraints, it is difficult for conventional design processes to consider interactive optimization under various defects, such as degradation and catastrophe conditions. The five-layer system provided by DT can compensate for these defects: DT can perform interactive optimization of the design under various disaster conditions, and through cyclic optimization, a design that maximizes performance, function, and value can be obtained. Such an interactive design process can realize the most satisfactory scheme.
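
The interactive, cyclic optimization described above can be sketched as a constrained optimization loop in which a disaster-performance check constrains a cost objective. In the following Python sketch, both the cost function and the performance constraint are hypothetical stand-ins for outputs of a DT catastrophe simulation, not formulations from the text.

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):
    """Hypothetical construction cost for two design variables
    (e.g. member depth and material grade index)."""
    return 2.0 * x[0] + 1.5 * x[1]

def disaster_performance(x):
    """Stand-in for a DT catastrophe simulation: a capacity/demand margin
    that must stay >= 0 under the assumed hazard scenario."""
    return 0.8 * x[0] + 0.6 * x[1] - 1.0

res = minimize(
    cost,
    x0=np.array([1.0, 1.0]),
    bounds=[(0.2, 3.0), (0.2, 3.0)],
    constraints=[{"type": "ineq", "fun": disaster_performance}],
)
print("optimized design variables:", res.x, "cost:", res.fun)
```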

ER can provide a more realistic understanding for participants throughout the life cycle of an infrastructure, which also includes the disaster period. It can enhance the customers’ pre-purchase experience of the target object, assist designers in identifying design defects, help constructors in planning the construction process, and facilitate the development of strategies for disaster management and recovery.

6.4 Connection layer

The connection layer is the medium for sharing information between the layers of the DT. Common connection technologies include ZigBee, Bluetooth, WiFi, UWB, NFC, satellite, and shortwave communication. A DT system requires a connection layer with fast transmission speed, strong compatibility, stable transmission, and low cost. DT also imposes environmental adaptability requirements because of the special hazard environments involved in DPMI. Currently, 5G is one of the most advanced communication technologies worldwide; it not only meets the above requirements but also features low latency and low power consumption, making it one of the best choices for building the connection layer of a DT system.
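
Because hazard environments can interrupt any of these channels, a connection layer typically needs buffering on top of the raw transport. The following transport-agnostic Python sketch illustrates this; the class and its behavior are an illustrative assumption, not a description of any specific protocol stack.

```python
import json
import time
from collections import deque

class BufferedLink:
    """Illustrative connection-layer wrapper: buffers twin updates when the
    channel (5G, satellite, shortwave, ...) drops, then flushes on recovery.
    The transport itself is abstracted away; `send` is a stand-in."""

    def __init__(self, send, max_buffer=10_000):
        self.send = send                       # callable(bytes) -> bool (delivered?)
        self.buffer = deque(maxlen=max_buffer)

    def publish(self, topic: str, payload: dict) -> None:
        msg = json.dumps({"topic": topic, "t": time.time(), **payload}).encode()
        self.buffer.append(msg)
        # Flush in order; stop at the first failed delivery and retry later.
        while self.buffer and self.send(self.buffer[0]):
            self.buffer.popleft()

# Toy usage with an always-available channel.
link = BufferedLink(send=lambda m: True)
link.publish("bridge01/strain", {"gauge": "S3", "microstrain": 412.0})
```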

6.5 Service layer

The main purpose of the service layer is to transform the DT analysis results into services that meet specific needs. Different stages face different target participants, and, therefore, the service requirements differ. The design stage mainly caters to all construction participants and customers, and the requirements for DT include optimization design, visualization, disaster preview, and ER. The main requirements in the maintenance stage include visualization, monitoring, diagnosis, fault prediction, and life prediction. The main requirements of the disaster management stage include visualization, risk prediction, damage assessment, casualty location and prediction, escape route planning, disaster relief policy, recovery plan formulation, and ER.
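
A minimal sketch of this stage-to-service mapping follows; the dictionary entries mirror the requirements listed above, while the structure itself is an illustrative assumption.

```python
# Illustrative mapping of life cycle stages to DT service requirements.
STAGE_SERVICES = {
    "design": ["optimization_design", "visualization", "disaster_preview", "ER"],
    "maintenance": ["visualization", "monitoring", "diagnosis",
                    "fault_prediction", "life_prediction"],
    "disaster_management": ["visualization", "risk_prediction", "damage_assessment",
                            "casualty_location_prediction", "escape_route_planning",
                            "relief_policy", "recovery_planning", "ER"],
}

def services_for(stage: str) -> list:
    """Return the service requirements for a given life cycle stage."""
    return STAGE_SERVICES.get(stage, [])

print(services_for("maintenance"))
```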

7 Concluding remarks

The current DT and IDPMI literature is reviewed and discussed in this paper. DT has shown promise in many industries and has the obvious advantages of function systematization, data integration, and process automation. IDPMI also shows higher efficiency and timeliness than traditional methods but typically covers only unilateral factors. To develop an IDPMI framework systematically, this paper expounds the feasibility of DT from the perspective of IDPMI. Based on the literature review, the key findings of this study are as follows:

The history of DT is reviewed retrospectively based on scientific literature retrieval. DT has passed through its incubation stage and is now in a growth stage; thus, it can be used for application development. Based on experience in related industries, the scope of DT for the life cycle of an infrastructure can be divided into five parts: design and optimization, manufacturing and installation, usage and maintenance, emergency management, and recycling and dismantling.

This paper reviews the application of IDPMI against DT requirements and analyzes the developments and challenges of each technology. UAVs and laser scanning are efficient methods for acquiring catastrophe data. Feature fusion has achieved good results in data fusion and can be utilized currently; however, semantic fusion will inevitably be in demand in the future. Learning systems have the advantage of high-speed data analysis, which can satisfy the real-time requirements of DT. ER embodies efficient interaction based on virtual or actual environments and can be applied to the visualization and service functions of DT. The technical feasibility of combining DT and IDPMI is thus broadly established.

The literature on DT in the stages of design, construction, maintenance, and disaster management for infrastructure is also reviewed in this paper. This body of research is relatively thin and unbalanced, with most cases concentrated in construction and maintenance. Overall, the gap relative to other industries is wide, especially in disaster management. The application feasibility of combining DT and IDPMI is nonetheless demonstrated.

Based on the literature review and the features of DT and IDPMI, this paper proposes a conceptual framework called DT-IDPMI. The system is innovatively divided into five parts: the data, object, connection, technology, and service layers. In addition, a novel feature is the data connection established between disaster management and the design process, which can support interactive optimization design.

There are also many challenges associated with DT and IDPMI. The current theoretical explanation of DT is oversimplified and cannot support the paradigm planning needed for life cycle management. Data are the determining resource for DT; semantic fusion of multi-source data can enable better use while preserving the integrity of information characteristics, but its current level of development is low. Although DT as a systematic concept is widely used in the manufacturing industry, its introduction into IDPMI-related fields may cause issues owing to the lack of real-time data and the low degree of technology application. IDPMI has been a research subject for several decades, yet the level of collaboration between systems and stages is still relatively weak. To introduce more advanced intelligent technologies, users need to consider the acquisition and volume of real-time data. To further enhance the ability of infrastructure to resist hazards, the authors call on colleagues to contribute to open-source disaster data.