9.1 Introduction

In the framework of this publication, the term “signal processing” covers all processing steps whose goal is to extract information from a (received or measured) signal or to prepare information for transmission from an information source to an information consumer. The principal goals of signal processing are the acquisition of information about the structural health status, data reduction, and preparation for visualization. In this chapter, we focus on the available data reduction strategies, their challenges in relation to wireless sensors, and the issues arising from data management. In a broader sense, the chapter covers not only the theoretical development of these concepts but also concrete applications, monitoring strategies, and future prospects of the algorithms, embedded within the context of dynamical systems. While signal processing forms the building blocks of the chapter, the applicability of these methods, especially of the data reduction strategies, provides significant evidence for the practical implementation of real-time monitoring.

9.2 Signal Processing

Traditionally, distinction is made between digital and analog signal processing. In the context of the SHM methods introduced in Chaps. 5 to 8, digital signal processing is becoming more and more important due to the following advantages:

  • The analog signal may be affected by extraneous noise and coupling from other nearby electrical systems; once the signal has been digitised, subsequent processing and transmission steps are intrinsically immune to such disturbances. Residual noise can be removed in real time using algorithms such as recursive singular spectrum analysis (Bhowmik 2018).

  • The desired data reduction steps can often be implemented more easily with digital processing steps (e.g. by programming microcontrollers).

  • Digital signal processing allows for storage and simple further processing (transmission to other systems (e.g. visualization, storage, or control systems)).

Nevertheless, some distinct disadvantages are associated with digital signal processing:

  • Analog-to-digital converters (ADCs) and/or digital-to-analog converters (DACs) temporally discretise information with a certain sampling rate and a certain “vertical” accuracy based on the bit depth of the system.

  • In some cases, the digitization forms an intermediate processing step (analog→digital and later again digital→analog) in the implementation chain, which doubles potential conversion errors.

  • The implementation of digital processing steps and their execution can sometimes be very time consuming and lead to information delays. This may make approximation methods necessary instead of using the full precision of available algorithms.
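The temporal and “vertical” discretisation mentioned above can be illustrated with a minimal uniform-quantizer sketch in Python. The signal, sampling rate, and bit depth below are illustrative choices, not tied to any particular ADC:

```python
import numpy as np

def quantize(signal, bits, full_scale=1.0):
    """Mid-tread uniform quantizer: maps an analog-valued array onto
    2**bits discrete levels spanning [-full_scale, +full_scale)."""
    step = 2 * full_scale / (2 ** bits)      # quantization step size
    q = np.round(signal / step) * step
    return np.clip(q, -full_scale, full_scale - step)

# 1 kHz sine sampled at 10 kSa/s and digitised with 8-bit depth
fs, f0, bits = 10_000, 1_000, 8
t = np.arange(0, 0.01, 1 / fs)
x = 0.8 * np.sin(2 * np.pi * f0 * t)
xq = quantize(x, bits)

# Worst-case rounding error is half a quantization step
assert np.max(np.abs(x - xq)) <= (2 * 1.0 / 2**bits) / 2 + 1e-12
```

Halving the bit depth doubles the quantization step and therefore the worst-case amplitude error, which is the “vertical accuracy” trade-off named above.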

Basically, a distinction can be made between one-dimensional (1D) (e.g. ultrasonic and strain gauge signals), two-dimensional (2D) (e.g. images), and three-dimensional (3D) (e.g. moving images and video) signal processing. For the SHM framework in aircraft, Table 9.1 provides a first overview of the typical sensing techniques applied in aircraft monitoring. Based on this, all currently applied SHM methods deliver 1D information, which must be interpreted in terms of the aircraft status. Powerful approaches exist to combine the information provided by a network of 1D sensors to retrieve a 2D or 3D damage map of the structure. However, even with signal information limited to essentially 1D data streams, data reduction remains a requirement for signal processing as far as today’s computational capabilities are concerned.

Table 9.1 Aircraft monitoring techniques

The cyclic step of converting analog signals to digital and vice versa is mainly associated with the transmission costs that may arise during data transfer. For certain transmission systems, digital communication sources are preferred, which generally require post-processing in an offline mode. However, methods using real-time fault detection approaches have recently lowered the transmission costs of a monitoring system considerably, and the cyclic conversion back into the analog regime can therefore sometimes prove beneficial. This aspect also applies to aerospace applications.

In the context of real-time SHM, the recent literature has repeatedly demonstrated the potential to identify system faults online. This is particularly helpful for aircraft machinery, where the state of system health can be identified in real time and mitigation measures implemented promptly. Real-time control of dynamic systems has become considerably more comprehensive when investigated through eigen perturbation theories and higher-order error stabilization. In the present context, the authors firmly believe that signal processing approaches have paved the way for online SHM contributions in recent years, as systematically evidenced by the references included throughout the text.

Although approaches such as digital filtering, lossless compression, and methods to increase the SNR have been adopted by researchers and practitioners worldwide, the inherent difficulty of performing them in real time impedes their possible applications. In cases where data must be transmitted from a monitored section to the processing site, the transmission link must have sufficient capacity to transfer bulk data. To alleviate these drawbacks, recent works on perturbation theory have shed light on an in-situ, data-driven, decision-based framework (Bhowmik et al. 2020a). An evidence base around the topic is formed by extensive numerical simulations and experimental case studies, supplemented by practical real-life scenarios.

9.3 Data Reduction Strategies

Traditional SHM systems employ coaxial wires for the communication and data transmission between the sensors and the decision-making (data interpretation) unit. As a result, traditional models and schemes developed for health monitoring are largely challenged by the demands of low-cost, quality-guaranteed, real-time event monitoring. The data involved in many SHM systems are time-domain structural responses with large data sizes. Damage detection methods with a higher detection sensitivity are generally associated with a higher sampling rate. It is not unusual for a wave propagation-based damage detection system to excite and sense wave motions in the megahertz range. High-frequency sampling and excitation pose multiple challenges to wireless sensor development, one of which is the timely transmission of a large amount of data. Although directly transmitting the original sensor data to the data interpretation unit retains the signal fidelity for a comprehensive data analysis, it may place a prohibitive burden on the wireless data transmission, especially for applications that need near real-time decision making. One tempting idea is to perform data pre-processing at the local sensor level, such that the data can be greatly compressed/reduced while the critical features reflecting the damage effects are preserved. If effectively established, such data reduction, also known as feature extraction, can alleviate the data transmission rate issue between the sensors and the data interpretation unit. On the other hand, it requires a higher computing capability, and the associated power consumption, at the local sensor level.

9.3.1 Sampling Rates of Different SHM Methods

Gaining a clear understanding of the structural behaviour to allow a reasonable assessment of its as-built condition requires high-fidelity sensor data to build accurate models (Nagayama et al. 2006). In addition, potentially problematic structural changes, such as corrosion, cracking, buckling, and fracture, all occur locally within a structure. Sensors are expected to be in close proximity to the damage to capture the resulting change in response, while sensors further from the damage are unlikely to observe measurable changes. A dense array of sensors is required to achieve an effective monitoring system capable of generating informative structural models and detecting critical structural changes. Such a dense instrumentation system is not practically realised with the traditional structural monitoring technology due to the cost of deployment and the potential for data inundation. Advances in the wireless technology and embedded processing have made much lower-cost wireless smart sensor networks (WSSNs) an attractive alternative to wired, centralised data acquisition (DAQ) systems. The majority of the work using wireless smart sensors for structural monitoring has focused on using the sensors to emulate traditional wired sensor systems (Arms et al. 2004; Pakzad et al. 2008; Whelan and Janoyan 2009). These systems require that all data be sent back to a central DAQ system for further processing; hence, the amount of wireless communication required in the network can become costly in terms of excessive communication times and the associated power it consumes as the network size increases. For example, a wireless sensor network implemented on the Golden Gate Bridge that generated 20 MB of data (1600 s of data, sampling at 50 Hz on 64 sensor nodes) took over 9 h to complete the communication of the data back to a central location (Pakzad et al. 2008).
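The communication figures reported for the Golden Gate Bridge deployment can be checked with simple arithmetic. The node count, sampling rate, duration, data volume, and transfer time are taken from the citation; the bytes-per-sample value is inferred from those totals, not stated in the reference:

```python
# Figures from Pakzad et al. (2008): 64 nodes, 50 Hz, 1600 s, ~20 MB total,
# >9 h to collect centrally. Bytes-per-sample is inferred, not reported.
nodes, fs, duration = 64, 50, 1600
samples = nodes * fs * duration            # 5.12 million samples
bytes_total = 20e6
bytes_per_sample = bytes_total / samples   # ~3.9 B, i.e. roughly 4-byte values

transfer_time = 9 * 3600                   # seconds
effective_rate = bytes_total / transfer_time
print(f"{samples / 1e6:.2f} M samples, ~{bytes_per_sample:.1f} B/sample")
print(f"effective network throughput ~ {effective_rate / 1e3:.2f} kB/s")
```

The resulting effective throughput, well under 1 kB/s shared across the whole network, makes plain why raw-data collection does not scale for dense wireless deployments.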

In the following, we present the sampling rate requirements of the main SHM systems discussed in this book to better differentiate between the different methods.

Ultrasonics

Ultrasonics is a particularly data-heavy and popular class of approaches in this regard. Michaels (2008) reported how guided waves for large-area monitoring, diffuse waves in complex components (e.g. plates), and local ultrasonics for hotspot monitoring change the demand. Local hotspot monitoring is of particular relevance here, and the sampling rate can be as high as 10 MHz. The data intensity is often a question of orders of magnitude; thus, comparing such rates across applications is worthwhile. Acousto-ultrasonics have been successfully utilised for detecting a notch on a plate (Smithard et al. 2017) in a 100–550 kHz range, while a central input frequency of 138 kHz was used for rotor damage detection by Li et al. (2018). Damage detection in carbon fiber-reinforced plates (CFRP) has also been demonstrated in the 40–260 kHz range (Krishnaraj et al. 2012). Delamination (Krohn et al. 2002; Farrar et al. 2007) is a typical detection target, and the kHz–MHz range is often used. Scruby and Drain (1990) indicated 3–5 MHz to be typical for laser ultrasonics, but recent studies went up to 250 MHz (Cavuto et al. 2015).

On the other end of this data intensity lies the fact that high-frequency ultrasonics do not penetrate deeply, and in the 500 kHz–1 MHz range, distinguishing differences in composites is hard.

Vibration-Based Methods

Vibration-based methods typically consider accelerometer-based global responses, and the sampling rate depends on the type of accelerometer, aligned to the needs of the system. This can vary from hundreds of Hertz (popularly sampled dyadically at 256 Hz, 512 Hz, etc.) to thousands of Hz (Bhowmik et al. 2019a, 2019b; Noel et al. 2017; Zhu et al. 2018). Displacement sensors such as linear variable differential transformers (Wang and Tang 2017) or radar-based sensors (Li et al. 2015) can have a sampling rate range similar to accelerometers, although image processing-based techniques are limited by the camera frame rate (O'Donnell et al. 2017; Yang and Nagarajaiah 2016). High-speed imaging can often come at the expense of pixel resolution and with an uncertainty in the displacement measurement. Laser Doppler vibrometry measures velocity, but its encoders can convert such data to acceleration or displacement, with typical sampling rates spanning a wider range than those discussed in this section (Schell et al. 2006).

Acoustic Emission

The typical bandwidth of frequencies ranges from a few kHz to 1–2 MHz due to the dynamic phenomena causing AE signals. Accordingly, this results in typical sampling rates in research applications well beyond 5 MSP/s (million samples per second) because of the oversampling required for adequate signal digitization. In particular, for composite materials, some AE sources exhibit high-frequency components that correspond to very critical failure modes, such as fiber breakage (Grosse and Ohtsu 2008; Sause 2016). Depending on the sensor spacing, much of the high-frequency information is lost during propagation, which may allow lower sampling rates for practical structural health monitoring applications. Nevertheless, these will still reside in the range of several MSP/s required for data capture. With a reasonable number of sensors installed on the structure, this results in a large amount of data generated during AE monitoring. From the beginning, this aspect has been considered in the development of the measurement equipment. Lacking real-time storage capacity for such “high-frequency” signals, some of the first commercially used systems focused on extracting features from the recorded waveforms instead of attempting to store full waveforms (Grosse and Ohtsu 2008). One of the most popular analysis routines considers the localization of AE sources by means of a sensor network (cf. Sect. 7.4). This is based on the respective time of flight between source and sensor and thus requires precise, continuous synchronisation at ultrasonic timescales across multiple sensors. Otherwise, offsets or scatter in the synchronisation accuracy may lead to false source coordinates and thus may render this analysis routine completely useless.
In this processing chain, the first step is adequate triggering to initiate any further processing; the trigger is often taken as the arrival time of the signal, although more dedicated methods have been developed (cf. Sect. 7.4). That is, the acquisition system detects the presence of a signature that exceeds the typical noise floor based on several criteria. Technically, this is implemented as an analysis of the currently recorded signal portion kept available in ring buffers. Based on this first analysis, the relevant portion of the signal stream is extracted and subjected to further analysis, such as feature extraction. In this context, several representative features (e.g. amplitudes, frequencies, and more) are calculated from the signals (see Sect. 6.4.1 or the standard literature for concise definitions (Grosse and Ohtsu 2008; Sause 2016)). This well-established procedure is by far the most effective data reduction step in AE, turning a typical AE signal of 100 kB into a dataset of only 100–150 B. Nevertheless, it comes at the cost of discarding the actual raw data early on, causing potential conflicts in later interpretation steps. With the availability of much more powerful computer systems within the last 20 years, storing at least the relevant portions of the signal streams in addition to the extracted features has become more and more desirable. This is motivated by several new capabilities that have emerged in the last decades regarding source interpretation, source localization, and decision making.
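As a rough illustration of this feature extraction step, the sketch below computes a few classical AE hit features from a triggered waveform segment. The threshold value and the synthetic decaying burst are illustrative assumptions; real systems use hardware-defined triggers and calibrated units:

```python
import numpy as np

def ae_features(waveform, fs, threshold):
    """Classical AE hit features from a triggered waveform segment.
    Definitions follow common AE practice; the threshold is user-chosen."""
    above = np.abs(waveform) > threshold
    if not above.any():
        return None
    idx = np.flatnonzero(above)
    first, last = idx[0], idx[-1]
    peak_idx = np.argmax(np.abs(waveform))
    crossings = np.sum((np.abs(waveform[:-1]) <= threshold) &
                       (np.abs(waveform[1:]) > threshold))
    return {
        "amplitude": float(np.max(np.abs(waveform))),
        "duration_s": (last - first) / fs,           # first to last crossing
        "rise_time_s": max(0, peak_idx - first) / fs,
        "counts": int(crossings),                    # threshold crossings
        "energy": float(np.sum(waveform ** 2) / fs), # signal energy proxy
    }

# A synthetic decaying burst at 5 MSa/s stands in for a real AE hit
fs = 5_000_000
t = np.arange(0, 200e-6, 1 / fs)
burst = np.exp(-t / 50e-6) * np.sin(2 * np.pi * 150e3 * t)
feats = ae_features(burst, fs, threshold=0.05)
```

A 1000-sample raw waveform (a ~100 kB record in practice) collapses here to five scalar features, which is the order-of-magnitude reduction quoted above.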

It was identified early on that, especially outside laboratory investigations, there is a high chance of spurious AE sources being present, which can corrupt the correct interpretation of AE measurements. In most application scenarios, “noise” sources are likely to be present, such as friction between components, signals from electrical or hydraulic systems, mechanical motion, or even other (active) ultrasonic systems. For the passive detection of material defects, such as crack growth, these noise sources may mask useful signals and lead to a false interpretation of the actual health status of the structure. This is one of the driving reasons for performing a more sophisticated signal analysis based on machine learning approaches, cross-correlation techniques, or advanced feature extraction in time-frequency space. All of the latter have proven highly useful for improving AE signal interpretation; however, they come with an additional computational complexity that must be covered by the processing chain. Ideally, this is done in a highly integrated fashion inside the DSPs or FPGAs of actual acquisition systems, but this is not yet the standard. As an additional drawback, this also comes with a higher energy demand that must be met by the on-site supply in an SHM application.

While acoustic emission approaches are less popular than ultrasonics, the sampling frequency remains high, in the MHz range. Damage detection in reinforced concrete at 5 MHz (Yoon et al. 2000) has been established for decades now. Detection using Lamb waves is also performed in the MHz range. Smart composite laminate detection relevant to aerospace applications requires 5 MHz sampling (Masmoudi et al. 2013), while the recent detection of concrete damage uses 10 MHz sampling (Nor 2018). While such applications have typically seen an overall increase in data intensity by an order of magnitude in the last two decades, the challenges of underwater situations have called for a lower (64 kHz) sampling rate (Walsh et al. 2017). Overall, the AE frequencies in material evaluation and characterization and in engineering asset health monitoring typically range between 100 kHz and 10 MHz (Tan 2016).

Strain Monitoring

While larger infrastructure systems typically consider the more robust vibrating wire strain gauges, their application is limited, and the sampling rates are very low (typically one sample per minute) (Pakrashi et al. 2013). Such rates limit the ability to perform frequency-domain analyses, and time series techniques must be employed. Distributed fiber optic sensors increase this rate, but only to approximately 1–5 Hz (Berrocal et al. 2020). On the other hand, these gauges usually come with thermal measurements. Bragg grating sensing or dynamic strain gauges can go up to tens of Hz or approximately 100 Hz (Moyo et al. 2005), but are still several orders of magnitude below their ultrasonic counterparts. Distributed fiber optic sensors are more prevalent for strain monitoring in composites (Jothibasu et al. 2018). Lower sampling rates place less demand on data analysis and storage, but come with the limitation of capturing low frequencies alone, which may be inadequate for assessing certain damage types. The other option is to create a dense network of strain sensors (Daichi and Tamayama 2019; dos Santos 2015; Lizotte and Lokos 2005; Święch and Święch 2020), which has been tried on aircraft wings before, but comes with the problems of cumbersome instrumentation and having to handle several channels of simultaneous data. For example, Bombardier has tested heavily instrumented strain gauges (Marsh 2011).

9.3.2 Established Approaches for Data Reduction

Aspects of data reduction were highlighted by Martin et al. (1997), and several techniques exist. While frequency-domain transforms can reduce data, certain features can also be missed. Recent work on real- or near real-time approaches (Mucchielli et al. 2020; Bhowmik et al. 2020a; Krishnan et al. 2018), along with batch-processing assessments (Martinez-Luengo et al. 2016), has provided typical statistical methods for reducing the data intensity. Park et al. (2010) identified data denoising as a primary step, primarily following a wavelet-based approach. Bolandi et al. (2019) more recently approached this topic by interpreting the cumulative duration of strain events at different predefined strain levels. Techniques based on multivariate statistics (Worden and Manson 2000) and statistical process control (SPC) (Fugate et al. 2001) have recently been applied to structural damage detection. Moreover, principal component analysis (PCA) has been used to perform data compression prior to the feature extraction process when data from multiple measurement points are available, enhancing the discrimination between features from undamaged and damaged structures (Bhowmik et al. 2019a). This process transforms the time series from multiple measurement points into a single time series. Visualization and dimension reduction for damage detection have been implemented using PCA. The PCA technique has been used to condense frequency response function data, with their projection onto the most significant principal components used as the artificial neural network input variables (Zang and Imregun 2001). PCA has also recently been used for several other purposes, including model reduction, dynamic characterization (Feeny 2002), sensor validation (Friswell and Inman 1999), modal analysis (Feeny 2003), parameter identification (Lenaerts et al. 2001), and damage detection (Bhowmik et al. 2019a).
Some nonlinear extensions of the PCA have been employed for SHM purposes.
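A minimal sketch of the linear PCA compression described above, condensing a multi-sensor record into a single dominant-component time series. The synthetic data, sensor count, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multi-channel record: 8 sensors observing one shared mode
# plus independent noise (a stand-in for real structural responses)
n_samples, n_sensors = 2048, 8
mode = np.sin(2 * np.pi * 3.5 * np.linspace(0, 4, n_samples))
X = np.outer(mode, rng.uniform(0.5, 1.5, n_sensors))
X += 0.05 * rng.standard_normal((n_samples, n_sensors))

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

# Condense the 8 time series into the single dominant component
pc1 = Xc @ Vt[0]
print(f"first component explains {explained[0]:.1%} of the variance")
```

Transmitting `pc1` instead of all eight channels is an eight-fold reduction here, at the cost of discarding the minor components, which is acceptable when one mode dominates the response.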

Since its advent, wavelet theory has served as a useful approach for data compression where conventional techniques have not achieved the desired speed, accuracy, or efficiency. The wavelet transform principle lies in hierarchically decomposing an input signal into a series of successively lower resolution signals (Rioul and Vetterli 1991; Strang and Nguyen 1996). At each level, the decomposed signal contains the information needed to reconstruct the signal at the next higher resolution level. This concept has been extended to electric power quality issues (Santoso et al. 1997). For this application, the wavelet transform coefficients corresponding to a disturbance are larger than those unrelated to it; therefore, only the data related to the event need to be stored. Using this method, power quality disturbance data can be compressed, while the original signal can be reconstructed with very little information loss. Figure 9.1 depicts an informative illustration of wavelet-based data reduction.

Fig. 9.1
figure 1

Data reduction through wavelet transform approaches
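The decomposition principle can be illustrated with a single-level orthonormal Haar transform in plain numpy. Practical systems use multi-level transforms and other wavelet families (e.g. via PyWavelets); the threshold below is an illustrative choice:

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform."""
    s = 1 / np.sqrt(2)
    return s * (x[0::2] + x[1::2]), s * (x[0::2] - x[1::2])

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    s = 1 / np.sqrt(2)
    x = np.empty(2 * approx.size)
    x[0::2] = s * (approx + detail)
    x[1::2] = s * (approx - detail)
    return x

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.02 * rng.standard_normal(1024)

approx, detail = haar_forward(signal)
# Data reduction: store the approximation plus only the large details
mask = np.abs(detail) > 0.1
kept = np.count_nonzero(mask) + approx.size
detail_c = np.where(mask, detail, 0.0)

recon = haar_inverse(approx, detail_c)
err = np.max(np.abs(recon - signal))
print(f"kept {kept}/1024 coefficients, max reconstruction error {err:.3f}")
```

For a smooth signal the detail coefficients are small, so roughly half the coefficients can be dropped at this level with a bounded pointwise error; repeating the split on the approximation band yields the hierarchical compression described above.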

Data compression for SHM systems has attracted much interest in recent years, especially for wireless monitoring systems, because data compression techniques can improve the power efficiency and minimise the bandwidth required for the transmission of structural response time histories from wireless sensors (Lynch 2004; Xu et al. 2004). Wavelet-based compression (Xu et al. 2004) and Huffman lossless compression (Lynch 2004) techniques have been developed. All these data compression methods belong to a conventional framework for sampling signals that follows the Nyquist–Shannon theorem: the sampling rate must be at least twice the maximum frequency present in the signal.

Compressive sensing (CS) (Candès and Wakin 2008; Donoho 2006) is a novel sampling technique for data acquisition whose capability, at first encounter, seems surprising. It asserts that if certain signals are sparse in some orthogonal basis, one can accurately reconstruct them from far fewer measurements than what is usually considered necessary based on Nyquist–Shannon sampling. This technique may become the main paradigm for simultaneously sampling and compressing data, thereby increasing the efficiency of data transfer and storage. Figure 9.2 shows a quick summary of data reduction through compressive sensing.

Fig. 9.2
figure 2

Data reduction through compressive sensing strategies
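As a toy demonstration of the CS principle, the sketch below recovers a signal that is sparse in the canonical basis from far fewer random measurements than samples. Orthogonal matching pursuit is used as one of several possible recovery algorithms; the dimensions and sparsity level are illustrative:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedy recovery of a sparse x
    from the underdetermined measurements y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the selected support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(2)
n, m, k = 256, 64, 5                 # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                          # 64 measurements instead of 256 samples

x_hat = omp(Phi, y, k)
print(f"recovery error: {np.linalg.norm(x - x_hat):.2e}")
```

With only a quarter of the nominal samples, the sparse signal is recovered essentially exactly, which is the sampling-and-compressing-at-once behaviour that makes CS attractive for wireless SHM nodes.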

9.3.3 Open Challenges for Data Reduction in SHM Systems

For ultrasonics, protocols for quantifying the output of an SHM system are generally not available. Full 3D modeling of wave propagation is still prohibitive for several applications, but the numerics around it keep improving. Validation against real defects in structures remains a problem. Environmental effects such as temperature, load variations, and surface conditions are well known to cause significant changes in ultrasonic signals, often exceeding the changes due to damage. Load changes, unlike temperature changes, are generally anisotropic; like temperature changes, they alter the propagation times of individual ultrasonic echoes.

Ultrasonic Systems

For SHM methods using ultrasonic signals to retrieve information, one of the key aspects in data reduction is how to deal with the raw data. Given the ultrasonic frequency range, typical sampling rates in the order of several MSP/s will produce a substantial amount of data within a very short monitoring period. For active methods, such as pulse-echo systems and guided wave monitoring, one can select the pulsing intervals of the monitoring inspection and thus adjust the amount of data. Even so, a single active monitoring operation will still require a certain share of the data storage capacity. For passive systems, such as acoustic emission monitoring, sophisticated data reduction steps may be used to avoid storing raw data streams. Nevertheless, the amount of data generated cannot be freely selected because it depends on the overall acoustic emission activity during operation. Given the likelihood of numerous active noise sources, a substantial amount of data will be generated during aircraft operation, which then needs to be interpreted.

Reliability Issues Related to Loss of Information Via Data Reduction

Every data reduction pipeline has an intrinsic issue related to the potential loss of relevant information due to the skipping of signal portions, compression, selection, or other measures. While the actual (digital) raw signals may be properly reconstructed in some cases, many presently used SHM data reduction steps require a significant reduction of the amount of data to arrive at a final decision regarding the actual status of the structure under monitoring. With many of these signal reduction steps still under research, established reliability schemes cannot yet be expected for all applied algorithms. This concerns the aspects of false alarm rates and the likelihood of false interpretation. In addition, for SHM systems to become part of in-flight avionics, the aircraft operation standards require a certain reliability of the full chain, because the SHM system will contribute to high-level decisions with the ultimate consequence of risk to lives.

9.4 Wireless Sensing Considerations

Structural health monitoring (SHM) applications in the field of aerospace engineering generally involve a limited space for sensor installation and mechanical movements, which may damage parts of the monitoring system. Moreover, long cables may considerably increase the system cost and the overall aircraft weight. For these reasons, wireless communications are generally preferable, although several critical issues may arise using these technologies.

According to Logan and Sankareswaran (2015), aircraft electrical wiring problems have recently increased in the aircraft manufacturing industry. The Airbus A380, for example, has 40,000 sensor connectors and 98,000 wires comprising over 530 km of wiring in each aircraft (Gao et al. 2018). All onboard safety systems are based on wired connections; hence, wiring degradation might contribute to further issues and lead to terminated flight missions (Gao et al. 2018), which results in production and delivery delays. The US Navy spends approximately 1 to 2 million man-hours finding and fixing wiring problems. Replacing onboard wired sensing devices with wireless-based solutions can optimise maintenance and improve the safety of aircraft while reducing their weight. Fewer wires mean fewer chances of wiring problems, benefiting the most important factor in the aerospace industry: safety. In terms of fuel efficiency, a lighter plane uses less fuel than a heavier one. Therefore, wireless communications and sensors can bring a host of economic benefits.

Several researchers (Logan and Sankareswaran 2015; Yedavalli and Belapurkar 2011) have studied the weight reduction enabled by wireless systems, which leads to an improvement in fuel efficiency. Moreover, Liu et al. (2008) suggested that the aerospace industry should consider replacing some aerial-vehicle sensor wiring with wireless communications, thereby lowering the weight of the wiring and increasing the payload capacity. This view has been shared by other researchers (Yedavalli and Belapurkar 2011; Zahmati et al. 2011) and has become of utmost importance with the most recent developments in the aircraft industry, because transportation systems and test facilities are becoming increasingly complex (Figueroa and Mercer 2002).

Nevertheless, some researchers believe that airborne wireless systems (AWSs) may negatively affect the overall reliability of aerial vehicles, jeopardizing their safety (Liu et al. 2008). For this reason, research on wireless sensing systems continues to grow, proposing different network topologies and operating algorithms aimed at maximizing the monitoring system efficiency while preserving robustness and reliability comparable to those of wired solutions.

9.4.1 Network Topologies

Dealing with dense wireless sensor networks is generally impractical if continuous, high-rate data streams are necessary. Accordingly, some studies (Cao and Liu 2016) proposed event-triggered sensing systems that collect high-fidelity data only at the occurrence of particular events, enabling condition-based decision making with minimal data transmission, while others (Montalvão et al. 2006) were simply based on periodic inspections. Although these systems are particularly effective in some cases, continuous real-time approaches are generally preferable in aerospace applications (Abdeljaber et al. 2017).
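A minimal sketch of the event-triggered idea, with a pre-trigger ring buffer as commonly used in acquisition hardware. The threshold and window lengths are illustrative assumptions:

```python
import numpy as np
from collections import deque

class EventTriggeredNode:
    """Transmit a high-fidelity snippet only when the response exceeds
    a trigger level; otherwise keep samples in a short ring buffer."""
    def __init__(self, threshold, pretrigger=32, posttrigger=96):
        self.threshold = threshold
        self.buffer = deque(maxlen=pretrigger)   # pre-trigger history
        self.posttrigger = posttrigger
        self.remaining = 0
        self.events = []
        self.current = []

    def push(self, sample):
        if self.remaining > 0:                   # capturing post-trigger tail
            self.current.append(sample)
            self.remaining -= 1
            if self.remaining == 0:
                self.events.append(np.array(self.current))
                self.current = []
        elif abs(sample) > self.threshold:       # trigger: flush pre-trigger
            self.current = list(self.buffer) + [sample]
            self.remaining = self.posttrigger
        else:
            self.buffer.append(sample)           # quiescent: ring buffer only

rng = np.random.default_rng(3)
x = 0.01 * rng.standard_normal(5000)
x[2000:2050] += np.exp(-np.arange(50) / 10.0)    # one transient event

node = EventTriggeredNode(threshold=0.1)
for s in x:
    node.push(s)
print(f"{len(node.events)} event(s) captured out of {x.size} samples")
```

Of 5000 incoming samples, only a 129-sample snippet around the transient would be transmitted, which is exactly the minimal-transmission behaviour such systems aim for.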

Neural networks are effectively used in real-time monitoring systems. However, unlike eigen perturbation methods for damage detection, CNNs (and deep learning-based approaches in general) are computationally more expensive (Bhowmik et al. 2019b, 2020a, 2020b). They are mentioned here to make the reader aware of methods that exist in both industry and research practice. Numerous industries (including tech giants such as Google and Apple) have adopted neural network-based modeling, fault identification, and system reliability assessment as important pathways for product design and development (Marr 2017; Apple 2017). Even though real-time operation is not mandatory for all SHM systems, given the present chapter's focus on data reduction strategies and the repeatedly noted usefulness and lower resource requirements of real-time SHM, the authors consider the placement of this concept apt. To recall the concept: real-time SHM-based monitoring systems can perform in-situ analysis, thereby lowering the transmission cost (Bhowmik et al. 2019a). Compared to conventional monitoring systems, these contemporary modules can lower the computational cost by about 75% relative to traditional (Kalman filter-based) systems (Bhowmik et al. 2019b). Taking higher-resolution, shorter-interval data readings is all the more important when aircraft systems are involved. For monitoring such systems, it is usually desired that the readings be closely and evenly spaced, such that preventive measures can be adopted as soon as possible to avert any disaster (Bhowmik 2018). In such cases, if the aircraft loses communication with the control tower, it is the onus of the crew to prepare for emergency evacuation and initiate mitigation protocols to save the lives of the passengers. This can only happen if a real-time monitoring system is placed on the black box and at critical junctures of the aircraft.
In the event of an impending disaster, these real-time monitoring stations will provide accurate, early, and timely detection (and forecasting) to prevent mishaps, and are therefore crucial in aircraft systems and for the aerospace industry.

Centralised networks (i.e., those consisting of sensing nodes connected to a central monitoring station through one-to-one connections) are among the most widely used in current SHM applications because they readily allow the application of traditional identification algorithms. In particular, the raw data collected at the instrumented locations are directly transmitted to the monitoring station, where processing takes place centrally, typically using algorithms for multivariate data. In this case, however, the amount of data to be centrally collected exceeds the network bandwidth as the number of nodes increases, regardless of the adopted data transmission method, posing a strict constraint on network scalability. This topology is therefore suitable for small or sparsely distributed sensor networks, where throughput can be maximised without having to serve a large number of sensor locations.

In recent years, innovative sensing solutions, namely wireless smart sensor networks (WSSNs), have been introduced; these are equipped with micro-electro-mechanical system (MEMS) sensors and microcontrollers that can perform simple onboard processing, such as digital signal processing, self-diagnosis, and self-adaptation functions (Nagayama et al. 2009). Their modest computational footprint can be exploited through edge computing, giving rise to decentralised systems (Abdulkarem et al. 2020; Avci et al. 2018; Quqa et al. 2020), in which part of the signal processing is performed at the node level to lighten data transfer, reduce the computational burden for real-time implementations, and improve the energy efficiency of the entire system. Sensing nodes capable of collecting and processing data independently of the others constitute decentralised networks.

Over the last two decades, different decentralised network configurations have been implemented in SHM applications, the simplest of which is the star topology. Although the connections are similar to those of centralised networks, data processing and compression can be performed before transmission to the central node (i.e., the sink). However, this configuration does not exploit the information gained by combining data from neighbouring nodes (e.g. spatial information), so onboard pre-processing is generally limited to simple operations: filtering and downsampling are typically performed on smart nodes in star configurations (Quqa et al. 2020). The first studies on SHM through decentralised networks in a star configuration were conducted by extending traditional techniques, such as the damage locating vector (DLV) method (Gao et al. 2006; Nagayama et al. 2009), to allow their application in a decentralised fashion. More recently, time-series representations (Long and Büyüköztürk 2017) and artificial neural networks (ANNs) (Avci et al. 2018) have been exploited to obtain damage-sensitive features (DSFs) from small amounts of data.
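As a minimal sketch of the filtering and downsampling performed onboard such star-topology nodes (the filter length and rates below are illustrative assumptions, not values from the cited studies), anti-alias low-pass filtering followed by decimation can be implemented with a windowed-sinc FIR:

```python
import numpy as np

def onboard_decimate(signal, factor, taps=31):
    """Low-pass filter, then keep every `factor`-th sample.

    A windowed-sinc FIR with cutoff at the new Nyquist frequency
    suppresses aliasing before the sampling rate is reduced.
    """
    cutoff = 0.5 / factor                      # normalised to the original rate
    n = np.arange(taps) - (taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal low-pass impulse response
    h *= np.hamming(taps)                      # window to limit ripple
    h /= h.sum()                               # unity gain at DC
    filtered = np.convolve(signal, h, mode="same")
    return filtered[::factor]

fs = 1000.0                                    # assumed node sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)                  # 5 Hz content, below the new Nyquist
y = onboard_decimate(x, factor=4)              # stream reduced to 250 samples/s
print(len(x), "->", len(y))                    # 1000 -> 250
```

The transmitted stream is a quarter of the raw one, which is precisely the kind of lightweight pre-processing a star-topology node can afford.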

Hierarchical systems, usually organised in tree structures, can resolve the limitations of both centralised and independent processing approaches. Smart nodes are divided into hierarchical levels performing different tasks. Leaves (i.e., end devices) are generally used for data collection and filtering, while data are aggregated at the higher levels. Thus, lighter parameters are calculated at each level before the data are transmitted to the monitoring station. In the study of Gao and Spencer (2008), a hierarchical configuration was used for the online evaluation of the flexibility matrix, while in the study of Jindal and Liu (2012), singular value decomposition was performed on the data collected by small groups of sensors, thereby generating a tree-structured identification process.

Other implementations of decentralised WSSNs include mesh topologies, which are still rarely used in SHM applications due to the power needed for redundant multi-hop transmissions and the requirement of a complex routing scheme (Abdulkarem et al. 2020). Nevertheless, Mechitov et al. (2004) developed a mesh-compatible algorithm with transmissions limited to extracted features only. Linear, bus, and clustered hierarchical configurations have recently emerged as topologies of choice for WSSNs and could soon find applications in the field of SHM.

9.4.2 Data Rates

Dürager et al. (2013) performed a comparison of wireless signal measurement systems with regard to dimensions and weight, processor, wireless transmission system and its range, and energy supply. The sensor nodes discussed in the recent literature had one, four, or eight measuring channels each (Dong et al. 2015; Dürager et al. 2013). The signal resolution is usually 12 or 16 bits. Depending on the system, the sampling rate varies between approximately 100 kHz (Grosse et al. 2010) and 5 MHz (Wu et al. 2017) and the data transmission rate between approximately 40 kB/s (Dürager et al. 2013) and approximately 6 Mbit/s (Wu et al. 2017). The energy required for wireless signal transmission varies depending on the system and the operating mode (e.g. transmission or reception and active or inactive). Dürager et al. (2013) specified approximately 50 mW for “inactive” and 1.4 W for “active”/“transmit.”

A recently published review article (Ayaz et al. 2018) generally described the state of the art of wireless sensor technology, but did not deal with applications in the ultrasonic signal range, with the exception of acoustic methods for leak detection in pipelines (e.g. underwater hydrophones). Wireless underground networks were discussed by Trang et al. (2018), but again without reference to ultrasonic signal measurements. In general, the data transmission rates required for ultrasonic signal evaluation are a problem. Assuming a typical transmission capacity in the order of 100 Mbit/s (Alonso et al. 2018) and determining the required data transmission rate from the number of channels, the sampling rate, and an estimated maximum signal rate of 10³ s⁻¹, Fig. 9.3 shows the resulting requirements. The wireless transmission of recorded waveforms in quasi real time practically requires data transmission rates of >1 Gbit/s, which is not possible with existing systems.

Fig. 9.3
figure 3

Required data transmission rates in Mbit/s for ultrasonic signals with a 16-bit resolution for a different number of data acquisition channels and sampling rates, with a logarithmic scale of the y-axis. Technically possible: approximately 100 Mbit/s
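The requirements plotted in Fig. 9.3 follow directly from the product of channel count, sampling rate, and bit resolution; a short sketch over the parameter ranges quoted above illustrates the arithmetic:

```python
def waveform_rate_mbit_s(channels, sampling_rate_hz, bits=16):
    """Raw data rate (Mbit/s) for continuously streamed waveforms."""
    return channels * sampling_rate_hz * bits / 1e6

# Corner cases of the ranges quoted above:
print(waveform_rate_mbit_s(1, 100e3))   # 1.6 Mbit/s: easily transmitted
print(waveform_rate_mbit_s(8, 5e6))     # 640.0 Mbit/s: far above ~100 Mbit/s
```

Already at eight channels and 5 MHz the raw stream exceeds the available capacity several times over, consistent with the conclusion that quasi-real-time waveform streaming needs link rates in the Gbit/s class.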

One approach to solving this problem combines locally implemented data storage, possibly data compression, and optimised data transmission protocols (Wang et al. 2018). This can be useful if the average amount of data is limited (e.g. because data acquisition is not performed continuously, but only during peak loads of a structure, as in event-triggered systems). The intermediate storage then only has to absorb the peak demand (e.g. in the case of short-term significant damage and the corresponding generation of signals for an active or passive inspection of the structure), and 100 Mbit/s may be sufficient.

Another approach is to reduce the amount of data locally (e.g. by extracting only the parameters relevant for further evaluation) and to transmit these instead of the actually recorded high-frequency signals. Figure 9.4 shows the possible scope of such parameter data sets for the corresponding number of channels, assuming that each channel generates a maximum of 1000 data sets per second. Under these assumptions, data sets of approximately 2 kB each can be transmitted wirelessly.

Fig. 9.4
figure 4

Required data transfer rates for 1000 parameter sets per second and channel for different numbers of data acquisition channels and amount of data sets with the logarithmic scale of the y-axis. Technically possible: approximately 100 Mbit/s
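Reading Fig. 9.4 in reverse, the largest parameter set that fits a given link capacity can be estimated as follows (a sketch under the chapter's assumption of 1000 sets per second and channel):

```python
def max_parameter_set_bytes(capacity_bit_s, channels, sets_per_second=1000):
    """Largest parameter set (bytes) that fits the available link capacity."""
    return capacity_bit_s / (channels * sets_per_second * 8)

print(max_parameter_set_bytes(100e6, 1))   # 12500.0 bytes for a single channel
print(max_parameter_set_bytes(100e6, 6))   # ~2083 bytes, i.e. roughly 2 kB
```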

A lower sampling rate can, however, create issues for the algorithms analysing the data stream. Consider singular spectrum analysis: first, it works solely on single-channel data from the monitored stations (Bhowmik et al. 2019a); second, if the sampling rate is low, aliasing can occur, and the resulting small sample sizes become extremely difficult to analyse. For AE signals, a greater bandwidth is therefore usually preferred to ease the analysis of the data, which makes these considerations particularly relevant for such implementations.
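The aliasing risk mentioned above can be demonstrated in a few lines: a tone above the Nyquist frequency of a deliberately low sampling rate reappears at a false, lower frequency (the values below are illustrative only):

```python
import numpy as np

fs = 100.0                       # deliberately low sampling rate (Hz)
f_true = 90.0                    # tone above the Nyquist frequency (fs/2 = 50 Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * f_true * t)

# Locate the dominant spectral peak of the sampled record:
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
f_observed = freqs[np.argmax(spectrum)]
print(f_observed)                # 10.0: the 90 Hz tone aliases to fs - f_true
```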

At this stage, the evidence base around this topic is inadequate, and the above values are application-specific (sometimes specific to a single experiment) and therefore lack generality. This is due to the paucity of extensive round-robin experiments and the industry's reluctance to share data. Under such circumstances, these values are strongly related to:

  (a) the technology,
  (b) the sampling rate,
  (c) what is to be measured,
  (d) the error allowed,
  (e) the minimum size of event to be measured,
  (f) the resolution of the event changes to be measured,
  (g) the definition, extent, and structure of noise in the system,
  (h) the operational conditions,
  (i) the effect of extraneous variables,
  (j) the measurement regime, and
  (k) the data compression and signal fidelity.

The combination of so many factors makes this an extensive field that should be addressed; however, before such values can be quoted with confidence (and with confidence intervals, to allow for uncertainties), the authors choose to resort to the format presented above.

9.4.3 Synchronization

In SHM applications, the data collected at different structural locations usually need to be synchronised, especially when mode shapes are exploited for damage identification. Measured signals from sensing nodes with intrinsic local time differences can generate inaccuracies in the outcomes of the SHM process. The synchronization accuracy depends on the algorithm used to manage the data, the implemented communication layer, the network topology, and the specific application (e.g. due to the environmental conditions). Time synchronization errors may be caused by both clock offset and clock drift between sensor nodes. The former arises when the sensing nodes of a wireless sensor network (WSN) are not initialised contemporaneously, while the latter occurs because the rate of the crystal oscillator may differ from the design reference clock.

Time synchronization is considered an open challenge in scientific research (Abdulkarem et al. 2020). Several studies have addressed the effects of inaccurate synchronization on the monitoring results (Abdaoui et al. 2017; Krishnamurthy et al. 2008; Nguyen et al. 2014). The effects of time synchronization on the ability to identify mode shapes through the well-known frequency domain decomposition (FDD) were evaluated by Krishnamurthy et al. (2008), who demonstrated that the error in the mode shapes due to inaccurate synchronization depends on both the time shift and the modal frequency.
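The dependence on both quantities follows from the phase that a time shift introduces at a given frequency; a small sketch (not the formulation of the cited study) makes this explicit:

```python
def sync_phase_error_deg(time_shift_s, modal_freq_hz):
    """Phase error (degrees) introduced into a mode shape component
    by a synchronization offset at a given modal frequency."""
    return 360.0 * modal_freq_hz * time_shift_s

# The same 1 ms offset is harmless for a low mode but severe for a high one:
print(sync_phase_error_deg(1e-3, 1.0))    # 0.36 degrees at 1 Hz
print(sync_phase_error_deg(1e-3, 50.0))   # 18 degrees at 50 Hz
```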

A precise synchronization also enables the sensor nodes to transmit data in a scheduled time, preserving power and involving less collision and retransmission of the data. Hu et al. proposed an energy-balanced synchronization protocol for SHM applications (Hu et al. 2010).

However, accurate time synchronization of the processed signals and synchronised sensing do not necessarily coincide. Precise sensing timing control based on synchronised clocks is challenging, especially when two or more tasks, including sensing, are performed simultaneously, as is usually the case in smart systems. Accordingly, several solutions have been proposed involving both hardware and algorithms.

Huang et al. proposed a new design for the hardware cross-layer, achieving high-precision synchronization for single-hop transmission (Huang et al. 2015). Xiao et al. obtained a good performance for multi-hop communication (Xiao et al. 2017). Araujo et al. proposed a Zigbee-based solution directly involving the physical layer, employing the synchronization clock pulses transmitted by the master device to all the end devices (Araujo et al. 2012). The global positioning system (GPS) has also been used with a considerable power consumption and a poor indoor performance. Sazonov et al. proposed a hierarchical architecture with beacon synchronization for local clusters of sensors using the GPS time reference (Sazonov et al. 2010).

The frequent implementation of time synchronization protocols or algorithms may mitigate synchronization errors. Two main families of synchronization protocols exist: (1) sender–receiver protocols synchronise each sensing node with a reference node clock using bidirectional communication; and (2) receiver–receiver synchronization employs a broadcast transmission from a reference node to a group of sensing devices. However, synchronization protocols typically entail considerable power consumption. Recent studies (Abdaoui et al. 2017; Nguyen et al. 2014; Yan and Dyke 2010) proposed error-resilient algorithms aimed at reducing the influence of non-synchronous data on structural identification via software, thereby lowering the power consumption by reducing the use of synchronization protocols.
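A minimal sketch of the sender–receiver family is the classic symmetric two-way timestamp exchange (the timestamps below are invented for illustration):

```python
def estimate_offset_and_delay(t1, t2, t3, t4):
    """Symmetric two-way exchange (sender-receiver family).

    t1: request sent (node clock)       t2: request received (reference clock)
    t3: reply sent (reference clock)    t4: reply received (node clock)
    Assumes equal propagation delay in both directions.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # reference minus node clock
    delay = (t4 - t1) - (t3 - t2)            # total round-trip propagation
    return offset, delay

# Hypothetical exchange: node clock 5 ms ahead, 2 ms one-way delay.
offset, delay = estimate_offset_and_delay(100.000, 99.997, 99.998, 100.005)
print(offset, delay)   # approx. -0.005 s offset, 0.004 s round-trip delay
```

The estimated offset is then applied to the node clock; repeating the exchange periodically also compensates slow drift, at the price of the transmission power the text mentions.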

Other algorithmic solutions presented in the literature involve resampling-based approaches (Nagayama and Spencer 2008) applied after collection, without the need for strict timing control, which yielded an accuracy of approximately 30 μs.
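In the same spirit as such resampling-based approaches (though not the cited implementation), a record with known offset and drift can be mapped back onto the reference time grid by interpolation after collection:

```python
import numpy as np

def resync_by_resampling(samples, fs, offset_s, drift_ppm):
    """Map a record acquired with a drifting local clock back onto the
    reference time grid by interpolation (no strict timing control needed)."""
    n = len(samples)
    t_actual = offset_s + (np.arange(n) / fs) * (1 + drift_ppm * 1e-6)
    t_ref = np.arange(n) / fs
    return np.interp(t_ref, t_actual, samples)

fs = 1000.0
t = np.arange(1000) / fs
clean = np.sin(2 * np.pi * 5 * t)
# Simulate a node whose record is shifted by 300 us and drifts by 50 ppm:
drifted = np.sin(2 * np.pi * 5 * (3e-4 + t * (1 + 50e-6)))
fixed = resync_by_resampling(drifted, fs, offset_s=3e-4, drift_ppm=50)
print(np.max(np.abs(fixed[1:] - clean[1:])))   # residual on the order of 1e-4
```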

9.4.4 Power Management and Consumption

The longevity of WSNs is one of the main challenges for SHM applications using battery-powered sensing systems (Gao et al. 2018). Using a lithium-polymer (LiPo) battery (330 mAh, 4.2 V), Dürager et al. (2013) concluded that a typical SHM system based on guided ultrasonic waves can operate for only 0.24 h in actuator mode and only 0.51 h in sensing mode. For this reason, ambient energy is commonly used to self-power sensor networks. Moreover, within the aerospace industry, batteries may not be permitted in most structures; therefore, power must be managed through wireless methods or completely harvested, making this topic of utmost importance. However, the harvested power is typically not sufficient to completely supply wireless devices; hence, research studies on this topic have attempted to maximise the efficiency of WSNs from both the algorithmic and hardware viewpoints (Davidson and Mo 2014; Xu 2016).

Both dynamic power management (DPM) and dynamic voltage scaling (DVS) are suitable for optimizing the efficiency of WSNs. Figure 9.5 shows the effect of DVS on the power of a single processor: using various heuristics, the embedded system adjusts the supply voltage to the current workload, thereby reducing the power drawn (Park et al. 2009).

Fig. 9.5
figure 5

DVS scaling effect on the power of a single processor (Park et al. 2009)

In contrast to DVS, DPM selectively places idle components into lower power states, thereby decreasing the power consumption (Park et al. 2009). Figure 9.6 shows the power of a single device over time under DPM: entering and leaving the idle states incurs shutdown and wake-up delays, but the power consumed in these states is drastically reduced. Using DPM instead of DVS increases the power savings of the system by at least a factor of 10 (Park et al. 2009).

Fig. 9.6
figure 6

Dynamic power management for a single device (Park et al. 2009)
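Both mechanisms can be captured in a back-of-the-envelope model (the capacitance, voltages, and duty cycle below are illustrative assumptions, not values from Park et al. 2009):

```python
def dvs_power(c_eff, v, f):
    """Dynamic CMOS power, P = C_eff * V^2 * f: the quantity DVS exploits."""
    return c_eff * v * v * f

def dpm_average_power(p_active, p_sleep, duty_cycle):
    """Average power of a duty-cycled device under DPM
    (shutdown/wake-up transition overheads ignored in this sketch)."""
    return duty_cycle * p_active + (1 - duty_cycle) * p_sleep

# Halving both supply voltage and clock frequency cuts dynamic power 8-fold:
print(dvs_power(1e-9, 3.3, 100e6) / dvs_power(1e-9, 1.65, 50e6))   # approx. 8.0

# Sleeping 99% of the time yields the order-of-magnitude savings quoted for DPM:
print(dpm_average_power(100e-3, 1e-3, 0.01))   # approx. 2 mW vs 100 mW active
```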

A method of interest in this context is the electromechanical impedance (EMI) technique, which uses a (bonded or embedded) PZT transducer to monitor the mechanical impedance of a structure by relating it directly to the electrical impedance sensed by the transducer (Neto et al. 2011). This principle is employed in the construction of wireless impedance devices (WIDs). WID-3 (Farinholt et al. 2010), originally used for monitoring civil structures, is a low-powered sensor/transducer operating from 2.8 V. Its overall power consumption (Table 9.2) allows taking a single measurement per day for up to 5 years on the power provided by two lithium AA batteries.

Table 9.2 Drawn current and power consumption per mode (Farinholt et al. 2010)
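The 5-year figure can be sanity-checked with a simple duty-cycle calculation; the currents and capacity below are illustrative assumptions, not the values of Table 9.2:

```python
def battery_life_years(capacity_mah, i_sleep_ma, i_active_ma, active_s_per_day):
    """Battery life under a one-measurement-per-day duty cycle
    (self-discharge and temperature effects neglected)."""
    sleep_s = 86400 - active_s_per_day
    mah_per_day = (i_sleep_ma * sleep_s + i_active_ma * active_s_per_day) / 3600.0
    return capacity_mah / mah_per_day / 365.0

# Assumed figures: two lithium AA cells (~2600 mAh usable), one 10 s
# measurement per day at 30 mA, 0.05 mA sleep current:
print(battery_life_years(2600, 0.05, 30, 10))   # approx. 5.5 years
```

As the sketch shows, the sleep current, not the measurement itself, dominates the budget of such a device.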

In an effort to minimise power consumption, the digital-to-analogue converter (DAC) that generates the excitation signal required for the EMI technique (Zhou et al. 2010) can be omitted by replacing the initial sinusoidal excitation with a digital pulse train. Wang et al. (2007) suggested a similar wireless design with integrated multithreaded sensing; Fig. 9.7 depicts its schematic. All components were operated at 5 V, with the active and standby currents detailed per item, giving a total active current of 77 mA and a resting current of 100 mA for the unit.

Fig. 9.7
figure 7

Design of the wireless sensing unit (Wang et al. 2007)

A connectivity-driven synchronization method was also proposed to reduce the power consumption of WSNs (Anastasi et al. 2009). Specific nodes are activated as "coordinators" and remain awake while the other nodes are in sleep mode. If two sleeping nodes cannot connect either directly or via a coordinator, one of them becomes a coordinator, allowing the power consumption to be minimised. This approach is commonplace among WSNs. Liu et al. previously developed sensors that "wake up" at a current draw of 3 mA, with full power-down at 0.5 A (Liu et al. 2005).

9.4.5 Future Developments in Energy Harvesting and Power Management

Several sources of energy may be exploited in aerospace structures (Fig. 9.8), the most promising being thermal gradients and vibration (Le et al. 2015). Research performed at the Cardiff School of Engineering found that a thermoelectric generator may produce average power levels between 5 and 30 mW, whereas only up to 1 mW could be produced by a single vibrational energy harvester (Pearson et al. 2012). Moreover, using the quasi-static temperature difference in aerospace structures has shown great potential: thermoelectric devices can be used to harvest energy in an aircraft, where hot and cold surfaces are available (Elefsiniotis et al. 2013).

Fig. 9.8
figure 8

Taxonomy of energy-harvesting sources in a WSN (Shaikh and Zeadally 2016)

Vibration may be exploited through electrostatic devices, such as capacitors, to harvest energy, with the device capacitance varying with the vibration levels of the structure (Gilbert and Balouchi 2008). Research has shown that up to 40 μW could be generated by a single device if the monitored structure vibrates at a frequency of 2 Hz (Naruse et al. 2009).

Another method employed for energy harvesting exploits the strain energy from the deformation of the aircraft wing. Experiments conducted at the University of Exeter found that power levels of up to 3.34 mW could be produced, with an energy transfer efficiency of up to 80%. These experimental results give real promise for a fully developed system to be utilised in real-world applications (Chew et al. 2016).

However, more extensive research into long-lasting, self-sufficient energy harvesters is still needed. The biggest constraint to be overcome in the development of aerospace energy harvesters is the creation of materials that can handle extreme temperature fluctuations (Zhang and Yu 2011). To this end, more application-specific research with a focus on design is needed (Priya and Inman 2009). NASA is currently conducting studies on low-power sensors for use on hypersonic aircraft, where temperatures can reach in excess of 1000 °C. One approach involves chemical sensors, such as single-walled carbon nanotubes (SWNTs), which work due to their high responsiveness to chemicals such as nitrogen dioxide, acetone, and ammonia. These sensors are among the many research objects in the NASA roadmap that require the use of passive wireless low-energy electronics (Wilson and Atkinson 2014). Promising results have been achieved in the mentioned study with a system of wireless sensor nodes capable of making logic-based decisions, which are then transmitted to a central base station. However, powering such a system remains troublesome.

The reliability of devices must also be improved. For example, many vibration-based devices operate at their natural frequency resonances and will eventually become unstable after long periods of use (Priya and Inman 2009).


9.5 Data Management

Flight data management systems are an essential part of mandatory flight reporting, the recording and analysis of flight data, the improvement of operational safety, and aircraft maintenance. With the advent of modern computational techniques and the ability to process massive amounts of data in almost real time, several aircraft operators are already using advanced data processing routines to improve the profitability of their fleets. The expectations for such systems are generally high because the processing of in-flight data can be used to perform faster fault diagnosis, repair affected systems, reduce turn times by up to 50%, and reduce false alarms due to no-fault-found equipment returned for repair. In general, this is expected to lower maintenance costs and provide higher on-time delivery rates.

While much data from the regular flight avionics of the aircraft are already stored and processed, the integration of structural health monitoring systems faces several new challenges in providing reliable data storage, traceability, and liability. The specific types of SHM systems discussed in this work are considered onboard maintenance systems. Their primary function is to monitor the aircraft health (through continuous monitoring of all acquired data) and to diagnose issues quickly and accurately. The recorded data typically include all aircraft avionics during flight as well as flight crew alerts, such as voice records and crew reports. The combined data are used to diagnose the root-cause fault behind the observed symptoms and to correlate the fault with crew reports, so that the maintenance crew can quickly be informed of the required repair action.

9.5.1 Reliability

Digital sensors, processors, and data links have improved over time; hence, modern aircraft can produce huge amounts of data (cf. Sect. 9.1). These data are not completely processed in real time, nor necessarily onboard the aircraft; thus, they need to be transferred off the aircraft and stored for a certain amount of time. This is desirable for any sort of offline analysis, but it also causes a cascade of reliability issues. First, the data transfer itself must be reliable enough to inhibit data corruption due to technical issues and the vulnerability of transmission protocols. Second, the data must be persistently stored to avoid data corruption over time and to cover potential liability issues.

In the future, the required level of security of airborne systems will call for standard solutions, as is already the case for ground-based security systems. Existing solutions assume that personnel will not improperly handle keys or data; some existing systems assume that opaque connections, by their specialised nature, prevent attacks, or that system isolation is sufficient. These types of assumptions will not be acceptable to regulators or data users, as data and their use have become increasingly integrated into aircraft operations, as is the case for an SHM system.

As a result, cryptographic systems will be required for all systems in the aircraft and for communication with the aircraft. These systems require a verified authentication method to enable host and client checks before data are sent. Several companies are already working toward an architecture that provides an appropriate level of security while improving data availability through hardware-based cryptography and isolation to enable fast data processing.
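A minimal sketch of such a host/client check is a keyed-hash (HMAC) challenge-response, shown here with Python's standard library; this is an illustrative pattern only, not an aerospace-certified protocol:

```python
import hashlib
import hmac
import os

# The shared key would be provisioned out of band in practice.
KEY = os.urandom(32)

def respond(challenge: bytes, key: bytes) -> bytes:
    """Prove knowledge of the shared key for a given challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Check the response in constant time before releasing any data."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
print(verify(challenge, respond(challenge, KEY), KEY))           # True
print(verify(challenge, respond(challenge, b"wrong key"), KEY))  # False
```

Real deployments would run such checks in both directions (host and client) and layer them under certified transport encryption.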

Onboard maintenance systems also include aircraft data loading hardware (wired or wireless) used to provide a secure connection between ground systems and aircraft avionics. Specifically for critical flight tools, including navigation databases (charts and maps) and flight plans, this requires specifically certified hardware and software tools. In the opposite direction, the secured connection is used to download fault history database information from the aircraft's central maintenance computer.

9.5.2 Liability Issues

The current use of data in aerospace faces several emerging legal issues (Spreen 2019). New legal considerations with regard to aerospace technology are constantly emerging and changing over time. A topic of particular attention is the legal status and effect of the data resulting from aircraft operation. Huge amounts of aircraft-related data are involved; thus, questions arise regarding data ownership, liability for errors or misuse, and the control of data distribution.

OEMs have developed systems for generating and collecting data concerning the maintenance, repair, and overhaul sector. OEMs have been said to abuse their privileged access to data to dominate the aftermarket by unjustifiably taking business from other companies, a practice that raises potential antitrust issues. Other data-related issues include intellectual property rights and confidentiality laws. Business agreements involving aircraft operators are beginning to address the control and use of data and to manage data access and its applications. The contractual limits of data use continue to evolve (Helland and Tabarrok 2012).

The security of aircraft-related data systems is not a legal issue in itself, but the potential consequences of an insecure system open up numerous legal questions. The negative consequences for companies whose data are lost or hacked could be serious: new EU laws allow fines of up to 4% of worldwide turnover for operators of 'essential services' and 'digital service providers' that fail to adequately manage cyber-risks. Aircraft manufacturers and operators carry a clear legal liability if they fail to take due account of data system security. As digital systems permeate aircraft design and play a greater role in the actual control of aircraft in flight, numerous questions of product liability arise. The two Boeing 737 MAX crashes in 2018 and 2019 suddenly brought the role of digital autopilots into the public consciousness and raised questions regarding the individual responsibility of the human pilot and of the aircraft software. These issues are still the subject of lively discussion among regulators, in the courts, and within the industry, and have yet to be resolved.

SHM systems intend to provide a decisional basis for grounding the airplane or for putting it back into service; therefore, similar legal issues are expected and must be addressed to provide a system that is not only technically feasible, but also ready for practice.

9.5.3 Ground-Based Systems

In addition to onboard management systems, so-called ground-based tools are also used. These tools include loadable diagnostic information, report builders to help customise reports, and the management of system updates based on the aircraft operators' preferences. In general, the same regulations regarding technical reliability apply as for onboard systems; however, ground-based tools are subject to fewer of the other requirements: system weight is not an issue, data transfer rates can be much higher, and the equipment does not need to be specified for the harsh in-flight environment, but only for the typical requirements of operating equipment at an airport.

9.6 Conclusions

From the viewpoint of an aircraft-ready monitoring system, the data intensity of SHM systems still proves to be a general challenge. Specifically, in the context of wireless sensing systems and their energy demand, technical solutions have already been developed, but they could benefit substantially from the proper integration of modern data reduction strategies. The requirements for proper SHM integration depend entirely on the desired monitoring application: the nature of potential structural damage; the operating environment; the consequences of failing to detect damage; the consequences of false positives; the frequency at which data are required; the regulatory environment; the economic case in terms of capital costs, operational costs, opportunity costs; and many more. Therefore, as pointed out in Chaps. 2 and 4, the system operator needs to consider not solely the technical aspects, but also the complete framework required to safely operate an SHM system in an aerospace context.