1 Introduction

High temperature hydrogen attack and other degradation mechanisms such as intergranular attack cause problems in industrial environments and can lead to catastrophic failures [1]. Non-destructive detection of these phenomena would be desirable but is challenging to accomplish. Ultrasonic techniques in particular appear to have the potential to detect them, but the changes introduced by the degradation are small and very difficult to detect with conventional ultrasonic methods, which cannot operate at high temperatures and therefore require the plant to be shut down for inspections. They also suffer from coupling uncertainties when the probe has to be reattached. Monitoring changes with permanently attached sensors, however, is likely to be more successful, as uncertainties due to coupling are eliminated and a continuous flow of data is available.

Monitoring hydrogen attack and other high temperature degradation mechanisms such as intergranular attack with conventional ultrasonic transducers is not possible, as the piezoelectric material cannot withstand high temperatures (\(>\!200~{^\circ }\)C). There have been recent advances in developing high temperature piezo elements [2]; however, these are not readily available and the transducers are very difficult to construct, as all other materials such as solder connections and cables also have to survive the high temperature. Methods such as laser ultrasonics [3] can be used at high temperatures, but they are not suitable for large scale field applications. There are a limited number of commercially available detection techniques that are cost-effective and have the potential to measure small changes at elevated temperatures. The only commercially available technology for mass field deployment is by means of waveguides [4, 5]. Waveguide based shear horizontal wave transducers provide a low-cost means of thermally isolating the sensitive piezo elements from the high temperature specimen and can therefore be permanently attached to high temperature components.

The aim of this paper is to assess whether the data collected using the waveguide sensors are accurate enough to be used for the reconstruction of the ultrasonic propagation velocity distribution within the material, and by extension for the reconstruction of voiding-type material degradation. For this purpose different image reconstruction algorithms are considered, two of which are implemented and quantitatively compared.

This paper starts by reviewing the literature on the degradation mechanisms of hydrogen attack and intergranular attack, followed by a description of high temperature ultrasonic monitoring. Our approach of simulating the effect of degradation mechanisms on the ultrasonic properties by creating a non-uniform temperature distribution is then presented, followed by a review of imaging techniques. The Implementation section describes the imaging algorithms that were used; measurement results are then presented and discussed, and conclusions are drawn.

2 Background

2.1 Degradation Mechanisms

2.1.1 Hydrogen Attack

The phenomenon of hydrogen attack has attracted substantial attention over the years. The mechanism of the degradation is well known: it occurs in carbon steels when hydrogen diffuses into the steel at high partial pressures and reacts with the metal carbides to produce methane [6]. As a result, cavities filled with high pressure methane are formed. This degradation poses a complex problem as it can reduce the structural strength of the material [7]. Design codes based on the Nelson curves have been introduced to avoid the use of susceptible grades of steel in environments that promote hydrogen attack [8], but there have still been failures in equipment that has been in service for long periods [9] and the Nelson curves have been adjusted several times. Prescott [9] concludes that equipment operating under conditions that cause hydrogen attack should be treated as if it were degrading, even if it was designed according to the Nelson curves. It is therefore necessary to monitor the condition of the vessel in use.

In situ metallographic inspection is difficult to carry out without destructive tests; ultrasonic detection of hydrogen attack, however, is potentially achievable. The ultrasonic properties of the degraded material are expected to change due to the methane voids. This has previously been exploited, however the currently implemented detection techniques are very much operator dependent and the reliability of the testing is therefore subjective [10]. In addition, the accuracy of standard coupled velocity measurements is not well reported when used for material degradation. Yi et al. [11] carried out thickness measurements relying on time of flight using standard coupled probes and concluded that the uncertainty of the time of flight measurements may be up to \(0.2\) mm in \(10\) mm, i.e. \(2~\%\). This is not sufficient for accurate evaluation of hydrogen attack.

Based on a report by Eliezer [12], the diameter of the voids caused by hydrogen attack is of the order of \(2~{\upmu }\)m. The wavelength of the ultrasonic signal used (frequency in the range of 1–10 MHz) is of the order of \(1\) mm, which is almost three orders of magnitude larger than the microstructural changes; this suggests that these changes can be modelled as changes to the bulk ultrasonic parameters. Significantly higher frequencies cannot be used for the measurement because of excessive attenuation.

Chatterjee [13] proposes to estimate the changes by calculating the effective shear modulus and density (using a simple law of mixtures equation) of a voided material. This can be used to evaluate the altered ultrasonic propagation velocity in the following way:

$$\begin{aligned} \varrho _{effective}&=(1-v_f)\varrho _{material}+v_f\varrho _{void},\end{aligned}$$
(1)
$$\begin{aligned} G_{effective}&=G_{material}\left(1-\tfrac{45}{23}v_f+2.1v_f^2\right),\end{aligned}$$
(2)
$$\begin{aligned} c_{voided}&=\sqrt{\frac{G_{effective}}{\varrho _{effective}}}, \end{aligned}$$
(3)

where \(\varrho _{material}\) and \(\varrho _{void}\) are the densities of the bulk material and of the void, \(v_f\) is the void fraction, \(G\) is the shear modulus and \(c\) is the propagation velocity of shear waves. The resulting relationship between propagation velocity and void fraction in the range of interest of this study (\(v_f=0-3~\%\)) is close to linear:

$$\begin{aligned} c_{voided}&=p_1 \cdot v_f+p_0{} \end{aligned}$$
(4)

where \(p_1=-16.25~\frac{\mathrm{m/s}}{\%}\) and \(p_0=3246.7~{\mathrm{m}}/{\mathrm{s}}\) are the linear fit coefficients for \(v_f=0\!-\!3.5~\%\). The maximum error of the fit is \(0.44~{\mathrm{m}}/{\mathrm{s}}=0.01~\%\).
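To make this relationship concrete, the sketch below (Python/NumPy) evaluates Eqs. 1–3 and reproduces a linear fit of the form of Eq. 4. The steel density, the negligible void density and the zero-void velocity are illustrative assumptions rather than values taken from the model, so the resulting coefficients only approximate those quoted above.

```python
import numpy as np

# Sketch of Eqs. (1)-(4): effective density and shear modulus of a voided
# steel and the near-linear velocity vs. void-fraction relationship.
# The material constants below are illustrative assumptions.
rho_material = 7850.0        # kg/m^3, typical carbon steel (assumed)
rho_void     = 0.0           # methane-filled voids; density neglected (assumed)
c_0          = 3246.7        # m/s, zero-void shear velocity (p_0 of the linear fit)
G_material   = rho_material * c_0**2   # chosen so that c(v_f = 0) = c_0

def c_voided(v_f):
    """Shear velocity of the voided material for void fraction v_f (as a fraction)."""
    rho_eff = (1.0 - v_f) * rho_material + v_f * rho_void           # Eq. (1)
    G_eff   = G_material * (1.0 - 45.0/23.0 * v_f + 2.1 * v_f**2)   # Eq. (2)
    return np.sqrt(G_eff / rho_eff)                                 # Eq. (3)

# Linear fit of the form of Eq. (4) over v_f = 0-3.5 % (v_f in percent).
vf_pct = np.linspace(0.0, 3.5, 100)
c      = c_voided(vf_pct / 100.0)
p1, p0 = np.polyfit(vf_pct, c, 1)
print(f"p1 = {p1:.2f} m/s per % void, p0 = {p0:.1f} m/s")
print(f"max fit error = {np.max(np.abs(np.polyval([p1, p0], vf_pct) - c)):.2f} m/s")
```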

There are more advanced models than the Chatterjee model, see e.g. Hirsekorn et al. [14] and Caleap et al. [15]. At late stages of the material degradation, approaches such as that proposed by Bowler et al. [16] should also be considered. The Chatterjee model, however, is a suitable approximation at low void fractions where the voids are uniformly distributed, and these assumptions are expected to be valid at the onset stage of hydrogen attack.

Measuring these changes is difficult using standard ultrasonic inspection techniques as the variability introduced by coupling can cause phase shifts, which decreases the precision of the measurement significantly [17]. Permanently installed sensors however have the advantage of stable coupling conditions and so the measurement can easily be automated and its precision is improved.

In order to evaluate the viability of monitoring these small changes in ultrasonic velocity, and of spatially resolving them, a proposed monitoring setup and the associated signal processing techniques are investigated in this paper.

2.2 Monitoring Concept at High Temperatures

In order to achieve minimum variability, the experiments described in this paper were carried out with permanently installed ultrasonic sensors using waveguides. These would be essential for in-service monitoring of high temperature hydrogen attack as they thermally isolate the temperature sensitive piezoelectric transducer from the high temperature specimen. For more information about the waveguide transducer the reader is referred to [4, 5].

A sensor array was built using 20 waveguide transducers with a pitch of \(3\) mm, altogether covering a distance of \(57\) mm, as shown in Fig. 1. This assembly was used for a full matrix capture acquisition of pitch-catch measurements, in which each transducer was fired in turn while the other transducers were used as receivers, resulting in \(19\cdot 20=380\) waveforms per measurement.
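The acquisition loop can be summarised by the short sketch below; the acquire(tx, rx) call stands in for whatever interface the array controller provides and is purely illustrative.

```python
# Sketch of the full matrix capture loop described above: each of the 20
# waveguides transmits in turn while the remaining 19 receive, giving 380
# pitch-catch waveforms per acquisition. acquire(tx, rx) is a placeholder.
n_waveguides = 20

def full_matrix_capture(acquire):
    waveforms = {}
    for tx in range(n_waveguides):
        for rx in range(n_waveguides):
            if rx != tx:                      # pitch-catch pairs only
                waveforms[(tx, rx)] = acquire(tx, rx)
    return waveforms                          # 19 * 20 = 380 waveforms
```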

Fig. 1
figure 1

The ultrasonic sensor array used in the experiments discussed in this paper (20 waveguides, dry coupled to steel specimen)

Ideally the transducers are excited with a 2 MHz five cycle Hann windowed toneburst, as this signal has a narrow enough bandwidth to work with the waveguide transducers while still being short enough in the time domain for the different wavepackets to be separate and easily identifiable. The toneburst sent to the piezoelectric element is converted into shear horizontal waves, which propagate along the transmitter waveguide to the contact surface with the test piece. The wave is partially transmitted into the material and travels to the receiving waveguide via several paths (as shown in Fig. 2). When the waves reach the contact surface of the receiver waveguide they are transmitted into it and propagate along the receiver waveguide; once they reach the receiver piezoelectric element they are converted back into electrical signals.

Fig. 2
figure 2

Paths of the surface (dashed line), first backwall echo (continuous line) and second backwall echo (dotted line) waves between waveguides number 6 and 14

As the monitored waves have propagated through different paths within the specimen, they carry essential information about the ultrasonic properties of the different spatial regions within the material. Our main focus is to extract the times of flight in order to gain insight into the spatial distribution of the ultrasonic propagation velocity, and hence to be able to monitor material degradation. Dines et al. [18] conclude that below a \(16\%\) velocity contrast, acceptable images can be reconstructed by assuming that ray paths remain straight. The velocity contrast in all images in this paper is below \(1~\%\), therefore we use the straight ray assumption without expecting poor reconstruction due to ray bending.

2.3 Non-uniform Propagation Velocity Distribution

As discussed earlier, the changes introduced by high temperature degradation mechanisms are modelled by a local change in the ultrasonic propagation velocity. In order to estimate the accuracy of our methods and setup, heat was applied to the test specimen to create a non-uniform ultrasonic propagation velocity distribution. According to [5] and the calibration described in the next section, the relationship between the local temperature and the ultrasonic propagation velocity is close to linear, therefore the conversion between the two is straightforward to implement. The waveguide sensor assembly used for our measurements is designed to withstand high temperatures, and this approach has the additional advantage that the propagation velocity changes can be introduced in a short period of time.

Our goal was to create a 2D temperature distribution within the measurement plane of the test piece. To achieve this, a \(100\) mm long, 500 W cylindrical (\(D=10\) mm) heating element (RS Components Ltd., Birchington Road, Corby, Northants, NN17 9RS, United Kingdom; stock number 724–2103) was used to create a temperature distribution that could be modelled in 2D at the central plane of the test piece, as shown in Fig. 3. Since the relationship between the local temperature and the ultrasonic propagation velocity can be calibrated (as described in the next section), a one dimensional ultrasonic array attached to the surface of the plate can monitor the 2D spatial changes in the ultrasonic propagation velocity within the material. This configuration has also been investigated in simulations (illustrated by Fig. 4) in order to assess the proposed reconstruction techniques in a noise-free environment. The simulations were based on a two-dimensional steady state conduction model described in [19].

Fig. 3
figure 3

Sketch of the setup with the steel specimen and cylindrical heating element. The central temperature profile is assumed to be two dimensional and hence simulations of this region are shown in Fig. 4. The locations of the thermocouples relative to the test piece and the sensor assembly are shown as \(T_1-T_5\). (The location of \(T_3\) is at \(x=0\) as shown in the figure)

Fig. 4
figure 4

An example of a simulated 2D temperature distribution. The parameters of the simulation are described in Sect. 4

2.4 Calibration of the Ultrasonic Propagation Velocity’s Dependence on Temperature

In order to convert the propagation velocity values to temperature, a calibration measurement is needed. The sensor assembly comprising 20 waveguide sensors and its clamp, described in Sect. 2.2, was slowly heated up to \(120~{^\circ }\)C with a hotplate while the temperature distribution was measured using 5 K-type thermocouples at the locations shown in Fig. 3. The heating gradient of the hotplate was chosen to be sufficiently low to ensure a uniform temperature distribution, as verified by the thermocouples (all agreeing to within less than \(1~{^\circ }\)C). Based on Eq. 5 and assuming a homogeneous temperature distribution, the calibrated propagation velocity–temperature curve was calculated according to the following equations and is shown in Fig. 5.

Fig. 5
figure 5

Measured ultrasonic shear velocity within the temperature range from 25 to \(116~{^\circ }\)C (crosses) and their linear fit (continuous line). Each measurement point is the average of 380 waveforms measured at each temperature level

$$\begin{aligned} c^{calib}_{ij}&=\frac{2\sqrt{\frac{d_{ij}^2}{4}+L^2}-d_{ij}}{T^{SBW}_{ij}}\end{aligned}$$
(5)
$$\begin{aligned} \overline{c}^{calib}&=\mathrm{mean}(c^{calib}_{ij})\end{aligned}$$
(6)
$$\begin{aligned} \overline{c}^{calib}&=k_1t+k_0{} \end{aligned}$$
(7)

where \(d_{ij}\) is the nominal separation between waveguides number \(i\) and \(j\) (e.g. in Fig. 2 \(i=6\) and \(j=14\)), \(L\) is the thickness of the test piece, \(c^{calib}_{ij}\) is the calibrated propagation velocity at each temperature between waveguides number \(i\) and \(j\), \(T^{SBW}_{ij}\) is the time of flight extracted using the signal processing methods described in detail in Sect. 5.1.1, \(\overline{c}^{calib}\) is the average calibrated velocity at each temperature level (calculated as the arithmetic mean of all obtained \(c^{calib}_{ij}\) values), \(k_1\) and \(k_0\) are the parameters determined by the calibration, and \(t\) is the temperature. Altogether 380 waveforms were evaluated at each temperature level, so each calibration point is the average of 380 propagation velocity values. According to [5] this curve is close to linear, which is confirmed by the results of the calibration measurement shown in Fig. 5. The assumed linear relationship between the propagation velocity and temperature is described by Eq. 7. The linear fit to the calibration points resulted in the following constants: \(k_0=3254.9~{\mathrm{m}}/{\mathrm{s}}\) and \(k_1=-0.4981~\frac{\mathrm{m/s}}{{^\circ }\mathrm{C}}\). Considering the results of the calibration measurement and the estimated effects of hydrogen attack, the ultrasonic velocity change over the temperature range investigated in this study (\(20\!-\!110~^{{\circ }}\mathrm{C}\)) is equivalent to a void fraction of \(0\!-\!3.5~\%\) of hydrogen attack (using Eq. 4 as an estimate).
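As an illustration of the calibration procedure, the sketch below evaluates Eqs. 5 and 6 for one temperature level and fits the linear relationship of Eq. 7 across levels. The plate thickness and the structure of the input data are assumptions made only for this example.

```python
import numpy as np

# Sketch of the calibration of Eqs. (5)-(7). Geometry values that are not
# stated explicitly in the text are illustrative assumptions.
pitch = 3.0e-3                 # m, waveguide pitch (Sect. 2.2)
L     = 40.0e-3                # m, plate thickness (assumed for illustration)

def mean_calibrated_velocity(T_SBW):
    """Average calibrated velocity (Eqs. 5, 6) at one temperature level.

    T_SBW: dict mapping (i, j) waveguide index pairs to the measured
           backwall-minus-surface time difference in seconds.
    """
    c_ij = []
    for (i, j), t in T_SBW.items():
        d_ij = abs(i - j) * pitch                                       # nominal separation
        c_ij.append((2.0 * np.sqrt(d_ij**2 / 4.0 + L**2) - d_ij) / t)   # Eq. (5)
    return np.mean(c_ij)                                                # Eq. (6)

def fit_velocity_temperature(temperatures, mean_velocities):
    """Linear fit of Eq. (7): one mean velocity per temperature level."""
    k1, k0 = np.polyfit(temperatures, mean_velocities, 1)
    return k1, k0   # expected to be close to -0.50 (m/s)/degC and 3255 m/s
```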

2.5 Reconstruction Algorithms

As described in Sect. 2.3 the goal of the reconstruction is to quantitatively extract the ultrasonic propagation velocity map within the material based on the data from the waveguide sensor array. It is therefore important to choose a reconstruction algorithm suitable for the conditions of the measurements described in this paper. In order to choose the appropriate imaging approach the main aspects of currently existing techniques are considered, namely the underlying physical assumptions and possible solution methods.

Several possible wave propagation modelling approaches may be considered from an imaging point of view. The most widely used modelling approaches are the straight and bent ray approximations [20, 21], both of which ignore diffraction, so the potential resolution of the reconstructed image is limited. The advantage of these approaches, however, is that they are relatively easy to implement and should result in a robust algorithm, especially in the case of a low contrast image.

In order to account for diffraction, the Born or the Rytov approximation is commonly considered [22]. Their advantage is a potential resolution gain; however, these assumptions are highly restrictive as they require the observed object to be of low contrast and small relative to the wavelength, and they potentially result in the reconstruction being more sensitive to noise.

Another option is the non-linear, full wave inversion method [23]. This approach uses a numerical approximation (e.g. finite difference method) of the underlying wave equation as its physical model. The selected solution method must then determine a suitable set of parameters (e.g. material properties at all points on a grid) such that the signals from the model match the measurements from the array. In theory this approach avoids the problems associated with the approximations described above, however its implementation is complicated and experimental issues are difficult to account for using a forward model, so very high signal-to-noise ratio data, taken from a very controlled environment is required for such a method to be of practical use.

As mentioned above, another critical aspect of the imaging approach to consider is its solution method. Traditionally direct solution methods were used, often based on the Fourier transform (for example straight ray tomography based on the Fourier Slice theorem) [22]. Such an approach is particularly attractive if the reconstruction is carried out with data from a simple array configuration, such as a circle, which allows parallel projections through the object or if computing resources are limited. Fast modern processors, however, allow iterative algorithms to be employed; iterative methods are often easier to implement and are suitable for more general sensor configurations.

In this paper the imaging is carried out based on the projection data measured by a waveguide sensor array; the limited field of view of the setup, combined with the high level of noise, means that little additional information could be extracted through more accurate physical modelling. A straight ray imaging approach using the Kaczmarz method as an iterative solver [24] was therefore selected for reconstructing the velocity map. Altogether this has the advantage of being relatively insensitive to noise and fairly simple to implement while still providing an accurate reconstructed image [22]. The details of the implementation are discussed in Sect. 3.1.

As an alternative approach to address the problem of the limited field of view and the noise levels of our measurements, an Assumed Distribution method is considered. Our expectation is that the most apparent issue of the reconstruction will be the lack of sufficient vertical resolution regardless of the reconstruction method, as the dataset simply does not include horizontal projections. Our proposal for an alternative reconstruction approach is therefore to assume a vertical distribution of the ultrasonic propagation velocity based on considerations related to the cause of the propagation velocity change. This allows us to replace the data from the low angle, long wavepaths, which are the lowest signal-to-noise ratio waveforms of the dataset, with an assumed vertical propagation velocity distribution. In the case of this study the propagation velocity change is caused by temperature inhomogeneities around a point-like heat source, which we approximate by an exponential distribution as further explained in Sect. 3.2. As hydrogen attack is linked to the diffusion of hydrogen into the steel, it may be possible to model it in a similar way to the diffusion of heat.

3 Implementation of Reconstruction

Based on the approach introduced in previous sections, the reconstruction of the spatial ultrasonic propagation velocity distribution from the time of flight data acquired by our sensors is considered in this section. Two different algorithms are investigated: the Kaczmarz algorithm, which uses only geometrical assumptions about the positions of the waveguide transducers and the time of flight data extracted from the 380 acquired waveforms; and the Assumed Distribution method, which uses only the data acquired by adjacent transducers and assumptions about the temperature distribution within the material. These methods are described in detail below.

3.1 The Kaczmarz Algorithm (Algebraic Reconstruction Technique)

The assumption of the Kaczmarz algorithm is that the image reconstruction based on the observed data is described by the following equation:

$$\begin{aligned} Ax=b, \end{aligned}$$
(8)

where \(b=(b^1,\ldots ,b^M)\in \mathbb {R}^M\) is the observed data (in our case the time of flight data), \(x=(x^1,\ldots ,x^N)\in \mathbb {R}^N\) is the actual image (the distribution of the ultrasonic shear wave velocity in the sample), and \(A=(A_{ij})\) is a non-zero \(M{\times }N\) matrix that describes the relationship between the observed data and the points of the image. Each row of matrix \(A\) therefore contains the coefficients linking one wavepath to all of the points of the image. The main difficulties of the reconstruction based on Eq. (8) are the large data dimension and the noise in the observed data. The Kaczmarz method (also referred to as the Algebraic Reconstruction Technique (ART) [24]) is one of the most popular solvers of overdetermined linear systems [25, 26].

Because of its iterative nature this approach addresses the problem of large data dimensions. It is also relatively simple to implement - every iteration step calculates:

$$\begin{aligned} x_{k+1}=x_k+\frac{b_i-{\langle }a_i,x_k{\rangle }}{||a_i||^2_2}a_i, \end{aligned}$$
(9)

where \(x_k\) is the \(k\)th iteration of the reconstructed image, \(i=(k \bmod M)+1\) and \(a_1,\ldots ,a_M \in \mathbb {R}^N\) denote the rows of \(A\). The algorithm therefore cycles through the rows of \(A\) and adjusts a part of the reconstructed image based on the constraint described by the given row of \(A\) and the measured data (\(b\)). This essentially means that in each cycle the algorithm adjusts some of the pixels in the image (as described by the rows of \(A\)) based on the backwall echo arrival time of each wavepath. After cycling through the data enough times the image is expected to converge to the real distribution.

In order to increase the convergence rate of the original Kaczmarz algorithm, a randomization is introduced: instead of processing the rows one after another, they are selected in a random order [25], with the aim of speeding up the iteration.

It is therefore necessary to set the probability of selecting each row. Strohmer and Vershynin in [26] and [27] propose to set this probability proportional to the squared Euclidean norm of the row, and the revised algorithm is therefore described by:

$$\begin{aligned} x_{k+1}=x_k+\frac{b_{p(i)}-{\langle }a_{p(i)},x_k{\rangle }}{||a_{p(i)}||^2_2}a_{p(i)}, \end{aligned}$$
(10)

where \(p(i)\) takes values in \(\{1,\ldots ,M\}\), row \(j\) being selected with probability \(\frac{||a_{j}||^2_2}{||A||^2_F}\). Here \({||A||_F}\) denotes the Frobenius norm of \(A\). The implementation of this algorithm and the calculation of the constants are described in the next section.
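A minimal sketch of the randomized update of Eq. 10 is given below; the fixed iteration count and the treatment of the starting image are assumptions of the example rather than details prescribed above.

```python
import numpy as np

def randomized_kaczmarz(A, b, x0, n_iter=20000, rng=None):
    """Sketch of the randomized Kaczmarz update of Eq. (10).

    Rows are drawn with probability proportional to their squared Euclidean
    norm, ||a_i||^2 / ||A||_F^2, as proposed by Strohmer and Vershynin.
    """
    rng = np.random.default_rng() if rng is None else rng
    row_norms_sq = np.sum(A**2, axis=1)
    probs = row_norms_sq / np.sum(row_norms_sq)          # ||a_i||^2 / ||A||_F^2
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        i = rng.choice(A.shape[0], p=probs)              # random row selection
        a_i = A[i]
        x += (b[i] - a_i @ x) / row_norms_sq[i] * a_i    # Eq. (10)
    return x
```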

The calculated average velocities for each wavepath are used as input data (see Sect. 5.1.1). In order to discretise the spatial distribution of the propagation velocity, a grid was created to serve as the image of the reconstructed velocity map. The resolution of the image can be chosen arbitrarily; in this paper it was chosen to be 1.5 pixels per millimetre, resulting in an image of 85 by 57 pixels. The reconstruction also requires the matrix \(A\) (in Eq. 8) to be determined. This matrix quantifies the relationship between the velocity at each pixel and the measured data. Each pixel is assumed to affect the average velocity of a wavepath according to a weighting distribution; in this calculation a polynomial distribution function, weighted by the \(y\) coordinate of the pixel and described by Eqs. 11 and 12, has been used, which is necessary to produce a smooth image.

$$\begin{aligned} P_{mn}&=\frac{M_L-M_0}{L}y_{n}+M_0{},\end{aligned}$$
(11)
$$\begin{aligned} A_{mn}&=d^r-\left(\frac{l}{P_{mn}}\right)^r,{} \end{aligned}$$
(12)

where \(P_{mn}\) is the weighting coefficient based on the \(y_n\) coordinate of pixel \(n\), \(r\) is the exponent of the polynomial distribution, \(d\) is the nominal separation between neighbouring waveguides, \(l\) is the distance between the given wavepath and the pixel, and \(A_{mn}\) are the elements of the matrix \(A\) defined by Eq. 8. The distribution described by Eq. 12 (effectively the shape of an upside-down parabola) takes negative values; these have to be replaced by zeros in order to achieve the intended behaviour. The approximation therefore weights pixels close to the wavepath more heavily than those further away from it, as shown in Fig. 6. The following values have been used for the coefficients mentioned above: \(r=0.1\), \(M_0=0.5\), \(M_L=4\). As an example, one row of the \(A\) matrix is shown in Fig. 6 reshaped as an image.

Fig. 6
figure 6

Coefficients for the wavepath between waveguide number 3 and 12 and each point of the velocity map

With all the constants defined, the reconstruction algorithm requires an estimated image to start the iteration with. For this purpose the calculated propagation velocities for each wavepath are averaged for each pixel, weighted by the corresponding coefficients in matrix \(A\); the resulting image is taken as step 0.
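The sketch below illustrates how the weighting of Eqs. 11 and 12 and the step 0 image described above could be assembled. The plate thickness, the coordinate convention (backwall at \(y=0\), sensors at \(y=L\)) and the interpretation of \(l\) as the perpendicular distance from a pixel to the V-shaped backwall echo path are our assumptions for this example.

```python
import numpy as np

# Sketch of assembling the system matrix A (Eqs. 11, 12) and the "step 0"
# image used to start the Kaczmarz iteration. Values not given explicitly
# in the text are illustrative assumptions.
pitch, n_wg = 3.0e-3, 20            # waveguide pitch and count (Sect. 2.2)
L           = 38.0e-3               # plate thickness (assumed)
nx, ny      = 85, 57                # image size in pixels (1.5 px/mm)
r, M0, ML   = 0.1, 0.5, 4.0         # coefficients quoted in the text
d           = pitch                 # nominal separation of neighbouring waveguides

xs = np.linspace(0.0, (n_wg - 1) * pitch, nx)   # pixel x coordinates
ys = np.linspace(0.0, L, ny)                    # y = 0 is the backwall
X, Y = np.meshgrid(xs, ys, indexing="ij")

def point_segment_distance(px, py, ax, ay, bx, by):
    """Perpendicular distance from the points (px, py) to the segment (a)-(b)."""
    abx, aby = bx - ax, by - ay
    t = np.clip(((px - ax) * abx + (py - ay) * aby) / (abx**2 + aby**2), 0.0, 1.0)
    return np.hypot(px - (ax + t * abx), py - (ay + t * aby))

def row_of_A(xi, xj):
    """Row of A for the backwall echo path transmitted at x = xi, received at x = xj."""
    xm = 0.5 * (xi + xj)                              # reflection point on the backwall
    dist = np.minimum(point_segment_distance(X, Y, xi, L, xm, 0.0),
                      point_segment_distance(X, Y, xm, 0.0, xj, L))
    P = (ML - M0) / L * Y + M0                        # Eq. (11)
    A_mn = d**r - (dist / P)**r                       # Eq. (12)
    return np.maximum(A_mn, 0.0).ravel()              # negative weights set to zero

pairs = [(i, j) for i in range(n_wg) for j in range(n_wg) if i != j]
A = np.array([row_of_A(i * pitch, j * pitch) for (i, j) in pairs])

# Step 0: weighted average of the measured path-average velocities per pixel
# (placeholder data used here in place of the measured values).
c_avg = np.full(len(pairs), 3250.0)
x0 = (A.T @ c_avg) / np.maximum(A.sum(axis=0), 1e-12)
```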

3.2 Assumed Distribution Method

An alternative reconstruction method is proposed based on the following consideration: the temperature distribution is assumed to be exponential around the heat source, therefore its spatial distribution can be described by the following:

$$\begin{aligned} t(r)&= \exp {(-q_1 \cdot r+q_0)}+t_{0},\end{aligned}$$
(13)
$$\begin{aligned} t(x,y)&= \exp {(-q_1 \sqrt{(x-x_0)^2+(y-y_0)^2}+q_0)}+t_{0}, \end{aligned}$$
(14)

where \(r\) is the distance from the point-like heat source, \(x\) and \(y\) are the horizontal and vertical coordinates of the point where the temperature is evaluated, \(x_0\) and \(y_0\) are the coordinates of the heat source, \(t_{0}\) is a temperature constant describing the asymptote of the temperature distribution function and \(q_1\) and \(q_0\) are the parameters for which the equation will be solved. In practice \(x_0\) is determined as the mean \(x\) coordinate of the waveguide pair registering the highest temperature (which is simply the waveform with the biggest change in time of flight), whereas \(y_0\) is assumed to coincide with the bottom surface of the flat backwall. The relationship between the propagation velocity and the temperature is assumed to be linear, as defined by Eq. 7. Equations 7 and 14 yield the formulation of the spatial distribution of the propagation velocity (Eq. 15):

$$\begin{aligned}&c(x,y)\nonumber \\&\quad =k_1\cdot (\exp {(-q_1\cdot \sqrt{(x\!-\!x_0)^2\!+\!(y\!-\!y_0)^2}\!+\!q_0)}\!+\!t_{0})\!+\!k_0\nonumber \\ \end{aligned}$$
(15)

In order to evaluate the function described by Eq. 15, the time of flight data from neighbouring waveguides are taken into consideration: the closer the waveguides are, the higher the amplitude of the received signal, which results in a high signal to noise ratio and low variability in the measurements. Using \(c^{S,corr}_{ij}\) (defined in Sect. 5.1.1) and the surface velocity as boundary conditions, the equation can be solved for \(q_0\) and \(q_1\) in an iterative way.

$$\begin{aligned} q_0(x)&=\ln (\frac{c_{surface}-k_0}{k_1}-t_{0})+q_1\sqrt{x_{ij}^2+y_{ij}^2}\end{aligned}$$
(16)
$$\begin{aligned} c^{corr}_{ij}&\cong \frac{L}{\int _{0}^{L}\frac{1}{k_1(\exp {(-q_1\cdot \sqrt{(x_{ij}-x_0)^2+(y-y_0)^2}+q_0)}+t_{0})+k_0}dy}, \end{aligned}$$
(17)

where \(x_{ij}, y_{ij}\) are the coordinates of the surface point halfway between waveguides \(i\) and \(j=i+1\) and \(y\) is the vertical coordinate of the pixel to be evaluated (the vertical resolution can be arbitrary, as the assumed temperature distribution function can be evaluated at any number of points). The iteration seeks the value of \(q_1\) for which Eq. 17 holds; this can be achieved using the bisection method, with \(q_1\) as the parameter and Eq. 17 as the equation to solve.

The only constant not quantified so far is \(t_0\), the asymptote of the temperature distribution function. This constant has to be set carefully: if its value is set too low, the estimated temperature of the hotspot will be lower than its actual temperature. It is certain, however, that the value of \(t_0\) has to be lower than the temperature of the coldest point within the test piece, because it denotes the asymptote of the temperature distribution curve. The value of \(t_0\) is therefore set equal to the surface temperature, as this is the lowest known temperature within the material, which ensures that the temperature of the hotspot will be over-estimated. (In the case of degradation monitoring, over-estimation of the defect is preferable to under-estimation for safety reasons.) Once \(q_1\) and \(q_0\) are obtained, the temperature distribution based on the Assumed Distribution method can be reconstructed.
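A minimal sketch of this procedure for a single neighbouring waveguide pair is given below: Eq. 16 anchors \(q_0\) to the surface value, Eq. 17 is evaluated by numerical integration, and a bracketing root finder plays the role of the bisection described above. Every numerical value in the sketch (geometry, boundary velocities, the value of \(t_0\)) is a placeholder chosen only so that the example runs; in particular, \(t_0\) must remain below the temperature implied by the surface velocity so that the logarithm in Eq. 16 is defined.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Sketch of the Assumed Distribution reconstruction (Eqs. 13-17) for one
# neighbouring waveguide pair. All numbers below are illustrative placeholders.
k1, k0     = -0.4981, 3254.9     # calibration constants (Sect. 2.4)
L          = 38.0e-3             # plate thickness in m (assumed)
x0, y0     = 0.0, 0.0            # assumed heat source position on the backwall
x_ij, y_ij = 4.5e-3, L           # surface midpoint between waveguides i and j (example)
t0         = 25.0                # asymptote, set to the measured surface temperature (placeholder)
c_surface  = k1 * 30.0 + k0      # local surface velocity, here equivalent to 30 degC (placeholder)
c_corr     = 3235.0              # measured path-average velocity c^{S,corr}_{ij} (placeholder)

def q0_of_q1(q1):
    # Eq. (16): anchor the exponential to the known surface value
    r_surf = np.hypot(x_ij - x0, y_ij - y0)
    return np.log((c_surface - k0) / k1 - t0) + q1 * r_surf

def c_of_depth(y, q1, q0):
    # Eq. (15): local velocity at height y on the vertical line below the midpoint
    r = np.hypot(x_ij - x0, y - y0)
    return k1 * (np.exp(-q1 * r + q0) + t0) + k0

def path_average_velocity(q1):
    # Eq. (17): slowness-weighted (harmonic) average over the vertical path
    q0 = q0_of_q1(q1)
    slowness_integral, _ = quad(lambda y: 1.0 / c_of_depth(y, q1, q0), 0.0, L)
    return L / slowness_integral

# Root finding on q1 so that the modelled path average matches the measurement;
# the bracket is chosen so that the residual changes sign for these placeholders.
q1 = brentq(lambda q: path_average_velocity(q) - c_corr, 1.0, 100.0)
q0 = q0_of_q1(q1)
```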

4 Reconstruction of Simulated Data

In order to evaluate the implementations of the reconstruction methods described in Sect. 3, they are compared using a simulated temperature distribution so that the effect of noise can be eliminated. The simulation was based on the two-dimensional steady state conduction model described in [19]. All boundaries were set to be convective. The temperature constant was chosen to be \(t_0=51~{^\circ }\)C and the heat convection constant (describing the heat transfer between the sample and the air during cooling) to be \(h=1~\frac{\mathrm{W}}{\mathrm{m}^2\,\mathrm{K}}\). The resulting temperature distribution is shown in Fig. 7a.

Fig. 7
figure 7

Reconstructed temperature distribution estimated from times of flight calculated from a simulated temperature distribution (a), using the Assumed Distribution method (b) and using the Randomized Kaczmarz algorithm (c). All of these images are displayed on identical color-scales as shown. (The array of sensors is located along the top edge of the image.) For better numerical comparability, panel d shows the horizontal temperature distribution at \(y=0\) mm—the continuous line shows the actual simulated temperature on the backwall, the dashed line shows the temperature distribution reconstructed by the Assumed Distribution method and the grey dotted line shows the distribution reconstructed by the Kaczmarz method

The constants determined by the calibration in Sect. 2.4 were used to convert the temperature map into velocities, so the conversion between the two is linear. The time of flight values and velocities were computed analytically without simulating ultrasonic waveforms. The locations of the wavepaths relative to this velocity map were determined in the following way: the endpoint coordinates of the wavepaths were calculated based on the known attachment point coordinates of each waveguide. In order to calculate the time of flight along each wavepath, the values of the velocity map were evaluated along the wavepath using linear interpolation. For the linear interpolation, both the surface wavepath and the backwall echo wavepath were sectioned with a spacing of \(dS = 0.001\) mm, resulting in \(n\) and \(m\) sections respectively. The time of flight for each wavepath was therefore:

$$\begin{aligned} T^{surface}_{ij}&=\sum _{h=1}^{n}{\frac{1}{c_h}dS}\end{aligned}$$
(18)
$$\begin{aligned} T^{backwall}_{ij}&=\sum _{h=1}^{m}{\frac{1}{c_h}dS}, \end{aligned}$$
(19)

where \(c_h\) is the interpolated velocity at the differential line element number \(h\). It is acknowledged that this straight-ray model ignores (a) refraction and (b) diffraction, but these were considered negligible due to (a) the low contrast and (b) the smoothly varying nature of the velocity field.
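The following sketch illustrates Eqs. 18 and 19: the straight wavepath is sampled with a spacing of \(dS\), the velocity map is interpolated at the sample points and the slowness is summed. The grid extents and the example waveguide positions are assumptions made for the illustration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Sketch of Eqs. (18), (19): time of flight accumulated as sum(dS / c_h)
# along straight ray segments. Grid definitions are illustrative assumptions.
xs = np.linspace(0.0, 57.0e-3, 115)           # m, horizontal coordinates of the map
ys = np.linspace(0.0, 38.0e-3, 77)            # m, vertical coordinates (y = 0 is the backwall)
c_map = np.full((xs.size, ys.size), 3250.0)   # velocity map from the temperature map (placeholder)
c_interp = RegularGridInterpolator((xs, ys), c_map)

def time_of_flight(points, dS=1.0e-6):
    """Integrate slowness along a piecewise-straight path given by its corner points."""
    tof = 0.0
    for (xa, ya), (xb, yb) in zip(points[:-1], points[1:]):
        length = np.hypot(xb - xa, yb - ya)
        n = max(int(np.ceil(length / dS)), 1)
        s = (np.arange(n) + 0.5) / n                       # midpoints of the n sections
        samples = np.column_stack((xa + s * (xb - xa), ya + s * (yb - ya)))
        tof += np.sum((length / n) / c_interp(samples))    # Eq. (18) / (19)
    return tof

# Example: surface and backwall echo paths between waveguides at x_i and x_j (y = L is the top surface).
x_i, x_j, L = 0.018, 0.042, 38.0e-3
T_backwall = time_of_flight([(x_i, L), (0.5 * (x_i + x_j), 0.0), (x_j, L)])
T_surface  = time_of_flight([(x_i, L), (x_j, L)])
```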

Once the time of flight values have been calculated, the reconstruction algorithms can be applied as described in Sects. 5.1.1 and 3. The simulated field is shown in Fig. 7a and the reconstructed images are shown in Fig. 7b, c. For better comparability the distribution along the backwall of the sample (\(y=0\) mm) is shown in Fig. 7d.

The results show that the Assumed Distribution method estimates the temperature of the hotspot to within \(2\!\!-\!\!3~{^\circ }\)C and the backwall temperature to within \(5~{^\circ }\mathrm{C}\) elsewhere, while the Kaczmarz algorithm has an offset error of \(20~{^\circ }\)C.

5 Reconstruction from Experimental Data

5.1 Signal Processing

The ultrasonic sensor array, the cylindrical heating element and the test piece assembly shown in Fig. 1 were used to capture waveforms in order to evaluate the methods described in this paper experimentally as well. The signal acquisition for these measurements has to be very fast as the transient temperature distribution is continuously changing. For signal generation and data acquisition an M2M MultiX LF fully parallel array controller (M2M S.A., Les Ulis, France) was used, which is able to capture the 380 waveforms in a fraction of a second. As an approximation to the ideal toneburst, a five cycle square wave was used as the transmitted signal. The repetition rate of the measurements was 0.5 kHz and each saved waveform was calculated as the average of 16 measured waveforms.

The authors of [4] describe the behaviour of the waveguides used in this paper assuming an ideal transmitted toneburst and conclude that the signal to noise ratio of the sensor is about 30 dB, as the excitation of undesirable modes in the waveguide cannot be completely avoided. Since the noise caused by the undesirable modes is coherent, it cannot be removed by averaging. Another limitation of our setup is the signal generator of the array controller: as the transmitted toneburst is approximated by a 5 cycle square wave, its frequency spectrum is less ideal, which results in unwanted frequency components in the signal. These phenomena can be observed in Fig. 8, which shows a sample waveform measured with our setup.

Fig. 8
figure 8

A sample waveform recorded at room temperature with the array described in Sect. 2.2. The arrival of the surface skimming wave, first backwall echo and second backwall echo are clearly visible

Three different toneburst packets are clearly identifiable in Fig. 8: the arrivals of the surface skimming wave, the first backwall echo and the second backwall echo. A lower frequency tail close to the surface skimming wave, caused by the imperfect transmitted toneburst, is also present, followed by coherent noise between the wave packets, which is explained by the dispersion in the waveguides as previously described. These phenomena cannot be avoided using the current array controller; their effect can only be reduced by band-pass filtering as described in Sect. 5.1.1. Ultimately, however, the filters cannot eliminate all of the unwanted components, so they contribute to what is treated as coherent noise in the waveforms and are evaluated as such in Sect. 5.2. The time of flight data required for the reconstruction are therefore extracted from the necessarily noisy waveforms using the signal processing tools described in this section.

5.1.1 Calculation of Time of Flights

The fundamental frequency of the transmitted toneburst is \(2\) MHz as described in Sect. 2.2, therefore a band-pass filter with cut-off frequencies at 1.2 and 2.8 MHz is first applied to the signal. Once the signal has been filtered, it is cross-correlated with an ideal noise-free toneburst. The peak times of the resulting cross-correlation function are then interpreted as the arrival times of each wave packet.
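A minimal sketch of this processing chain is shown below, assuming a sampling frequency and using standard SciPy filtering and correlation routines; the envelope-based peak picking is our own illustrative choice rather than a prescription of the method. Any constant offset introduced by the correlation convention cancels when time differences such as Eq. 20 are formed.

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate, hilbert, find_peaks

# Sketch of the arrival-time extraction: band-pass filter (1.2-2.8 MHz),
# cross-correlate with an ideal noise-free toneburst, take the correlation
# peak times. The sampling frequency is an assumption for illustration.
fs = 50.0e6                                   # Hz, sampling frequency (assumed)
fc, n_cycles = 2.0e6, 5                       # toneburst centre frequency and length

# Ideal 5-cycle Hann-windowed reference toneburst.
t_ref = np.arange(int(n_cycles / fc * fs)) / fs
reference = np.hanning(t_ref.size) * np.sin(2 * np.pi * fc * t_ref)

def arrival_times(waveform, n_peaks=3):
    """Return the n_peaks strongest cross-correlation peak times (seconds)."""
    b, a = butter(4, [1.2e6, 2.8e6], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, waveform)
    xcorr = correlate(filtered, reference, mode="same")
    env = np.abs(hilbert(xcorr))                          # correlation envelope
    # separate the wavepackets: peaks at least one toneburst length apart
    peaks, _ = find_peaks(env, distance=reference.size)
    strongest = peaks[np.argsort(env[peaks])[-n_peaks:]]
    return np.sort(strongest) / fs
```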

The goal of this paper is to assess the spatial distribution of the propagation velocity within the material of the test piece; therefore the time of flight of the backwall echo waves has to be obtained as accurately as possible. For this purpose the first backwall echo is considered. The measured peak times, however, also include the time needed to propagate through the waveguides; this term needs to be subtracted in order to obtain the times of flight within the material of the test piece only.

For this purpose the arrival time of the surface skimming wave is subtracted from the arrival time of the first backwall echo and this difference is used as the input for the reconstruction. This formulation of the problem eliminates the times of flight within the waveguides, but requires additional assumptions to be made about the sensor assembly.

$$\begin{aligned} T^{SBW}_{ij}=T^{backwall}_{ij}-T^{surface}_{ij}, \end{aligned}$$
(20)

where \(T^{surface}_{ij}\) is the measured arrival time of the surface skimming wave from waveguide \(i\) to \(j\) and \(T^{SBW}_{ij}\) is the time difference between the first backwall echo wave and the surface wave between waveguides \(i\) and \(j\). In order to calculate the average propagation velocity over the backwall echo path based on \(T^{SBW}_{ij}\), it is necessary to obtain the propagation velocity of the surface skimming waves.

In the case of an isotropic and homogeneous propagation velocity distribution (and so a homogeneous temperature distribution) the average velocities of the surface wave and the backwall echo wave are equal, therefore the calibration measurements can be carried out without difficulty.

Common degradation mechanisms do not affect the surface wave, therefore the velocity of the surface wave is straightforward to track, as it is only influenced by the surface temperature, which can be measured externally (e.g. using thermocouples). In the case of the simulated heat distributions the temperature of the surface was uniform to within \(\pm 1.5~{^\circ }\)C, therefore all the surface waves are assumed to have the same propagation velocity. This velocity is calculated using the following equation:

$$\begin{aligned} \widehat{c}_{surface}=\mathrm{MED}\left(\frac{d_{jk}}{T_{ij}^{surface}-T_{ik}^{surface}}\right), \end{aligned}$$
(21)

where MED is the median function and \(d_{jk}\) is the separation of waveguides \(j\) and \(k\). As our calculation involves 380 waveforms per measurement, the median is used instead of the mean in order to prevent the noisier outlier waveforms from impairing the precision of the calculation.

The obtained median surface velocity can now be used to calculate the average propagation velocity along each backwall echo wave path:

$$\begin{aligned} c^{S}_{ij}=\frac{2\sqrt{\frac{d_{ij}^2}{4}+L^2}}{T^{SBW}_{ij}+\frac{d_{ij}}{\widehat{c}_{surface}}}, \end{aligned}$$
(22)

where \(c^{S}_{ij}\) denotes the calculated average propagation velocity over the backwall echo path from waveguide \(i\) to \(j\).

In order to further decrease the variability caused by the differences between individual waveguides, the calculated high temperature propagation velocities are corrected based on the ambient propagation velocities:

$$\begin{aligned} c^{S,corr}_{ij}&=c^{S}_{ij}-c^{S,ambient}_{ij}+\overline{c}^{S,ambient}_{ij}, \end{aligned}$$
(23)

where \(c^{S,ambient}_{ij}\) denotes the backwall velocities evaluated using \(T^{SBW}_{ij}\) at room temperature and \(\overline{c}^{S,ambient}_{ij}\) is the arithmetic mean of all \(c^{S,ambient}_{ij}\) values. This correction is based on the reasonable assumption that the average velocity measured at room temperature is precise and the variations come from the specific waveguide combinations (e.g. coupling conditions, waveguide imperfections, differences in the piezoelectric elements, and so on).
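The sketch below gathers Eqs. 21–23 into one processing step, assuming the arrival time data are available as dictionaries keyed by waveguide index pairs; the plate thickness is again an assumed value, and absolute values are used so that the orientation of each pair does not matter.

```python
import numpy as np

# Sketch of Eqs. (21)-(23): median surface velocity, path-average backwall
# velocity and the ambient correction. Geometry values are assumptions.
pitch, L = 3.0e-3, 38.0e-3          # waveguide pitch and plate thickness (thickness assumed)

def surface_velocity(T_surface):
    """Eq. (21): median over pairs (i, j) and (i, k) sharing the transmitter i."""
    estimates = []
    for (i, j), t_ij in T_surface.items():
        for (i2, k), t_ik in T_surface.items():
            if i2 == i and k != j:
                d_jk = abs(j - k) * pitch
                estimates.append(d_jk / abs(t_ij - t_ik))
    return np.median(estimates)

def backwall_velocities(T_SBW, c_surf):
    """Eq. (22): path-average velocity over each backwall echo path."""
    c = {}
    for (i, j), t_sbw in T_SBW.items():
        d_ij = abs(i - j) * pitch
        c[(i, j)] = 2 * np.sqrt(d_ij**2 / 4 + L**2) / (t_sbw + d_ij / c_surf)
    return c

def ambient_correction(c_hot, c_ambient):
    """Eq. (23): remove per-pair bias using a room temperature reference set."""
    c_amb_mean = np.mean(list(c_ambient.values()))
    return {pair: c_hot[pair] - c_ambient[pair] + c_amb_mean for pair in c_hot}
```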

The benefit of extracting the times of flight of the backwall echo waves using the surface wave is not immediately obvious, since a much more straightforward approach exists. The alternative would be to use pulse-echo waves (waveforms produced by sending and receiving with the same waveguide), which would allow us to extract the times of flight within the waveguides directly and subtract this value from \(T_{ij}^{backwall}\) in order to calculate the times of flight of the backwall echo waves. Pulse-echo waves are indeed recorded as part of a full matrix capture; however, it is practically impossible to carry out pulse-echo measurements on both the sending and receiving waveguide at the same time as the actual pitch-catch measurement, which means that the temperature of the sample and the waveguides will have changed between measurements. In comparison, the arrival of the surface wave can be extracted from the very same waveform as the backwall echo wave; it is therefore certain that all of the waveguide-related variabilities are cancelled out, and so we chose to use the surface wave arrival times as the reference for our signal processing.

5.2 Experimental Measurements

The experimental measurements described in this section are expected to differ from the reconstruction of a simulated temperature distribution due to the noise that experimental measurements introduce into the dataset. A measurement was therefore carried out to evaluate the variability introduced by the experimental setup and the processing methods in use.

In order to evaluate the variability of the sensor assembly, measurements were carried out at room temperature. Altogether 60 datasets were acquired 12 s apart, resulting in \(60\cdot 380=22{,}800\) waveforms in 12 min. The results are shown in Fig. 9. The maximum of the calculated standard deviation map is \(0.23\) m/s, which is \(0.007~\%\) of the propagation velocity (obtained while sending with waveguide number 19 and receiving with number 15, as shown in Fig. 9). Based on the calibrated temperature–propagation velocity relation this yields a worst-case variability of \(0.45~{^\circ }\)C over the wavepath for this specific waveguide combination.
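For reference, the variability map is simply the per-pair standard deviation over the repeated acquisitions, as in the sketch below; the array shape and the conversion via the calibration slope are assumptions of the example.

```python
import numpy as np

def variability_map(velocities, k1=-0.4981):
    """Per-pair standard deviation over repeated room temperature acquisitions.

    velocities: array of shape (n_repeats, n_pairs), e.g. (60, 380), assumed
    to contain the path-average velocities of Eq. (22).
    """
    std_map = np.std(velocities, axis=0)          # m/s, one value per waveguide pair
    worst = np.max(std_map)
    print(f"worst-case std: {worst:.2f} m/s "
          f"(= {abs(worst / k1):.2f} degC via the calibration slope)")
    return std_map
```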

Fig. 9
figure 9

Standard deviation of the propagation velocities calculated for each waveguide pair measured at room temperature

5.2.1 Evaluation of Reconstruction Methods with Experimental Measurement Data

In the case of the experimental measurements the exact temperature distribution within the test piece is unknown. The temperature of the test piece was therefore monitored using five thermocouples while heating the assembly. These were attached by welding at the locations shown in Fig. 3.

The measurements carried out with the assembly were evaluated using the Randomized Kaczmarz algorithm and the Assumed Distribution method defined in Sect. 3.2 and were compared to the measurements carried out with the thermocouples.

Sixty datasets were acquired while the test piece was being heated. The reconstructed images at the highest temperatures are shown in Fig. 10a, b. In order to demonstrate the importance of the position of the heat source, a second measurement was carried out with the heating element repositioned by \(10\) mm. The reconstructed images from the measurements with the repositioned heat source are shown in Fig. 11a, b (again at the highest measured temperature).

The figures described above are static snapshots at a given time. The evolution of the temperatures in time for the centred and repositioned cases is shown in Fig. 12a, b. These figures show the temperature of the hotspot measured by the thermocouples and reconstructed with the algorithms described in this paper for all 60 datasets that were acquired.

Fig. 10
figure 10

Reconstructed temperature distribution estimated from times of flight calculated from an experimental measurement using the Randomized Kaczmarz algorithm (a) and using the Assumed Distribution method (b), 591 s after the start of heating

Fig. 11
figure 11

Reconstructed temperature distribution estimated from times of flight calculated from a measurement using the Randomized Kaczmarz algorithm (a) and using the Assumed Distribution method (b) after repositioning the cylindrical heating element. The measurement was carried out 590 s after the start of heating

Fig. 12
figure 12

Evolution of the temperature at the hottest point of the material evaluated with different methods. The continuous line shows the temperature measurements carried out using the thermocouples, the black dashed line shows the results of the Assumed Distribution method and the gray dashed line shows the results of the Kaczmarz method. Image a shows the measurement where the heating element is attached in the middle of the sample, while image b shows the measurement where the heating element is attached at an offset of \(10\) mm from the middle of the array

6 Discussion

It is clear that the Assumed Distribution method presented in Sect. 3.2 provides a more accurate reconstruction in the case of simulated data compared to the Kaczmarz algorithm. In simulations the Assumed Distribution method estimates the temperature of the hotspot to within \(2\!\!-\!\!3~{^\circ }\)C, while the Kaczmarz algorithm provides a less accurate estimation (the reconstructed hotspot had a \(20~{^\circ }\)C offset error).

The reconstructed images based on measured data are similar; however, the inconsistent, noisy data causes the Kaczmarz algorithm to perform even less accurately than in the simulated case. It still provides a rough estimate of the propagation velocity distribution within the material, and in this case the Kaczmarz algorithm is able to estimate the temperature of the hotspot with an accuracy of the order of \(\pm 30~{^\circ }\)C. The estimation accuracy of the Assumed Distribution method, however, is of the order of \(\pm 5~{^\circ }\)C, therefore outperforming the Kaczmarz algorithm. As shown in Fig. 12a, b, the relative error of each method stays consistent as the temperature of the hotspot increases.

The images reconstructed by the Kaczmarz algorithm indicate that the primary source of variability is the lack of vertical resolution, which is caused by the limited field of view of the ultrasonic sensor array. This geometrical limitation cannot be overcome by simply extending the array: the longer the waves propagate within the material, the more they are attenuated, which in turn increases the variability of the extracted time of flight data (due to the loss of signal to noise ratio).

Compared to this shortcoming of the Kaczmarz algorithm, our initial assumption of straight wavepaths introduces negligible errors: the biggest ultrasonic velocity change is of the order of \(2~\%\) (which is equivalent to a void fraction of about \(3.5~\%\) based on the Chatterjee model), therefore the error introduced by ignoring ray bending is insignificant compared to the limitations of the geometry.

The Assumed Distribution method circumvents the problem of deducing vertical resolution from noisy data by assuming the vertical temperature distribution, and therefore the ultrasonic velocity distribution. This approach has been shown to be more effective; however, the assumptions made are specific to the phenomenon of diffusion, which can be described by an exponential decay from the source. This is a good model for heat transfer and the diffusion of heat into the component. Hydrogen attack depends on the diffusion of hydrogen into the steel, and the Assumed Distribution method is therefore also a likely candidate for describing the estimated damage due to the reaction of the diffused hydrogen with the carbon in the steel, provided the damage is linearly related to the amount of hydrogen. The authors of this paper are not aware of any study investigating this; the discussions of hydrogen attack in [10, 28, 29] only deal with the effect of a step function in the damage. Based on the governing rate equations and physical origins, however, there are strong analogies between heat transfer by conduction and mass transfer by diffusion [30], therefore the assumptions required for hydrogen attack are likely to be similar to the ones in this study.

There are other limiting factors when monitoring material degradation, namely possible scattering caused by the voids. However, the size of the voids is expected to be orders of magnitude smaller than the wavelength (as described in Sect. 2.1.1), therefore this phenomenon is likely to be negligible.

Another issue is the potential thickness loss of the specimen: in harsh environments corrosion could decrease the wall thickness, which in turn would affect the arrival times of the backwall echo waves. The key difference between the two processes is that material degradation decreases the ultrasonic velocity and therefore increases the arrival times, whereas thickness loss decreases the backwall echo arrival times. It is therefore possible to differentiate between thickness loss and material degradation if they occur at different times; if they occur simultaneously, however, they can compensate for each other. In this case the thickness measurement would have to be carried out using an alternative approach, or this may have to be accepted as a limitation of the method.

When monitoring material degradation, temperature variations can also corrupt the measured data; however, the temperature of the specimen can be monitored with thermocouples and can therefore be compensated for. In this case the temperature distribution must be constant across the pipe wall.

Another difference between temperature-induced and degradation-induced ultrasonic velocity changes is the arrival time of the surface wave. The measurements presented in this paper involved creating a large temperature gradient within the specimen by applying heat, which is transferred quickly within the material to the surfaces, and so the extraction of the arrival time of the surface wave required additional assumptions to be made. In the case of material degradation, however, the surface wave should not be affected, therefore the arrival times can potentially be extracted more precisely.

7 Conclusion

The feasibility of using permanently installed ultrasonic sensors for monitoring high temperature degradation mechanisms was investigated in this paper. A non-uniform ultrasonic velocity distribution, which is the expected effect of hydrogen attack and similar degradation mechanisms, was created by applying unsteady heating to the specimen. This temperature map was used to evaluate the feasibility of reconstructing the propagation velocity map within the material in order to monitor material degradation. The temperature range investigated in this study (\(20\!-\!110~{^\circ }\)C) is equivalent to a void fraction of \(0\!-\!3.5\%\) of hydrogen attack.

Based on the simulated and experimental results, the equipment and methods used are precise enough to measure local temperature changes to within approximately \(\pm 30~{^\circ }\)C using the Kaczmarz (ART) algorithm and \(\pm 5~{^\circ }\)C using the Assumed Distribution method presented in this paper, with a resolution of \(1.5\) by \(1.5\) mm (which is the wavelength of the signal within the material). These values are equivalent to a local ultrasonic propagation velocity change of \(\pm 15\) and \(\pm 2.5~{\mathrm{m}}/{\mathrm{s}}\) respectively, which in turn is equivalent to a local void fraction of \(0.9\) and \(0.15~\%\) based on our calculations. These initial results thus show that the techniques may be useful for monitoring the progress of hydrogen attack; an experimental testing rig is therefore being built in order to induce hydrogen attack experimentally and monitor its progress.