Introduction

Independent of the approach selected, whether displacement- or force-based, most seismic designs of civil structures are performed using variations of the response spectrum method, in which the ground motion is represented by a design response spectrum characterizing the seismic hazard at the site. The design response spectrum (of pseudo-accelerations or relative displacements) is used to estimate the force or deformation demand imposed on the structure; the main goal of this technique is to translate the ground motion into forces acting on the building. Nevertheless, for some special tasks or structures, dynamic time history analyses are required. In the standard American Society of Civil Engineers (ASCE) 7–10, for example, if the structure is classified as irregular, dynamic analyses (either modal response spectrum or time history) are required to verify the design. In the nuclear industry, dynamic time history analyses are required for the design, verification, and seismic assessment of critical buildings; in addition, they are needed for the development of floor (or in-structure) response spectra and free-field ground response spectra. When dynamic time history analyses are to be performed, the seismic input must be defined as a time history of accelerations. To comply with code requirements, the acceleration series used should be compatible with the design spectrum, i.e., the response spectrum of the acceleration time history should, on average, match the design spectrum.

To select these input records, one can look at historic records with their amplitudes scaled by a factor to optimize the fitting over the design spectrum (e.g., Bommer and Acevedo 2004). Alternatively, there are different methodologies available to generate ‘synthetic’ spectrum-compatible records. When using the first option, i.e., amplitude scaling, a larger number of records are required to obtain a reliable average of the system response due to the natural scatter of the records. On the other hand, if spectrum-compatible records are used, the number of analyses required to obtain a reliable estimate of the response is substantially reduced (e.g., Watson-Lamprey and Abrahamson 2006; Heo et al. 2011; Hancock et al. 2008).

Several methodologies have been proposed for the generation of spectrum-compatible records, such as the adjustment of the power spectrum of a random process or the manipulation of historic records in the time or frequency domain. The criteria for the generation and evaluation of such records specified in standard seismic provisions (e.g., ASCE 7; American Society of Civil Engineers 2010) are rather lenient compared with the requirements in the nuclear regulatory guide RG 1.208 (US-NRC 2007). For example, RG 1.208 explicitly prohibits the use of synthetic motions that are not based on seed recorded time histories, so that only modified recorded ground motions can be used for site response analysis.

The available methodologies for obtaining spectrum-compatible accelerograms based on the modification of historic records can be classified into three groups: (1) those based on matching in the frequency domain by manipulation of the Fourier spectrum (e.g., Tsai 1972), (2) those grounded on wavelet adjustments to the recorded accelerograms at specific times (e.g., Lilhanand and Tseng 1988; Abrahamson 1992; Hancock et al. 2006; Al Atik and Abrahamson 2010), and (3) those centered on the manipulation of the wavelet coefficients obtained via the continuous wavelet transform (e.g., Mukherjee and Gupta 2002; Suarez and Montejo 2003, 2005). Although an acceptable level of matching can be obtained by means of any of the available methodologies, the use of spectrum-compatible records is a matter of discussion in the earthquake engineering community, as the characteristics of the final compatible record may differ largely from those observed in real records (e.g., Naeim and Lew 1995; Bommer and Ruggeri 2002; Bommer and Acevedo 2004). Although these arguments may be valid, the discussion of this issue is beyond the scope of the paper. This article revisits the continuous wavelet transform approach to generate spectrum-compatible records from the modification of actual recorded seismic events and proposes an alternative computational algorithm to reduce its computational cost.

Methods

CWT-based methodology for the generation of spectrum-compatible records

For completeness, this section briefly describes the methodology to manipulate historic earthquake records to obtain spectrum-compatible accelerograms based on the continuous wavelet transform (CWT). A comprehensive presentation of the CWT theory behind the methodology and the properties of the wavelet function used are available elsewhere (Suarez and Montejo 2005).

The CWT of a signal f(t) is defined as the integral over all time of the signal multiplied by scaled and shifted versions of a wavelet function Ψ, as given by Equation 1:

$$C(s,p) = \int_{-\infty}^{+\infty} f(t)\,\Psi_{s,p}^{*}(t)\,dt = \int_{-\infty}^{+\infty} f(t)\,\frac{1}{\sqrt{s}}\,\Psi^{*}\!\left(\frac{t-p}{s}\right)dt$$
(1)

The parameters s and p are used to scale and shift the wavelet, respectively. The asterisk * indicates complex conjugation, and the normalizing factor 1/√s ensures that the energy is the same for all values of s. The result of the transform is a matrix of wavelet coefficients C(s,p) that contains information about the signal at the scale s and time position p; that is, the CWT can be viewed as a two-dimensional transform that maps an original time signal f(t) into a time-scale domain. Since the scale s can be related to frequency, the CWT is occasionally used in earthquake engineering to obtain simultaneous time-frequency representations of earthquake records (e.g., Montejo and Kowalsky 2008). Once the wavelet coefficients are calculated, the signal can be reconstructed using Equation 2:

$$f(t) = \frac{1}{K_{\psi}} \int_{s=0}^{+\infty} \int_{p=-\infty}^{+\infty} C(s,p)\,\Psi_{s,p}(t)\,dp\,\frac{ds}{s^{2}} = \frac{1}{K_{\psi}} \int_{0}^{+\infty} D(s,t)\,ds$$
(2)

where $K_{\psi}$ is a constant that depends on the wavelet function selected for the analysis and the functions D(s,t) are

$$D(s,t) = \int_{p=-\infty}^{+\infty} \frac{1}{s^{2}}\,C(s,p)\,\Psi_{s,p}(t)\,dp.$$
(3)

These are referred to as the detail functions; each has a dominant frequency that depends on the type of wavelet. In practice, a set of n discrete values of the continuous scale s is used.

The selection of an appropriate wavelet function is crucial for the effective implementation of a given application. For strong motion data synthesis and analysis, the impulse response wavelet has been successfully used in the past. It is defined as

$$\Psi(t) = e^{-\zeta\Omega|t|}\sin(\Omega t),$$
(4)

where ζ and Ω are the parameters that define the shape and central frequency of the wavelet, respectively. For the applications presented here, values of ζ = 0.05 and Ω = π are used. The mathematical properties and advantages of this wavelet in the analysis of earthquake records are discussed in Suarez and Montejo (2005) and Hancock et al. (2006).
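As an illustration, the wavelet of Equation 4 can be sampled directly. The sketch below (in Python, not the authors' Matlab code) evaluates Ψ(t) for the values ζ = 0.05 and Ω = π used in this work; the function is an exponentially decaying sine and, being odd, is antisymmetric about t = 0.

```python
import numpy as np

def impulse_response_wavelet(t, zeta=0.05, omega=np.pi):
    """Impulse response wavelet: exponentially decaying sine (Equation 4)."""
    return np.exp(-zeta * omega * np.abs(t)) * np.sin(omega * t)

# Sample the mother wavelet on a symmetric time grid
t = np.linspace(-20.0, 20.0, 2001)
psi = impulse_response_wavelet(t)

# The wavelet is an odd function: zero at t = 0 and antisymmetric
print(psi[1000])                     # value at t = 0 -> 0.0
print(np.allclose(psi, -psi[::-1]))  # antisymmetry -> True
```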

To obtain spectrum-compatible records one may proceed as follows:

  1. Perform the CWT, using Equations 1 and 4, to obtain the wavelet coefficients C(s,p) at selected values of s. The values of s are defined based on the periods or frequencies where a match with the target spectrum is desired. For the impulse response wavelet, the relation between s and the predominant frequency f (in hertz) and period T is given by Equation 5:

     $$f = \frac{\Omega}{2\pi}\,\frac{1}{s}, \qquad T = \frac{2\pi}{\Omega}\,s.$$
     (5)

In the applications presented in this work, a total of n = 100 frequency values uniformly spaced over the logarithmic frequency scale from 0.1 to 50 Hz are used, as required in Appendix F of the Nuclear Regulatory Guide RG 1.208 (US-NRC 2007). Depending on the application, the user may decide to modify these values.
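For instance, the n = 100 log-spaced analysis frequencies and the corresponding scales can be generated as follows; this is a Python sketch, with the relation s = Ω/(2πf) obtained by inverting Equation 5.

```python
import numpy as np

omega = np.pi   # wavelet central frequency parameter (Omega = pi in this work)
n = 100         # number of analysis frequencies (RG 1.208, Appendix F)

# 100 frequencies uniformly spaced on a log scale from 0.1 to 50 Hz
freqs = np.logspace(np.log10(0.1), np.log10(50.0), n)

# Equation 5 inverted: s = Omega / (2*pi*f)
scales = omega / (2.0 * np.pi * freqs)

print(freqs[0], freqs[-1])   # 0.1 ... 50.0 Hz
print(scales[0])             # s for f = 0.1 Hz -> 5.0
```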

  2. Once the wavelet coefficients C(s,p) are calculated, the detail functions are constructed using Equation 3.

  3. Calculate the ratios between the target spectrum $Sa_{T}(f_j)$ and the response spectrum of the historical record $Sa_{R}(f_j)$ at the frequencies defined in step 1:

     $$R(f_j) = \frac{Sa_{T}(f_j)}{Sa_{R}(f_j)}; \quad j = 1, 2, \ldots, n.$$
     (6)
  4. Multiply each detail function $D(s_j,t)$ by the corresponding spectral ratio $R(f_j)$ and reconstruct the signal with Equation 2.

  5. Calculate the response spectrum of the reconstructed signal $Sa_{R}(f_j)$, compare it with the target spectrum, and repeat steps 3, 4, and 5 until the desired level of matching is attained or a maximum number of iterations is reached.
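The decompose-scale-reconstruct mechanics behind steps 1 to 5 can be sketched as follows. This is a simplified Python illustration, not the authors' Matlab implementation: the response-spectrum routine and the iteration over spectral ratios are omitted, the constant 1/K_ψ is absorbed into an overall amplitude scaling, and the convolutions are done directly in the time domain (the slower variant that the next subsection accelerates). The check at the end only verifies a structural property: scaling every detail function by a ratio scales the reconstruction by the same ratio.

```python
import numpy as np

def wavelet(t, zeta=0.05, omega=np.pi):
    # Impulse response wavelet (Equation 4)
    return np.exp(-zeta * omega * np.abs(t)) * np.sin(omega * t)

def cwt_details(record, dt, scales, support=8.0):
    """Steps 1-2: wavelet coefficients and detail functions by direct
    time-domain convolution (Equations 1 and 3)."""
    details = []
    for s in scales:
        # Scaled, normalized wavelet sampled over a finite support
        t = np.arange(-support * s, support * s + dt, dt)
        psi = wavelet(t / s) / np.sqrt(s)
        coeffs = np.convolve(record, psi, mode='same') * dt            # Eq. 1
        detail = np.convolve(coeffs, psi, mode='same') * dt / s**2     # Eq. 3
        details.append(detail)
    return np.array(details)

def reconstruct(details, scales):
    """Step 4: integrate the (scaled) detail functions over s (Equation 2,
    with 1/K_psi folded into the amplitude)."""
    ds = np.gradient(scales)                    # integration weights over s
    return np.sum(details * ds[:, None], axis=0)

# --- illustrative use with a toy 'record' -------------------------------
dt = 0.02
time = np.arange(0.0, 20.0, dt)                 # 1,000-point toy signal
record = np.sin(2 * np.pi * 1.0 * time) * np.exp(-0.2 * time)

scales = np.logspace(np.log10(0.1), np.log10(1.0), 10)
details = cwt_details(record, dt, scales)

# Step 4 with a uniform illustrative ratio R = 2 in every frequency band:
rec1 = reconstruct(details, scales)
rec2 = reconstruct(2.0 * details, scales)
print(np.allclose(rec2, 2.0 * rec1))            # linearity -> True
```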

Improved efficiency using fast convolution via FFT

The CWT approach previously described for generating spectrum-compatible records has been widely used in the past, offering stable solutions and good matching with different target spectra (e.g., Velez 2007; Priestley et al. 2007; Bahar and Taherpour 2008; Linzell and Nadakuditi 2011; Montejo et al. 2012). However, a common complaint is the time the algorithm takes to run, especially when lengthy records are used as input. To improve the processing time, a new algorithm is proposed in this paper. It computes the wavelet decomposition, Equation 1, and constructs the detail functions, Equation 3, using a fast convolution implementation by means of the fast Fourier transform (FFT).

By looking at Equation 1, it can be seen that the CWT for a fixed scale s is the convolution of the signal f(t) with the wavelet function scaled by s. The matrix of wavelet coefficients C(s,p) is formed by changing the values of s and repeating the convolution operation. The original implementation of the algorithm computed these convolutions in the time domain. However, according to the convolution theorem, convolution in the time domain is equivalent to multiplication in the frequency domain; that is, given two functions f and g in the time domain, their convolution can be calculated in the frequency domain using Equation 7:

$$f \otimes g = \mathcal{F}^{-1}\!\left\{\mathcal{F}\{f\} \cdot \mathcal{F}\{g\}\right\}$$
(7)

where ⊗ denotes convolution, • denotes the pointwise product, and $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the forward and inverse Fourier transform, respectively. Since we are dealing with discrete signals, transformations between the time and frequency domains call for the application of the discrete Fourier transform (DFT). If the DFT is implemented using the FFT algorithm, the convolution via the frequency domain can be significantly faster than directly convolving the two time domain signals. While the computational cost for direct time domain convolution of two N-point sequences is of the order $O(N^2)$, the cost for frequency domain convolution using the FFT algorithm is $O(N \log N)$. Considering the typical length of the signals used in this application and the number of convolution operations required by the algorithm (one for each value of s), this translates into large savings in the computational cost. In the new implementation, the CWT is executed using Equation 8:

$$C(s,p) = f(t) \otimes \Psi_{s,p} = \mathrm{IFFT}\!\left[\mathrm{FFT}\{f(t)\} \cdot \mathrm{FFT}\{\Psi_{s,p}\}\right]$$
(8)

where FFT and IFFT denote, respectively, the forward and inverse discrete Fourier transforms calculated with the FFT algorithm.

Note that in order to compute the pointwise product (•), the lengths of the FFT for the record and for the wavelet must be the same. To obtain a faster and more precise implementation, the FFT lengths of the wavelet and the record are set to the next power of 2 above the expected convolution length (2N − 1) (Smith 2002).
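A minimal sketch of this frequency-domain convolution (Equations 7 and 8), with zero-padding to the next power of 2 above the full convolution length, is shown below; it is an assumed Python rendering of the scheme, checked against NumPy's direct time-domain convolution.

```python
import numpy as np

def fft_convolve(f, g):
    """Linear convolution via the FFT (Equation 7), zero-padded to the
    next power of 2 above the full convolution length 2N - 1."""
    n_full = len(f) + len(g) - 1              # expected convolution length
    n_fft = 1 << (n_full - 1).bit_length()    # next power of 2
    F = np.fft.rfft(f, n_fft)                 # forward transforms, zero-padded
    G = np.fft.rfft(g, n_fft)
    out = np.fft.irfft(F * G, n_fft)          # pointwise product, then inverse
    return out[:n_full]                       # trim the padding

rng = np.random.default_rng(0)
a = rng.standard_normal(2000)   # e.g., a 10 s record sampled at 200 Hz
b = rng.standard_normal(2000)   # e.g., a sampled, scaled wavelet

print(np.allclose(fft_convolve(a, b), np.convolve(a, b)))  # True
```

Padding to at least 2N − 1 points is what turns the DFT's circular convolution into the required linear convolution; rounding up to a power of 2 keeps the FFT at its fastest.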

Finally, notice from Equation 3 that the detail functions can also be calculated using the FFT convolution between the wavelet coefficients for a fixed s and the wavelet function scaled by the same s value, as shown in Equation 9:

$$D(s,t) = \frac{1}{s^{2}}\,C(s,p) \otimes \Psi_{s,p}(t) = \frac{1}{s^{2}}\,\mathrm{IFFT}\!\left[\mathrm{FFT}\{C(s,p)\} \cdot \mathrm{FFT}\{\Psi_{s,p}\}\right].$$
(9)

Figure 1 shows the computational savings obtained using the proposed algorithm, assuming that the record is decomposed at 100 scale (frequency) values. It is seen that for a relatively short record of 2,000 data points (e.g., 10 s sampled at 200 Hz), the new implementation of the algorithm is estimated to run in about 42 s, while the old implementation takes approximately 37 min. As the length of the seed record increases, the differences become dramatically larger (note the logarithmic scale on the y-axis). The estimated real times in Figure 1 were computed on a laptop PC with an i7 processor at 2.90 GHz, 8 GB RAM, and a 64-bit OS.

Figure 1

Time savings using the proposed algorithm implementation.

Results and discussion

The new implementation of the algorithm is evaluated using a design spectrum typical of the US nuclear industry. The target spectrum is extracted from Section 5 of RG 1.208 (US-NRC 2007) and represents a typical uniform hazard response spectrum (UHRS) at the rock outcrop (Figure 2). In practice, synthetic records compatible with this spectrum need to be generated to perform site analyses and develop a site-specific design spectrum.

Figure 2

Target uniform hazard response spectra. Top: the target UHRS at the rock outcrop and the scaled response spectra for the three seed records. Bottom: the target UHRS and the response spectra of the resulting compatible records.

Three different seed records to be used as input for the algorithm were selected from the PEER NGA database (PEER 2013) based on the similarity of the response spectrum shape of the records with the shape of the target spectrum. Table 1 summarizes the main characteristics for each of the records selected. Since the selection is based on spectral shape similarities, the amplitude of the records is scaled to provide a best fit. The scaled response spectra for the three records are displayed and compared with the target spectrum in Figure 2 (top).

Table 1 Summary of the selected seed records

Figure 2 (bottom) shows the 5% damping response spectra of the resulting compatible records along with the target spectrum. The 90% and 130% design spectra are also displayed (dashed lines), as these are the spectral amplitude limits specified in RG 1.208. To prevent the computed spectra from falling below the target spectrum over large frequency windows, RG 1.208 also requires that no more than nine adjacent frequency points fall below the target spectrum. It is seen that the compatible records generated satisfy these requirements.

Figures 3, 4, and 5 display the time histories of acceleration, velocity, displacement, Arias intensity (AI), cumulative absolute velocity (CAV), and cumulative squared velocity (CSV) for the linearly scaled and compatible records generated. It is seen that the characteristics of the seed motions are, in general, retained in the compatible records.

Figure 3

Characteristics of the seed motions corresponding to the TCU089 1999 Chi-Chi record. Top to bottom: time histories of acceleration, velocity, displacement, AI, CAV, and CSV for the scaled (blue line) and compatible records (black line) corresponding to the TCU089 1999 Chi-Chi record.

Figure 4

Characteristics of the seed motions corresponding to the TCU072 1999 Chi-Chi record. Top to bottom: time histories of acceleration, velocity, displacement, AI, CAV, and CSV for the scaled (blue line) and compatible records (black line) corresponding to the TCU072 1999 Chi-Chi record.

Figure 5

Characteristics of the seed motions corresponding to the Auletta 1980 Irpinia record. Top to bottom: time histories of acceleration, velocity, displacement, AI, CAV, and CSV for the scaled (blue line) and compatible records (black line) corresponding to the Auletta 1980 Irpinia record.

Conclusions

An improved algorithm for faster generation of spectrum-compatible records was developed. The algorithm is based on the modification of the CWT coefficients of a historic record. The efficiency of the algorithm is improved by performing the required convolution operations in the frequency domain via fast Fourier transforms. It was shown that the algorithm can be used to obtain compatible records that satisfy current US NRC requirements. A Matlab implementation of the algorithm is available upon request from the corresponding author.