1 Introduction

Data generated by the accelerated adoption of modern computing paradigms has been growing exponentially. A prominent example of such paradigms is the Internet of Things (IoT), in which everything is envisioned to be connected to the Internet. One of the most promising technology transformations of IoT is the smart city. In such cities, an enormous number of connected sensors and devices continuously collect massive amounts of data about things such as city infrastructure, with the aim of analyzing these data and gaining insights on how to manage the city efficiently in terms of resources and services.

The adoption of the smart city paradigm will result in a massive increase in data volume (data collected from a large number of sensors) as well as in the number of data features, which increases data dimensionality. To extract precise and in-depth insights from such data, research communities have recently adopted advanced and efficient techniques, including multi-way data analysis.

The concept of multi-way data analysis was introduced by Tucker in 1964 as an extension of standard two-way data analysis to analyze multidimensional data, known as tensors [22]. It is often used when traditional two-way data analysis methods such as Non-negative Matrix Factorization (NMF), Principal Component Analysis (PCA) and Singular Value Decomposition (SVD) are not capable of capturing the underlying structures inherent in multi-way data [9]. In the realm of multi-way data, tensor decomposition methods such as Tucker and CANDECOMP/PARAFAC (CP) [22, 30] have been extensively studied and applied in various fields including signal processing [11], civil engineering [20], recommender systems [30], and time series analysis [10]. The CP decomposition has gained much popularity for analyzing multi-way data due to its ease of interpretation. For example, given a tensor \({\mathcal {X}} \in {\mathbb {R}} ^{I_1 \times \cdots \times I_N} \), where N is the tensor order, the CP method decomposes \({\mathcal {X}}\) into N loading matrices \(A^{(1)}, \ldots , A^{(N)}\), each of which represents one mode explicitly. In the Tucker method, by contrast, the modes can interact with each other, making it difficult to interpret the resultant matrices.

The CP decomposition approach often uses the Alternating Least Squares (ALS) method to find the solution for a given tensor. The ALS method follows a batch-mode training process which iteratively solves each component matrix by fixing all the other components, repeating the procedure until it converges [19]. However, ALS can lead to sensitive solutions [4, 12]. Moreover, in the domain of big data and IoT, such as smart cities, the ALS method raises many challenges in dealing with data that is continuously measured at high velocity from different sources/locations and dynamically changing over time. For instance, structural health monitoring (SHM) data can be represented in a three-way form as \(location \times feature \times time\), which represents a large number of vibration responses measured over time by many sensors attached to a structure at different locations. This type of data can be found in many other application domains [1, 5, 23, 37]. The iterative nature of the employed CP decomposition methods involves intensive computational processing in each iteration. A significant challenge arises in such algorithms (including ALS and its variations) when the input tensor is sparse and has N dimensions: as the dimensionality of the tensor increases, the calculations involved in the algorithm become computationally more expensive. Thus, incremental, parallel and distributed algorithms for CP decomposition become essential to achieving reasonable performance, especially in large applications and computing paradigms such as smart cities.

The efficient processing of the CP decomposition problem has been investigated with different hardware architectures and techniques, including MapReduce structures [17] and shared- and distributed-memory structures [18, 36]. Such approaches present algorithms that require altering the hardware architecture to enable parallel and fast execution of CP decomposition methods. The MapReduce and distributed computing approaches can also incur additional performance overhead from network data communication and transfer. Our goal is to devise a parallel and efficient CP decomposition method with minimal hardware changes to the operating environment and without incurring the additional performance overhead of new hardware architectures. Thus, to address the aforementioned problems, we propose an efficient solver method, called FP-CPD (Fast Parallel-CP Decomposition), for analyzing large-scale high-order data in parallel based on stochastic gradient descent. The scope of this paper is smart cities and, in particular, SHM of infrastructure such as bridges. The novelty of our proposed method is summarized in the following contributions:

  1.

    Parallel CP Decomposition. Our FP-CPD method is capable of efficiently learning large-scale tensors in parallel and updating \({\mathcal {X}}^{(t+1)}\) in one step.

  2.

    Empirical analysis on structural datasets. We conduct experimental analysis using laboratory-based and real-life datasets in the field of SHM. The experimental analysis shows that our method achieves more stable and faster tensor decomposition compared to other known online and offline methods.

The remainder of this paper is organized as follows. Section 2 introduces background knowledge and reviews the related work. Section 3 describes our novel FP-CPD algorithm for parallel CP decomposition based on the SGD algorithm augmented with the NAG method and a perturbation approach. Section 4 presents the motivation of this work. Section 5 evaluates the performance of FP-CPD on structural datasets and presents our experimental results on both laboratory-based and real-life datasets. The conclusion and discussion of future research work are presented in Sect. 6.

2 Background and related work

2.1 CP decomposition

Given a three-way tensor \({\mathcal {X}} \in \Re ^{I \times J \times K} \), CP decomposes \({\mathcal {X}}\) into three matrices \(A \in \Re ^{I \times R}\), \(B \in \Re ^{J \times R} \) and \( C \in \Re ^{K \times R}\), where R is the number of latent factors. It can be written as follows:

$$\begin{aligned} {\mathcal {X}} \approx \sum _{r=1}^{R} a_{r} \circ b_{r} \circ c_{r} \end{aligned}$$
(1)

where “\(\circ \)” denotes the vector outer product and \(a_{r}, b_{r} \) and \(c_{r}\) are the r-th columns of the component matrices \(A \in \Re ^{I \times R}\), \(B \in \Re ^{J \times R} \) and \( C \in \Re ^{K \times R}\). The main goal of CP decomposition is to minimize the sum of squared errors between the model and the given tensor \({\mathcal {X}}\). Equation 2 shows the loss function L that needs to be optimized:

$$\begin{aligned} L ({\mathcal {X}}, A, B, C) = \min _{A,B,C} \Vert {\mathcal {X}} - \sum _{r=1}^R \ a_{r} \circ b_{r} \circ c_{r} \Vert ^2_F, \end{aligned}$$
(2)

where \(\Vert {\mathcal {X}}\Vert ^2_F\) is the sum of squares of the entries of \({\mathcal {X}}\) and the subscript F denotes the Frobenius norm. The loss function L presented in Eq. 2 is a non-convex problem with many local minima since it aims to optimize the sum of squares of three matrices. Several algorithms have been proposed to solve CP decomposition [25, 31, 38]. Among these algorithms, ALS has been heavily employed; it repeatedly solves each component matrix by locking all other components until it converges [29]. The rationale of the least squares algorithm is to set to zero the partial derivative of the loss function with respect to the parameter we need to minimize. Algorithm 1 presents the detailed steps of ALS; a sketch of one ALS sweep is given below.

Algorithm 1 Alternating least squares (ALS) for CP decomposition
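To make the ALS sweep concrete, below is a minimal NumPy sketch of one sweep over a dense three-way tensor. It is an illustrative reconstruction, not the paper's implementation (which is in R); the function names are ours, and the Khatri–Rao ordering follows NumPy's row-major unfolding rather than the \((C \odot B)\) ordering common in the tensor literature.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding of a three-way tensor into a matrix (row-major)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product of two factor matrices."""
    R = U.shape[1]
    return (U[:, None, :] * V[None, :, :]).reshape(-1, R)

def als_sweep(X, A, B, C):
    """One ALS sweep: solve each factor in closed form with the others fixed."""
    A = unfold(X, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
    B = unfold(X, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
    C = unfold(X, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

Each solve is a linear least squares problem, which is what makes ALS a batch method: the sweep is simply repeated until the fit stops improving.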

Zhou et al. [42] suggest that ALS can be easily parallelized for matrix factorization methods, but that it is not scalable for large-scale data, especially when dealing with multi-way tensor data. Later, Zhou et al. [41] proposed a method called onlineCP to address the problem of online CP decomposition using the ALS algorithm. The method was able to incrementally update the temporal mode in multi-way data but failed on non-temporal modes [19], and it was not parallelized.

2.2 Stochastic gradient descent

The stochastic gradient descent (SGD) algorithm is a key tool for optimization problems. Here, the aim is to optimize a loss function L(x, w), where x is a data point drawn from a distribution \({\mathcal {D}}\) and w is a variable. The stochastic optimization problem can be defined as follows:

$$\begin{aligned} w = \underset{w}{\hbox {argmin}} \; {\mathbb {E}}[L(x,w)] \end{aligned}$$
(3)

The stochastic gradient descent method solves the problem defined in Eq. 3 by repeatedly updating w to minimize L(x, w). It starts with some initial value \(w^{(0)}\) and then repeatedly performs the update as follows:

$$\begin{aligned} w^{(t+1)}:= w^{(t)} + \eta \frac{\partial L}{\partial w } (x^{(t)},w^{(t)} ) \end{aligned}$$
(4)

where \(\eta \) is the learning rate and \(x^{(t)}\) is a random sample drawn from the given distribution \({\mathcal {D}}\). This method guarantees convergence of the loss function L to the global minimum when L is convex. However, it can be susceptible to the many local minima and saddle points that arise when the loss function is non-convex, in which case the problem becomes NP-hard. Note that the main bottleneck here is due to the existence of many saddle points, not the local minima [13]. This is because the gradient algorithm depends only on the gradient information, which may satisfy \(\frac{\partial L}{\partial w } = 0\) even at points that are not minima.
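As a minimal illustration of the update in Eq. 4, the sketch below runs the generic SGD loop. It is written with the conventional minus sign for minimization (the “+” in Eqs. 4 and 6 matches the sign convention of the derivatives in Eq. 5), and loss_grad is an assumed user-supplied function.

```python
import numpy as np

def sgd(samples, w0, loss_grad, eta=0.01, epochs=10, seed=0):
    """Generic SGD: visit samples in random order, step against the gradient."""
    rng = np.random.default_rng(seed)
    w = w0
    for _ in range(epochs):
        for idx in rng.permutation(len(samples)):
            w = w - eta * loss_grad(samples[idx], w)  # Eq. 4 with a minus sign
    return w
```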

Previous studies have used SGD for parallel matrix factorization. Gemulla et al. [14] proposed a new parallel method for matrix factorization using SGD and showed that it can handle large-scale data efficiently with fast convergence. Similarly, Chin et al. [8] proposed a fast parallel SGD method for matrix factorization in recommender systems. The method also applies SGD in shared-memory systems, but with careful consideration of the load balance of threads. Naiyang et al. [16] applied Nesterov’s optimal gradient method to SGD for non-negative matrix factorization, which accelerates the NMF process with less computational time. Similarly, Shuxin et al. [40] used an SGD algorithm for matrix factorization using Taylor expansion and Hessian information. They proposed a new asynchronous SGD algorithm to compensate for the delay resulting from Hessian computation.

Recently, SGD has attracted several researchers working on tensor decomposition. For instance, Ge et al. [13] proposed a perturbed SGD (PSGD) algorithm for orthogonal tensor optimization. They presented several theoretical analyses that ensure convergence; however, the method is not applicable to non-orthogonal tensors, and they did not address the problem of slow convergence. Similarly, Maehara et al. [26] proposed a new algorithm for CP decomposition based on a combination of SGD and ALS methods (SALS). The authors claimed the algorithm works well in terms of accuracy. Nevertheless, its theoretical properties have not been completely proven and the saddle point problem was not addressed. Rendle and Thieme [32] proposed a pairwise interaction tensor factorization method based on Bayesian personalized rank. The algorithm was designed to work only on three-way tensor data. To the best of our knowledge, this is the first work that applies a parallel SGD algorithm augmented with Nesterov’s optimal gradient and perturbation methods for fast parallel CP decomposition of multi-way tensor data.

3 Fast parallel CP decomposition (FP-CPD)

Given an \(N^{th}\)-order tensor \({\mathcal {X}} \in {\mathbb {R}}^{I_1 \times \dots \times I_N}\), we solve the CP decomposition by splitting the problem into N convex sub-problems, since its loss function L defined in Eq. 2 is a non-convex problem which may have many local minima. When distributing this solution, another challenge arises: the value of \(w^{(t)}\) must be globally updated before computing \(w^{(t+1)}\), where w represents A, B and C. However, the structure and the process of tensor decomposition allow us to address this challenge. For illustration purposes, we present our FP-CPD method based on three-way tensor data, though the same logic naturally extends to higher-order tensors.

Definition 1

Two training points \(x_1 = (i_1,j_1,k_1) \in {\mathcal {X}}\) and \(x_2 = (i_2,j_2,k_2) \in {\mathcal {X}}\) are interchangeable with respect to the loss function L defined in Eq. 2 if they do not share any indices, i.e., \(i_1\ne i_2, j_1 \ne j_2\) and \(k_1 \ne k_2\).

Based on Definition 1, we develop a new algorithm, called FP-CPD, to carry out the tensor decomposition process in parallel. The core idea of the FP-CPD algorithm is to run the CPD in parallel by processing all the interchangeable training points in one single step without affecting the final outcome of w. Our FP-CPD algorithm partitions the training tensor \({\mathcal {X}} \in \Re ^{I \times J \times K} \) into a set of potentially independent blocks \({\mathcal {X}}_1,\dots , {\mathcal {X}}_d\). Each block consists of interchangeable training points, which are identified by finding all the possible combinations of each dimension of the given tensor \({\mathcal {X}}\). To illustrate this process, we consider a three-order tensor \({\mathcal {X}} \in {\mathbb {R}}^{3 \times 3 \times 3}\) as shown in Fig. 1. This tensor is partitioned into d independent blocks which cover the entire given training data, \({\mathcal {D}} = \bigcup _{b=1}^{d} {\mathcal {X}}_b\), where \(d = \frac{I \times J \times K}{\min (I,J,K)}\). Each \({\mathcal {X}}_b\) has a parallelism parameter p, the number of tasks that can be run in parallel; in our three-way tensor example, each block holds \(p = 3\) interchangeable training points (see the sketch after Fig. 1).

Fig. 1 Independent blocks for \({\mathcal {X}} \in \Re ^{3 \times 3 \times 3} \)
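One concrete way to enumerate such blocks is sketched below under the simplifying assumption of a cubic \(I \times I \times I\) tensor: shifting the mode-2 and mode-3 indices by fixed offsets yields blocks of \(p = I\) points that share no index in any mode, and there are \(d = I^3 / I = I^2\) blocks, matching the formula above. The construction is illustrative rather than the paper's exact partitioning scheme.

```python
from itertools import product

def independent_blocks(I):
    """Partition the index set of an I x I x I tensor into I^2 blocks of
    I pairwise-interchangeable points (no shared index in any mode)."""
    blocks = []
    for a, b in product(range(I), repeat=2):  # one block per offset pair
        blocks.append([(i, (i + a) % I, (i + b) % I) for i in range(I)])
    return blocks

blocks = independent_blocks(3)
print(len(blocks))  # 9 = (3*3*3)/min(3,3,3) independent blocks
print(blocks[0])    # [(0, 0, 0), (1, 1, 1), (2, 2, 2)]
```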

3.1 The FP-CPD algorithm

Given the set of independent blocks \({\mathcal {X}}_1, \dots , {\mathcal {X}}_d\), we can decompose \({\mathcal {X}} \in \Re ^{I \times J \times K} \) in parallel into three matrices \(A \in \Re ^{I \times R}\), \(B \in \Re ^{J \times R} \) and \( C \in \Re ^{K \times R}\), where R is the number of latent factors. In this context, we reformulate our loss function defined in Eq. 2 as the sum of losses per block: \( L (A, B, C) = \sum _{b=1}^{d} L_b ( A, B, C) \). This new loss function provides the rationale for our parallel CP decomposition, allowing the SGD algorithm to learn all the interchangeable data points within each block in parallel. Therefore, SGD computes the partial derivatives of the loss function \(L_b (A, B, C) = \sum _{(i,j,k) \in {\mathcal {X}}_{b} } L_{i,j,k}(A, B, C)\) with respect to the three modes A, B and C alternately as follows:

$$\begin{aligned} \frac{\partial L_b}{\partial A }(X^{(1)}; A) = \left( X^{(1)} - A \, (C \odot B)^{T}\right) (C \odot B) \nonumber \\ \frac{\partial L_b}{\partial B }(X^{(2)}; B) = \left( X^{(2)} - B \, (C \odot A)^{T}\right) (C \odot A)\nonumber \\ \frac{\partial L_b}{\partial C }(X^{(3)}; C) = \left( X^{(3)} - C \, (B \odot A)^{T}\right) (B \odot A) \end{aligned}$$
(5)

where \(X^{(i)}\) is the mode-i unfolding matrix of tensor \({\mathcal {X}}\) and “\(\odot \)” denotes the Khatri–Rao product. The gradient update steps for A, B and C are as follows:

$$\begin{aligned} A^{(t+1)}:= A^{(t)} + \eta ^{(t)} \frac{\partial L_b}{\partial A } (X^{(1, t)};A^{(t)} ) \nonumber \\ B^{(t+1)}:= B^{(t)} + \eta ^{(t)} \frac{\partial L_b}{\partial B } (X^{(2, t)};B^{(t)} ) \nonumber \\ C^{(t+1)}:= C^{(t)} + \eta ^{(t)} \frac{\partial L_b}{\partial C } (X^{(3, t)};C^{(t)} ) \end{aligned}$$
(6)
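The sketch below spells out Eqs. 5 and 6 entrywise for a single block. It is a hedged reconstruction: it accumulates per-entry descent directions rather than forming the unfoldings explicitly (equivalent when restricted to the block's entries), and the “+” step follows the sign convention of Eq. 5. Because a block's points share no row of A, B or C, the row updates are conflict-free and can be dispatched to parallel workers.

```python
import numpy as np

def block_gradients(X, A, B, C, block):
    """Accumulate the Eq. 5 descent directions over one block's (i,j,k) points."""
    gA, gB, gC = np.zeros_like(A), np.zeros_like(B), np.zeros_like(C)
    for (i, j, k) in block:
        e = X[i, j, k] - np.sum(A[i] * B[j] * C[k])  # residual of one entry
        gA[i] += e * (B[j] * C[k])
        gB[j] += e * (A[i] * C[k])
        gC[k] += e * (A[i] * B[j])
    return gA, gB, gC

def sgd_block_update(X, A, B, C, block, eta):
    """Eq. 6: one gradient step per factor for a single block."""
    gA, gB, gC = block_gradients(X, A, B, C, block)
    return A + eta * gA, B + eta * gB, C + eta * gC
```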

3.1.1 Convergence

Regardless of whether we apply parallel SGD or plain SGD, the partial derivative in a non-convex setting may encounter points with \(\frac{\partial L}{\partial w } = 0\) that are not global minima. Such points are known as saddle points, and they may stall the optimization process before it reaches the desired local minimum if they are not escaped [13]. Saddle points can be identified by studying the second-order derivative (i.e., the Hessian) \(\frac{\partial ^2 L}{\partial w^2 }\). Theoretically, when \(\frac{\partial ^2 L}{\partial w^2 }(x;w)\succ 0\), w is at a local minimum; if \(\frac{\partial ^2 L}{\partial w^2 }(x;w) \prec 0\), then w is at a local maximum; and if \(\frac{\partial ^2 L}{\partial w^2 }(x;w)\) has both positive and negative eigenvalues, w is at a saddle point. Second-order methods guarantee convergence, but the cost of computing the Hessian matrix \(H^{(t)}\) is high, which makes them infeasible for high-dimensional data and online learning. Ge et al. [13] show that saddle points are very unstable and can be escaped if we slightly perturb them with some noise. Based on this, we use a perturbation approach which adds Gaussian noise to the gradient. This pushes the next update step away from the saddle point and toward the correct direction. After a random perturbation, it is highly unlikely that the point remains in the same band; hence, the saddle point can be efficiently escaped. We further incorporate Nesterov's method into the perturbed SGD algorithm to accelerate the convergence rate. Recently, Nesterov's accelerated gradient (NAG) [27] has received much attention for solving convex optimization problems [15, 16, 28]. It introduces a smart variation of momentum that works slightly better than standard momentum. This technique modifies the traditional SGD by introducing a velocity \(\nu \) and friction \(\gamma \), which control the velocity and prevent overshooting the valley while allowing a faster descent. The idea behind Nesterov's method is to calculate the gradient at the position the momentum is about to take us, instead of at the current position. In practice, it performs a simple step of gradient descent from \(w^{(t)} \) to \(w^{(t+1)}\), and then shifts slightly further than \(w^{(t+1)}\) in the direction given by \(\nu ^{(t-1)}\). In this setting, we model the gradient update step with NAG as follows:

$$\begin{aligned} A^{(t+1)}:= A^{(t)} + \eta ^{(t)} \nu ^{(A, t)} + \epsilon - \beta ||A||_{L_{1,b}} \end{aligned}$$
(7)

where

$$\begin{aligned} \nu ^{(A, t)}:= \gamma \nu ^{(A, t-1)} + (1-\gamma ) \frac{\partial L_b}{\partial A } (X^{(1, t)},A^{(t)} ) \end{aligned}$$
(8)

where \(\epsilon \) is Gaussian noise, \(\eta ^{(t)}\) is the step size, and \(||A||_{L_{1,b}}\) is an \(L_1\)-norm regularization and penalization term that yields smooth representations of the outcome and thus bypasses the perturbation surrounding the local minimum. The updates for \((B^{(t+1)}, \nu ^{(B, t)})\) and \((C^{(t+1)},\nu ^{(C, t)} )\) are analogous. With NAG, our method achieves a global convergence rate of \(O(\frac{1}{T^2})\), compared to \(O(\frac{1}{T})\) for traditional gradient descent. Based on the above models, we present our FP-CPD method in Algorithm 2; a sketch of one epoch follows.

Algorithm 2 The FP-CPD algorithm
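The sketch below puts Eqs. 7 and 8 together for one pass over the blocks, reusing block_gradients from the earlier sketch. It is a simplified, sequential rendering: the hyperparameter values and noise scale are our assumptions, the \(L_1\) penalty is applied as a sign subgradient, and in a full implementation each block's p interchangeable points would be dispatched to parallel workers.

```python
import numpy as np

def fp_cpd_epoch(X, factors, velocities, blocks, eta=0.01, gamma=0.9,
                 noise_std=1e-4, beta=1e-4, rng=None):
    """One epoch of the Eq. 7-8 updates over all independent blocks."""
    rng = rng or np.random.default_rng(0)
    A, B, C = factors
    vA, vB, vC = velocities
    for block in blocks:
        gA, gB, gC = block_gradients(X, A, B, C, block)
        # Eq. 8: NAG-style velocity update per factor
        vA = gamma * vA + (1 - gamma) * gA
        vB = gamma * vB + (1 - gamma) * gB
        vC = gamma * vC + (1 - gamma) * gC
        # Eq. 7: velocity step + Gaussian perturbation + L1 shrinkage
        A = A + eta * vA + rng.normal(0, noise_std, A.shape) - beta * np.sign(A)
        B = B + eta * vB + rng.normal(0, noise_std, B.shape) - beta * np.sign(B)
        C = C + eta * vC + rng.normal(0, noise_std, C.shape) - beta * np.sign(C)
    return (A, B, C), (vA, vB, vC)
```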

4 Motivation

Numerous types of data are naturally structured as multi-way data. For instance, structural health monitoring (SHM) data can be represented in a three-way form as \(location \times feature \times time\). Arranging and analyzing the SHM data in a multidimensional form allows us to capture correlations between sensors at different locations at the same time, which is not possible with the standard two-way matrix \(time\times feature\). Furthermore, in SHM only positive data instances, i.e., the healthy state, are available. Thus, the problem becomes an anomaly detection problem in higher-order datasets. Rytter [33] affirms that damage identification also requires damage localization and severity assessment, which are considered much more complex than damage detection since they require a supervised learning approach [39].

Given positive three-way SHM data \({\mathcal {X}} \in {\mathbb {R}}^{feature \times location \times time}\), FP-CPD decomposes \({\mathcal {X}}\) into three matrices A, B and C. The C matrix represents the temporal mode, where each row contains information about the vibration responses related to an event at time t. The analysis of this component matrix can help to detect damage in the monitored structure. Therefore, we use the C matrix to build a one-class anomaly detection model using only the positive training events. For each new incoming \({\mathcal {X}}_{new}\), we update the three matrices A, B and C incrementally as described in Algorithm 2. The constructed model then estimates the agreement between the new event \(C_{new}\) and the trained data.
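A minimal sketch of this detection step is shown below, with scikit-learn's OneClassSVM standing in for the e1071 one-class model used in our implementation; the kernel parameter here is illustrative (Sect. 5.3.4 describes how \(\sigma \) and \(\nu \) were actually set).

```python
import numpy as np
from sklearn.svm import OneClassSVM

def fit_health_model(C_train, gamma=0.1, nu=0.05):
    """Train a one-class model on temporal-mode rows of healthy events only."""
    return OneClassSVM(kernel="rbf", gamma=gamma, nu=nu).fit(C_train)

def health_score(model, c_new):
    """Decision value for a new event's temporal row; lower values indicate
    stronger disagreement with the healthy training data."""
    return model.decision_function(np.atleast_2d(c_new))[0]
```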

For damage localization, we analyze the data in the location matrix B, where each row captures meaningful information about one sensor location. When the matrix B is updated due to the arrival of a new event \({\mathcal {X}}_{new}\), we study the variation of the values in each row of B by computing the average distance from that row to its k-nearest neighboring rows as an anomaly score for damage localization (a sketch is given below). For severity assessment in damage identification, we study the decision values returned by the one-class model, since a structure with more severe damage will behave much more differently from a normal one.
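The localization score can be computed as follows (a sketch; Euclidean distances and the choice of k are our assumptions):

```python
import numpy as np

def knn_location_scores(B_new, k=3):
    """Average distance from each row of B to its k nearest rows; a high
    score flags a sensor location behaving unlike the others."""
    D = np.linalg.norm(B_new[:, None, :] - B_new[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)  # exclude self-distances
    return np.sort(D, axis=1)[:, :k].mean(axis=1)

# The location with the highest score is reported as the damage location:
# damaged_sensor = np.argmax(knn_location_scores(B_new))
```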

5 Evaluation

In this section, we present the details of the experimental settings and a comparative analysis between our proposed FP-CPD algorithm and similar parallel tensor decomposition algorithms: PSGD and SALS. We first analyze the effectiveness and speed of the training process of the three algorithms on four real-world SHM datasets. We then evaluate the performance of our approach, along with the other baselines, on the SHM datasets in terms of damage detection, assessment and localization.

5.1 Experiment setup and datasets

We conducted all our experiments on a machine with dual Intel Xeon processors, 32 GB of memory and 12 physical cores. We used the R development environment to implement our FP-CPD algorithm as well as the PSGD and SALS algorithms, with the help of the rTensor and e1071 packages for tensor tools and the one-class model, respectively.

We ran our experiments on four real-world datasets, all of which inherently entail a multi-way data structure. The datasets were collected from sensors that measure the health of building, bridge or road structures. Specifically, these datasets comprise:

  1.

    bridge structure measurement data collected from sensors attached to a cable-stayed bridge in Western Sydney, Australia (BRIDGE) [5].

  2.

    building structure measurement data collected from sensors attached to a specimen building structure obtained from Los Alamos National Laboratory (LANL) [24] (BUILDING).

  3.

    measurements data collected from loop detectors in Victoria, Australia (ROAD) [34].

  4.

    road measurements collected from sensors attached to two buses travelling through routes in the southern region of New South Wales, Australia (BUS) [3].

All the datasets are stored in a three-way tensor represented by \(sensor \times frequency \times time\). Further details about these datasets are summarized in Table 1. Using these datasets, we run a number of experiment sets to evaluate our proposed FP-CPD method as detailed in the following sections.

Table 1 Details of datasets

5.2 Evaluating performance of FP-CPD

The goal of the first experiment set is to evaluate the performance of our FP-CPD method in terms of training time and error rate. To achieve this, we compare the performance of our proposed FP-CPD with the PSGD and SALS algorithms. To make a fair and objective comparison, we implemented the three algorithms under the same experimental settings as described in Sect. 5.1. We evaluated the performance of each method by plotting the time needed to complete the training process against the root-mean-square error (RMSE). We ran the same experiment on the four datasets (BRIDGE, BUILDING, ROAD and BUS). Figure 2 shows the RMSE and the training time of the three algorithms. As illustrated in the figure, our FP-CPD algorithm significantly outperformed the PSGD and SALS algorithms in terms of convergence and training speed. The SALS algorithm was the slowest among the three, owing to the fact that CP decomposition is a non-convex problem which can be better handled using stochastic methods. Another important factor that contributed to the significant performance improvements of FP-CPD is the utilization of the Nesterov method along with the perturbation approach. From the first experiment set, it can be concluded that our FP-CPD method is more effective in terms of RMSE and trains faster compared to similar parallel tensor decomposition methods.

Fig. 2 Comparison of training time and RMSE of FP-CPD, SALS and PSGD on the four datasets

5.3 Evaluating effectiveness of FP-CPD

Fig. 3 Damage estimation applied on Bridge data using decision values obtained by one-class SVM

Fig. 4 Damage localization for the Bridge data: FP-CPD successfully localized damage locations

Fig. 5 Damage estimation applied on Building data using decision values obtained by one-class SVM

Fig. 6 Damage localization for the Building data: FP-CPD successfully localized damage locations

Our FP-CPD method demonstrated better speed and RMSE in comparison to the PSGD and SALS methods. However, it is still crucial to ensure that the proposed method is also capable of achieving accurate results in practical tensor decomposition problems, specifically for building and bridge structures in smart cities. Therefore, the second experiment set aims to demonstrate the accuracy of our model in practice. To achieve this, we evaluate the performance of our FP-CPD in terms of its accuracy in detecting damage in building and bridge structures, assessing the severity of the detected damage, and localizing it. We carry out the evaluation on the BRIDGE and BUILDING datasets, which are explained in the following sections. For comparative analysis, we choose the SALS method as the baseline competitor to our FP-CPD, because PSGD has similar convergence to FP-CPD but the latter takes less time to train, as illustrated in Sect. 5.2.

5.3.1 The cable-stayed bridge dataset

In this dataset, 24 uni-axial accelerometers and 28 strain gauges were attached at different locations of the cable-stayed bridge to measure its vibration and strain responses. Figure 7 illustrates the positioning of the 24 sensors on the bridge deck. The data of interest in our study are the acceleration data collected from sensors Ai with \(i\in [1;24]\). The bridge is in healthy condition. In order to evaluate the performance of damage detection methods, two different stationary vehicles (a car and a bus) with different masses were placed on the bridge to emulate two different levels of damage severity [7, 21]. Three different categories of data were collected in that study: “Healthy-Data” when the bridge is free of vehicles; “Car-Damage” when a light car vehicle is placed on the bridge close to location A10; and “Bus-Damage” when a heavy bus vehicle is located on the bridge at location A14. This experiment generated 262 samples (i.e., events) separated into three categories: “Healthy-Data” (125 samples), “Car-Damage” data (107 samples) and “Bus-Damage” data (30 samples). Each event consists of acceleration data for a period of 2 s sampled at a rate of 600 Hz, so the resultant event feature vector is composed of 1200 frequency values.

Fig. 7 The locations on the bridge’s deck of the 24 Ai accelerometers used in the BRIDGE dataset. The cross-girder j of the bridge is displayed as CGj [5]

5.3.2 The LANL building dataset

These data are based on experiments conducted by LANL [24] using a specimen of a three-story building structure, as shown in Fig. 8. Each joint in the building was instrumented with two accelerometers. The excitation data were generated using a shaker placed at corner D. Similarly, for the sake of damage detection evaluation, damage was simulated by detaching or loosening the bolts at the joints so that the aluminum floor plate could move freely relative to the Unistrut column. Three different categories of data were collected in this experiment: “Healthy-Data” when all the bolts were firmly tightened; “Damage-3C” data when the bolt at location 3C was loosened; and “Damage-1A3C” data when the bolts at locations 1A and 3C were loosened simultaneously. This experiment generated 240 samples (i.e., events), also separated into three categories: “Healthy-Data” (150 samples), “Damage-3C” data (60 samples) and “Damage-1A3C” data (30 samples). The acceleration data were sampled at 1600 Hz, and each event was measured for a period of 5.12 s, resulting in a vector of 8192 frequency values.

Fig. 8 Three-story building and floor layout [24]

5.3.3 Feature extraction

The raw signals of the sensing data collected in the aforementioned experiments exist in the time domain. In practice, time-domain features may not capture the physical characteristics of the structure. Thus, it is important to convert the generated data to the frequency domain. For all the datasets, we initially normalized the time-domain features to have zero mean and unit standard deviation. Then we used the fast Fourier transform method to convert them into the frequency domain (a sketch of this step is given below). The resultant three-way data collected from the cable-stayed bridge have a structure of 600 features \(\times \) 24 sensors \(\times \) 262 events. For the LANL BUILDING dataset, we computed the difference between the signals of two adjacent sensors, which resulted in 12 different joints in the three stories as in [24]. Then we selected the first 150 frequencies as a feature vector, which resulted in three-way data with a structure of 768 features \(\times \) 12 locations \(\times \) 240 events.
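The per-signal transformation can be sketched as follows (an illustrative reconstruction; the number of retained bins follows the per-dataset counts stated above):

```python
import numpy as np

def frequency_features(signal, n_freq):
    """z-normalize a time-domain signal and keep the magnitudes of the
    first n_freq FFT bins as the event's feature vector."""
    signal = (signal - signal.mean()) / signal.std()
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum[:n_freq]

# e.g., a 2 s BRIDGE event sampled at 600 Hz (1200 samples) keeps 600 bins:
# features = frequency_features(event_signal, n_freq=600)
```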

5.3.4 Experiments

For both BUILDING and BRIDGE datasets, we applied the following procedures:

  • Using the bootstrap technique, we randomly selected 80% of the healthy samples for training and used the remaining 20%, in addition to the damage samples, for testing. We computed the accuracy of our FP-CPD model as the average over ten trials of the bootstrap experiment.

  • We used the core consistency diagnostic (CORCONDIA) technique described in [6] to determine the number of rank-one tensors (i.e., the rank R) in the FP-CPD of \({\mathcal {X}}\).

  • We used the one-class support vector machine (OCSVM) [35] as the model for anomaly detection. The Gaussian kernel parameter \(\sigma \) in OCSVM was tuned using the Edged Support Vector (ESV) algorithm [2], and the rate of anomalies \(\nu \) was set to 0.05.

  • We used the \(\textit{F-score}\) measure to compute the accuracy of the damage detection results produced by our model. It is defined as \(\text {\textit{F-score}} = 2 \cdot \dfrac{\text {Precision} \times \text {Recall} }{\text {Precision} + \text {Recall}}\) where \(\text {Precision} = \dfrac{\text {TP} }{\text {TP} + \text {FP}}\) and \(\text {Recall} = \dfrac{\text {TP} }{\text {TP} + \text {FN}}\) (the numbers of true positives, false positives and false negatives are abbreviated as TP, FP and FN, respectively).

  • We compared the results of the competing SALS method proposed in [26] against those of our FP-CPD method.

5.3.5 Results and discussion

5.3.6 The cable-stayed bridge dataset

Our FP-CPD method with one-class SVM was initially validated using the vibration data collected from the cable-stayed bridge (described in Sect. 5.3.1). The healthy training three-way tensor data (i.e., the training set) was in the form \( {\mathcal {X}} \in \Re ^{24 \times 600 \times 100}\). The 137 samples related to the two damage cases were added to the remaining 20% of the healthy data to form the testing set used for model evaluation. We conducted the experiments following the steps described in Sect. 5.3.4. This experiment yielded a damage detection \(\textit{F-score}\) of \(1.00 \pm 0.00\) on the testing data. In comparison, the \(\textit{F-score}\) of one-class SVM using SALS was \(0.98 \pm 0.02\).

As demonstrated by the results of this experiment, tensor analysis with our proposed FP-CPD is capable of capturing the underlying structure in multi-way data with better convergence. This is further illustrated by plotting the decision values returned by the one-class SVM based on FP-CPD (as shown in Fig. 3). We can clearly separate the two damage cases (“Car-Damage” and “Bus-Damage”) in this dataset, where the decision values further decrease for the samples related to the more severe damage case (i.e., “Bus-Damage”). These results suggest using the decision values obtained by our FP-CPD and one-class SVM as structural health scores to identify damage severity in a one-class setting. The resultant decision values of the one-class SVM based on SALS are also able to track the progress of damage severity in the structure, but with only a slight decreasing trend for “Bus-Damage,” as shown in Fig. 3.

The last step in this experiment is to analyze the location matrix B obtained from FP-CPD to locate the detected damage. Each row in this matrix captures meaningful information about one sensor location. Therefore, we calculate the average distance from each row in the matrix \(B_{new}\) to its k-nearest neighboring rows. Figure 4 shows the obtained k-nn score for each sensor. The first 25 events (depicted on the x-axis) represent healthy data, followed by 107 events related to “Car-Damage” and 30 events related to “Bus-Damage.” It can be clearly observed that the FP-CPD method can localize the damage in the structure accurately: sensors A10 and A14, related to “Car-Damage” and “Bus-Damage,” respectively, behave significantly differently from all the other sensors, revealing the positions of the introduced damage. In addition, we observed that the sensors adjacent to the damage locations (e.g., A9, A11, A13 and A15) also react to the arrival of the damage events. The SALS method, however, is not able to accurately locate the damage since it fails to update the location matrix B incrementally.

5.3.7 The building dataset

Following the experimental procedure described in Sect. 5.3.4, our second experiment was conducted using the acceleration data acquired from the 24 sensors instrumented on the three-story building, as described in Sect. 5.3.2. The healthy three-way data (i.e., the training set) are in the form \( {\mathcal {X}} \in \Re ^{12 \times 768 \times 120}\). The remaining 20% of the healthy data and the data obtained from the two damage cases were used for testing (i.e., the testing set). The experiments we conducted using FP-CPD with one-class SVM achieved an \(\textit{F-score}\) of \(0.95 \pm 0.01\) on the testing data, compared to \(0.91 \pm 0.00\) obtained from the one-class SVM and SALS experiments.

Similar to the BRIDGE dataset, we further analyzed the resultant decision values, which were also able to characterize damage severity. Figure 5 demonstrates that the more severe the damage (the 1A and 3C location test data), the more the decision values deviate from those of the training data.

Similar to the BRIDGE dataset, the last experiment computes the k-nn score for each sensor based on the average distance from each row of the matrix \(B_{new}\) to its k-nearest neighboring rows. Figure 6 shows the resultant k-nn score for each sensor. The first 30 events (depicted on the x-axis) represent the healthy data, followed by 60 events in which damage was introduced at location 3C. The last 30 events represent damage introduced at both locations 1A and 3C. It can be clearly observed that the FP-CPD method is capable of accurately localizing the structure's damage: the sensors at locations 1A and 3C behave significantly differently from all the other sensors, revealing the positions of the introduced damage. However, the SALS method is not able to locate that damage since it fails to update the location matrix B incrementally.

In summary, the above experiments on the four real datasets demonstrate the effectiveness of our proposed FP-CPD method in terms of the time needed to carry out training during tensor decomposition. Specifically, our FP-CPD significantly improves the speed of model training and reduces the error rate compared to the similar parallel tensor decomposition methods PSGD and SALS. Furthermore, the experiment sets on the BRIDGE and BUILDING datasets provided empirical evidence of the ability of our model to accurately carry out tensor decomposition in practical case studies. In particular, the experimental results demonstrated that our FP-CPD is able to detect damage in building and bridge structures, assess the severity of the detected damage, and localize it more accurately than the SALS method. Therefore, it can be concluded that our FP-CPD tensor decomposition method achieves faster tensor model training with a minimal error rate while carrying out accurate tensor decomposition in practical cases. Such performance and accuracy gains can be beneficial for many parallel tensor decomposition applications in practice, especially real-time detection and identification problems. We demonstrated such benefits with real use cases in structural health monitoring, namely building and bridge structures.

6 Conclusion

This paper investigated CP decomposition with a stochastic gradient descent algorithm for multi-way data analysis, leading to a new method named Fast Parallel-CP Decomposition (FP-CPD) for tensor decomposition. The proposed method guarantees convergence for a given non-convex problem by modeling the second-order derivative of the loss function and incorporating a small amount of noise into the gradient update. Furthermore, FP-CPD employs Nesterov's method to compensate for delays in the optimization process and accelerate the convergence rate. Based on laboratory and real datasets from the area of SHM, our FP-CPD, with a one-class SVM model for anomaly detection, achieves accurate results in damage detection, localization and assessment in online and one-class settings. Key directions for future work include further scaling the tensor decomposition with FP-CPD in distributed settings and applying FP-CPD to datasets from different domains.

Our future work mainly includes the following aspects. First, the proposed model in this research was designed to detect, localize and assess the severity of damage in building and bridge structures. Does the model have the same prediction performance when applied to other domains, such as recommender systems? Future work should include building personalized recommender systems based not only on 2D latent factor models of users and items, since such personalization requires considering other important information such as user age or gender and item details. For example, some books may be preferred by users of certain age groups; similarly, movies of a specific genre may be preferred by certain age groups over others. Future work should also consider implementing this system in a federated learning setting, which can be useful when data are distributed among different clients/sources and it is not feasible to centralize them in a single location/server.