1 Introduction

One of the fundamental computer vision objectives is to detect moving objects in a given video stream. At the most basic level, moving objects can be found in a video by removing the background. However, this is a challenging task in practice, since the true background is often unknown. Algorithms for background modeling are required to be both robust and adaptive. Indeed, the list of challenges is significant and includes camera jitter, illumination changes, shadows and dynamic backgrounds. There is no single method currently available that is capable of handling all the challenges in real time without suffering performance failures. Moreover, one of the great challenges in this field is to efficiently process high-resolution video streams, a task that is at the edge of performance limits for state-of-the-art algorithms. Given the importance of background modeling, a variety of mathematical methods and algorithms have been developed over the past decade. Comprehensive overviews of traditional and state-of-the-art methods are provided by Bouwmans [1], and Sobral and Vacavant [2].

Motivation This work advocates the method of dynamic mode decomposition (DMD), which enables the decomposition of spatiotemporal grid data in both space and time. The DMD has been successfully applied to videos [3–5]; however, the computational costs are dominated by the singular value decomposition (SVD). Even with the aid of recent innovations around randomized algorithms for computing the SVD [6], the computational costs remain high for high-resolution videos. We therefore build on the recently introduced compressed dynamic mode decomposition (cDMD) algorithm, which integrates DMD with ideas from compressed sensing and matrix sketching [7]. Instead of computing the DMD on the full-resolution video data, we show that an accurate decomposition can be obtained from a compressed representation of the video in a fraction of the time. The optimal mode selection for background modeling is formulated as a sparsity-constrained sparse coding problem, which can be efficiently approximated using the greedy orthogonal matching pursuit method. The gains in computation time are significant, even competitive with Gaussian mixture models [8–11]. Moreover, the performance evaluation on real videos shows that the detection accuracy is competitive compared to leading robust principal component analysis (RPCA) algorithms.

Organization The rest of this paper is organized as follows. Section 2 presents a brief introduction to the dynamic mode decomposition and its application to video and background modeling. Section 3 presents the compressed DMD algorithm and different measurement matrices to construct the compressed video matrix. A GPU-accelerated implementation is also outlined. Finally, a detailed evaluation of the algorithm is presented in Sect. 4. Concluding remarks and further research directions are given in Sect. 5. “Appendix” gives an overview of notation.

2 DMD for video processing

2.1 The dynamic mode decomposition

The dynamic mode decomposition is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatiotemporal coherent structures arising in nonlinear dynamical systems, or short-time future estimates of such systems. DMD was originally introduced in the fluid mechanics community by Schmid [12] and Rowley et al. [13]. A surveillance video sequence is a natural application for DMD because the frames of the video are, by nature, equally spaced in time, and the pixel data collected in every snapshot can readily be vectorized. The dynamic mode decomposition is illustrated for videos in Fig. 1. For computational convenience, the flattened grayscale video frames (snapshots) of a given video stream are stored, ordered in time, as the column vectors \(\mathbf{x}_{1}, \mathbf{x}_{2}, \ldots , \mathbf{x}_{m}\) of a matrix. Hence, we obtain a 2-dimensional spatiotemporal grid in \({\mathbb {R}}^{n \times m}\), where n denotes the number of pixels per frame, m the number of video frames, and the matrix elements \(x_{it}\) correspond to pixel intensities in space and time. The video frames can be thought of as snapshots of some underlying dynamics. Each video frame (snapshot) \({\mathbf {x}}_{t+1}\) at time \(t+1\) is assumed to be connected to the previous frame \({\mathbf {x}}_{t}\) by a linear map \({\mathbf {A}}: {\mathbb {R}}^{n}\rightarrow {\mathbb {R}}^{n}\). Mathematically, the linear map \({\mathbf {A}}\) is a time-independent operator which constructs the approximate linear evolution

$$\begin{aligned} {\mathbf {x}}_{t+1} = {\mathbf {A}}{\mathbf {x}}_{t}. \end{aligned}$$
(1)
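For concreteness, a minimal sketch of this data matrix construction (assuming NumPy and grayscale frames already decoded from the video stream; the function name is illustrative, not part of the original implementation) is:

```python
import numpy as np

def frames_to_snapshot_matrix(frames):
    """Stack flattened grayscale frames, ordered in time, as columns.

    frames : iterable of (h, w) grayscale arrays.
    Returns an (n, m) array with n = h*w pixels per frame and m frames.
    """
    return np.column_stack([np.asarray(f, dtype=float).ravel() for f in frames])

# Left and right snapshot sequences of Eq. (2):
# D = frames_to_snapshot_matrix(frames); X, Xp = D[:, :-1], D[:, 1:]
```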

The objective of dynamic mode decomposition is to find an estimate for the matrix \({{\mathbf {A}}}\) and its eigenvalue decomposition that characterizes the system dynamics. At its core, dynamic mode decomposition is a regression algorithm. First, the spatiotemporal grid is separated into two overlapping sets of data, called the left and right snapshot sequences

$$\begin{aligned} \mathbf{X} = \begin{bmatrix} \mathbf{x}_{1}&\mathbf{x}_{2}&\cdots&\mathbf{x}_{m-1} \end{bmatrix}, \qquad \mathbf{X}' = \begin{bmatrix} \mathbf{x}_{2}&\mathbf{x}_{3}&\cdots&\mathbf{x}_{m} \end{bmatrix}. \end{aligned}$$
(2)

Equation (1) is reformulated in matrix notation

$$\begin{aligned} \mathbf{X}' = \mathbf{A}{} \mathbf{X}. \end{aligned}$$
(3)

In order to find an estimate for the matrix \(\mathbf{A}\) we face the following least-squares problem

$$\begin{aligned} \hat{\mathbf{A}} = \mathop {\hbox {argmin}}\limits _\mathbf{A} \Vert \mathbf{X}' - \mathbf{A}\mathbf{X}\Vert _{F}^{2}, \end{aligned}$$
(4)

where \(\Vert \cdot \Vert _{F}\) denotes the Frobenius norm. This is a well-studied problem, and an estimate of the linear operator \(\mathbf{A}\) is given by

$$\begin{aligned} {\hat{\mathbf{A}}} = \mathbf{X}'{} \mathbf{X}^\dag , \end{aligned}$$
(5)

where \(\dag\) denotes the Moore-Penrose pseudoinverse, which produces a regression that is optimal in a least-squares sense. The DMD modes \({\varvec{\Phi }=\mathbf {W}}\), containing the spatial information, are then obtained as eigenvectors of the matrix \({\hat{\mathbf{A}}}\)

$$\begin{aligned} {\hat{\mathbf{A}}}{} \mathbf{W} = \mathbf{W}\varvec{\varLambda }, \end{aligned}$$
(6)

where the columns of \(\mathbf{W}\) are the eigenvectors \(\mathbf{\phi }_j\) and \(\varvec{\varLambda }\) is a diagonal matrix containing the corresponding eigenvalues \(\lambda _j\). In practice, when the dimension n is large, the matrix \({\hat{\mathbf{A}}}\in {\mathbb {R}}^{n \times n}\) may be intractable to estimate and to analyze directly. DMD circumvents the computation of \({\hat{\mathbf{A}}}\) by considering a rank-reduced representation \(\tilde{\mathbf{A}} \in {\mathbb {R}}^{k \times k}\). This is achieved via a similarity transform, i.e., by projecting \(\hat{\mathbf{A}}\) onto the left singular vectors. Moreover, DMD typically makes use of low-rank structure so that the total number of modes, \(k \le \min (n,m)\), allows for dimensionality reduction of the video stream. Hence, only the relatively small matrix \(\tilde{\mathbf{A}} \in {\mathbb {R}}^{k \times k}\) needs to be estimated and analyzed (see Sect. 3 for more details). The dynamic mode decomposition yields the following low-rank factorization of a given spatiotemporal grid (video stream)

$$\begin{aligned} {\varvec{\Phi }}{} \mathbf{B}{{\mathcal {V}}} = \begin{pmatrix} \phi _{11} &{} \phi _{1p} &{} \cdots &{} \phi _{1k} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \phi _{i1} &{} \phi _{ip} &{} \cdots &{} \phi _{ik} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \phi _{n1} &{} \phi _{np} &{} \cdots &{} \phi _{nk} \end{pmatrix} \begin{pmatrix} b_{1} &{} &{} &{} &{}\\ &{} \ddots &{} &{} &{}\\ &{} &{} b_{p} &{} &{} \\ &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} b_{k} \end{pmatrix} \begin{pmatrix} 1 &{} \lambda _{1} &{} \cdots &{} \lambda _{1}^{m-1} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 1 &{} \lambda _{p} &{} \cdots &{} \lambda _{p}^{m-1} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 1 &{} \lambda _{k} &{} \cdots &{} \lambda _{k}^{m-1} \end{pmatrix}, \end{aligned}$$
(7)

where the diagonal matrix \(\mathbf{B} \in {\mathbb {C}}^{k \times k}\) has the amplitudes as entries and \({ {\mathcal {V}}} \in {\mathbb {C}}^{k \times m}\) is the Vandermonde matrix describing the temporal evolution of the DMD modes \({\varvec{\Phi }} \in {\mathbb {C}}^{n \times k}\).
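As a sanity check of Eq. (7), the reconstruction can be evaluated directly from the modes, amplitudes and eigenvalues. The following is a minimal NumPy sketch (names are illustrative, not part of the original implementation):

```python
import numpy as np

def dmd_reconstruction(Phi, b, lam, m):
    """Evaluate the low-rank factorization of Eq. (7): Phi @ diag(b) @ V.

    Phi : (n, k) DMD modes, b : (k,) amplitudes, lam : (k,) eigenvalues,
    m : number of frames.  Returns the (n, m) complex reconstruction.
    """
    V = np.vander(lam, N=m, increasing=True)  # Vandermonde matrix, shape (k, m)
    return Phi @ np.diag(b) @ V
```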

Fig. 1

Illustration of the dynamic mode decomposition for video applications. Given a video stream, the first step involves reshaping the grayscale video frames into a 2-dimensional spatiotemporal grid. The DMD then creates a decomposition in space and time in which DMD modes contain spatial structure

2.2 DMD for foreground/background separation

The DMD method can attempt to reconstruct any given frame, or even estimate future frames. The validity of the reconstruction thereby depends on how well the specific video sequence meets the assumptions and criteria of the DMD method. Specifically, a video frame \(\mathbf{x}_t\) at time point \(t \in \{1,\ldots ,m\}\) is approximately reconstructed as follows

$$\begin{aligned} {\tilde{\mathbf{x}}}_t = \sum _{j=1}^k b_j \phi _j \lambda _j^{t-1}. \end{aligned}$$
(8)

Notice that the DMD mode \(\phi _{j}\) is an \(n \times 1\) vector containing the spatial structure of the decomposition, while the eigenvalue \(\lambda _j^{t-1}\) describes the temporal evolution. The scalar \(b_j\) is the amplitude of the corresponding DMD mode. At time \(t=1\), Eq. (8) reduces to \({\tilde{\mathbf{x}}}_1 = \sum _{j=1}^k b_j \phi _j\). Since the amplitude is time-independent, \(b_j\) can be obtained by solving the following least-squares problem using the video frame \(\mathbf{x}_1\) as the initial condition

$$\begin{aligned} {\hat{\mathbf{b}}} = \mathop {\hbox {argmin}}\limits _\mathbf{b} \Vert \mathbf{x}_1 - {\varvec{\Phi }}{} \mathbf{b}\Vert _{F}^{2}. \end{aligned}$$
(9)

It becomes apparent that any portion of the first video frame that does not change in time, or changes very slowly in time, must have an associated continuous-time eigenvalue

$$\begin{aligned} \omega _{j}= \dfrac{\log (\lambda _j)}{\Delta t} \end{aligned}$$
(10)

that is located near the origin in complex space: \(|\omega _{j}| \approx 0\) or, equivalently, \(|\lambda _{j}| \approx 1\). This fact becomes the key principle for separating foreground elements (approximately sparse) from background (approximately low-rank) information. Figure 2 shows the dominant continuous-time eigenvalues for a video sequence. Subplot (a) shows three sample frames from this video sequence, which includes a canoe. Here the foreground object (canoe) is not present at the beginning and the end of the video sequence. The dynamic mode decomposition factorizes this sequence into modes describing the different dynamics present. The analysis of the continuous-time eigenvalues \(\omega _{j}\) and the amplitudes over time \(\mathbf{B{\mathcal {V}}}\) (the amplitudes multiplied by the Vandermonde matrix) provides interesting insights, shown in subplots (b) and (c). First, the amplitude of the prominent zero mode (background) is constant over time, indicating that this mode captures the dominant (static) content of the video sequence, i.e., the background. The next pair of modes corresponds to the canoe, a foreground object moving slowly over time. The amplitude reveals the presence of this object. Specifically, the amplitude reaches its maximum at about frame index 150, when the canoe is in the center of the video frame. At the beginning and end of the video, the canoe is not present, indicated by the negative values of the amplitude. The subsequent modes describe other dynamics in the video sequence, e.g., the movements of the canoeist and the waves. For instance, the modes describing the waves have high frequency and small amplitudes (not shown here).

The theoretical viewpoint we build upon with the DMD methodology centers around the recent idea of low-rank and sparse matrix decompositions. Following this approach, background modeling can be formulated as the problem of separating a matrix into low-rank (background) and sparse (foreground) components. This viewpoint has been advocated, for instance, by Candès et al. [14] in the framework of robust principal component analysis (RPCA). For a thorough discussion of such methods used for background modeling, we refer to Bouwmans et al. [15, 16]. The connection between DMD and RPCA was first established by Grosek and Kutz [3]. Assume the set of background modes \(\{\omega _{p}\}\) satisfies \(|\omega _{p}| \approx 0\). The DMD expansion of Eq. (8) then yields

$$\begin{aligned} \mathbf{X}_{\text {DMD}} &= \mathbf{L} + \mathbf{S} \nonumber \\ &= \underbrace{\sum _{p} b_{p} \mathbf{\phi }_{p} \lambda _p^{{\mathbf {t}}-1} }_{\text {Background Video}} + \underbrace{\sum _{j \ne p} b_{j} \mathbf{\phi }_{j} \lambda _j^{{\mathbf {t}}-1} }_{\text {Foreground Video}}, \end{aligned}$$
(11)

where \({\mathbf {t}}=[1,\ldots ,m]\) is a \(1 \times m\) time vector and \(\mathbf{X}_{\text {DMD}} \in {\mathbb {C}}^{n \times m}\). Specifically, DMD provides a matrix decomposition of the form \(\mathbf{X}_{\text {DMD}} = \mathbf{L} + \mathbf{S}\), where the low-rank matrix \(\mathbf{L}\) renders the video of just the background, and the sparse matrix \(\mathbf{S}\) renders the complementary video of the moving foreground objects. We can interpret these DMD results as follows: stationary background objects translate into highly correlated pixel regions from one frame to the next, which suggests a low-rank structure within the video data. Thus, the DMD algorithm can be thought of as an RPCA method. The advantage of the DMD method and its sparse/low-rank separation is the computational efficiency of achieving Eq. (11), especially when compared to the optimization methods of RPCA. The analysis of the time-evolving amplitudes also provides interesting opportunities. Specifically, learning the amplitude profiles of different foreground objects allows automatic separation of video feeds into different components. For instance, it could be of interest to discriminate between cars and pedestrians in a given video sequence.
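A minimal sketch of this separation (assuming NumPy; the threshold tol on \(|\omega _j|\) is an illustrative choice rather than a value prescribed by the method) is:

```python
import numpy as np

def dmd_background_foreground(Phi, b, lam, m, dt=1.0, tol=1e-2):
    """Split the DMD reconstruction of Eq. (11) into background L and foreground S."""
    omega = np.log(lam) / dt                   # continuous-time eigenvalues, Eq. (10)
    bg = np.abs(omega) < tol                   # modes with |omega| ~ 0 model the background
    V = np.vander(lam, N=m, increasing=True)   # temporal evolution, shape (k, m)
    L = Phi[:, bg] @ np.diag(b[bg]) @ V[bg, :]
    S = Phi[:, ~bg] @ np.diag(b[~bg]) @ V[~bg, :]
    return L, S
```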

Fig. 2

Results of the dynamic mode decomposition for the ChangeDetection.net video sequence ‘canoe’. Subplot a shows three sample frames of the video sequence. Subplots b and c show the continuous-time eigenvalues and the temporal evolution of the amplitudes. The modes corresponding to the amplitudes with the highest variance capture the dominant foreground object (canoe), while the zero mode captures the dominant structure of the background. Modes corresponding to high-frequency amplitudes capture other dynamics in the video sequence, e.g., waves. a Sample frames (\(t={0,150,300}\)) of the video sequence. b Dominant continuous-time eigenvalues \(\omega _{j}\). c Amplitudes over time

2.3 DMD for real-time background modeling

When dealing with high-resolution videos, the standard DMD approach is expensive in terms of computational time and memory, because the whole video sequence is reconstructed. Instead, a ‘good’ static background model is often sufficient for background subtraction, because background dynamics can be filtered out or thresholded. The challenge remains to automatically select the modes that best describe the background. This is essentially a bias-variance trade-off: using just the zero mode (background) leads to an under-fitted background model, while a large set of modes tends to overfit. Motivated by the sparsity-promoting variant of the standard DMD algorithm introduced by Jovanović et al. [17], we formulate a sparsity-constrained sparse coding problem for mode selection. The idea is to augment Eq. (9) by an additional term that penalizes the number of nonzero elements in the vector \(\mathbf{b}\)

$$\begin{aligned} \hat{\varvec{\beta }}=\mathop {\hbox {argmin}}\limits _\mathbf{\varvec{\beta }} \Vert \mathbf{x}_1 - {\varvec{\Phi }}{} \mathbf{\varvec{\beta }}\Vert _{F}^{2} \quad {\text{such that}} \,\Vert \mathbf{\varvec{\beta }}\Vert _0 < K, \end{aligned}$$
(12)

where \(\varvec{\beta }\) is the sparse representation of \(\mathbf{b}\), and \(\Vert \cdot \Vert _0\) is the \(\ell _0\) pseudo-norm, which counts the nonzero elements in \(\varvec{\beta }\). Solving this sparsity problem exactly is NP-hard. However, the problem in Eq. (12) can be efficiently solved using greedy approximation methods. Specifically, we utilize orthogonal matching pursuit (OMP) [18, 19]. A highly computationally efficient algorithm is proposed by Rubinstein et al. [20] and is implemented in the scikit-learn software package [21]. The greedy OMP algorithm works iteratively, selecting at each step the mode with the highest correlation to the current residual. Once a mode is selected, the initial condition \(\mathbf{x}_1\) is orthogonally projected onto the span of the previously selected modes. Then the residual is recomputed, and the process is repeated until K nonzero entries are obtained. If no priors are available, the optimal number of modes K can be determined using cross-validation. Finally, the background model is computed as

$$\begin{aligned} \hat{\mathbf{x}}_{BG}={\varvec{\Phi }}\hat{\varvec{\beta }}. \end{aligned}$$
(13)
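In practice we rely on the Batch-OMP implementation of Rubinstein et al. [20] provided by scikit-learn; the following is only a minimal NumPy sketch of the greedy selection loop described above, handling the complex-valued modes through the magnitude of the correlations (names and defaults are illustrative):

```python
import numpy as np

def omp_mode_selection(Phi, x1, K=10):
    """Greedy approximation of Eq. (12): at most K nonzero amplitudes."""
    Phi = np.asarray(Phi)
    x1 = np.asarray(x1, dtype=complex)
    n, k = Phi.shape
    beta = np.zeros(k, dtype=complex)
    residual = x1.copy()
    support = []
    for _ in range(min(K, k)):
        # select the mode most correlated with the current residual
        correlations = np.abs(Phi.conj().T @ residual)
        correlations[support] = 0.0            # do not reselect modes
        support.append(int(np.argmax(correlations)))
        # least-squares fit on the selected support, then update the residual
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], x1, rcond=None)
        residual = x1 - Phi[:, support] @ coeffs
    beta[support] = coeffs
    return beta

# Background model, Eq. (13):  x_bg = Phi @ omp_mode_selection(Phi, x1, K=10)
```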

3 Compressed DMD (cDMD)

Compressed DMD provides a computationally efficient framework to compute the dynamic mode decomposition on massively under-sampled or compressed data [7]. The method was originally devised to reconstruct high-dimensional, full-resolution DMD modes from sparse, spatially under-resolved measurements by leveraging compressed sensing. However, it was quickly realized that if full-state measurements are available, many of the computationally expensive steps in DMD may be computed on a compressed representation of the data, providing dramatic computational savings. The first approach, where DMD is computed on sparse measurements without access to full data, is referred to as compressed sensing DMD. The second approach, where DMD is accelerated using a combination of calculations on compressed data and full data, is referred to as compressed DMD (cDMD); this is depicted schematically in Fig. 3. For the applications explored in this work, we use compressed DMD, since full image data are available and reducing algorithm runtime is critical for real-time performance.

Fig. 3

Schematic of the compressed dynamic mode decomposition architecture. The data (video stream) are first compressed via left multiplication by a measurement matrix \({\mathbf {C}}\). DMD is then performed on the compressed representation of the data. Finally, the full DMD modes \(\varvec{\Phi }\) are reconstructed from the compressed modes \(\varvec{\Phi }_Y\) by the expression in Eq. (24)

3.1 Compressed sensing and matrix sketching

Compression algorithms are at the core of modern video, image and audio processing software such as MPEG, JPEG and MP3. The mathematical framework of compressed DMD builds on the theory of compressed sensing and matrix sketching.

Compressed sensing demonstrates that, instead of measuring the high-dimensional signal, or pixel-space representation of a single frame \(\mathbf{x}\), we can measure a low-dimensional subsample \(\mathbf{y}\) and approximate/reconstruct the full state \(\mathbf{x}\) from this significantly smaller measurement [22–24]. Specifically, compressed sensing assumes the data being measured are compressible in some basis, which is certainly the case for video. Thus, the video can be represented in a small number of elements of that basis, i.e., we only need to solve for the few nonzero coefficients in the transform basis. For instance, consider the measurements \(\mathbf{y}\in {\mathbb {R}}^p\), with \(k< p\ll n\):

$$\begin{aligned} \mathbf{y} = \mathbf{C}{} \mathbf{x}. \end{aligned}$$
(14)

If \(\mathbf{x}\) is sparse in \(\varvec{\Psi }\), then we may solve the underdetermined system of equations

$$\begin{aligned} \mathbf{y}=\mathbf{C}\varvec{\Psi }{} \mathbf{s} \end{aligned}$$
(15)

for \(\mathbf{s}\) and then reconstruct \(\mathbf{x}\). Since there are infinitely many solutions to this system of equations, we seek the sparsest solution \(\hat{{\mathbf {s}}}\). However, it is well known from the compressed sensing literature that solving for the sparsest solution formally involves an \(\ell _0\) optimization that is NP-hard. The success of compressed sensing lies in the fact that, under certain conditions on the measurement matrix \(\mathbf{C}\), the infeasible \(\ell _0\) optimization can be traded for a convex \(\ell _1\)-minimization [22]:

$$\begin{aligned} \hat{{\mathbf {s}}}=\mathop {\hbox {argmin}}_{\mathbf{s}'}\Vert \mathbf{s}'\Vert _1,\quad {\text{such that}}\,{} \mathbf{y}=\mathbf{C}\varvec{\Psi }{} \mathbf{s}'. \end{aligned}$$
(16)

Thus, the \(\ell _1\)-norm acts as a proxy for sparsity-promoting solutions \(\hat{{\mathbf {s}}}\). To guarantee that the compressed sensing architecture will almost certainly work in a probabilistic sense, the measurement matrix \(\mathbf{C}\) and sparse basis \(\varvec{\Psi }\) must be incoherent, meaning that the rows of \(\mathbf{C}\) are uncorrelated with the columns of \(\varvec{\Psi }\). This is discussed in more detail in [7]. Given that we are considering video frames, it is natural to use generic basis functions such as Fourier or wavelets to represent the sparse signal \(\mathbf{s}\). Indeed, wavelets are already the standard for image compression architectures such as JPEG-2000. The Fourier transform basis is particularly attractive for many engineering purposes, since single-pixel measurements are incoherent with respect to it, exciting broadband frequency content.
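As a toy illustration (not part of the cDMD pipeline), the sketch below recovers a signal that is sparse in a DCT basis from a few Gaussian measurements, using scikit-learn's Lasso as an \(\ell _1\) proxy for Eq. (16); all parameter values are illustrative:

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k = 512, 64, 5
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse coefficients
Psi = idct(np.eye(n), norm="ortho", axis=0)    # columns span the inverse-DCT basis
x = Psi @ s_true                               # compressible full-state signal
C = rng.normal(size=(p, n)) / np.sqrt(p)       # Gaussian measurement matrix, Eq. (14)
y = C @ x                                      # p << n measurements
s_hat = Lasso(alpha=1e-3, max_iter=50_000).fit(C @ Psi, y).coef_   # l1 proxy, Eq. (16)
x_hat = Psi @ s_hat                            # approximate reconstruction of x
```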

Matrix sketching is another prominent framework for obtaining a similar compressed representation of a massive data matrix [25, 26]. The advantages of this approach are its less restrictive assumptions and its straightforward generalization from vectors to matrices. Hence, Eq. (14) can be reformulated in matrix notation

$$\begin{aligned} \mathbf{Y} = \mathbf{C}{} \mathbf{X}, \end{aligned}$$
(17)

where again \(\mathbf{C}\) denotes a suitable measurement matrix. Matrix sketching comes with interesting error bounds and is applicable whenever the data matrix \(\mathbf{X}\) has low-rank structure. For instance, it has been successfully demonstrated that the singular values and right singular vectors can be approximated from such a compressed matrix representation [27].

3.2 Algorithm

The compressed DMD algorithm proceeds similarly to the standard DMD algorithm [28] at nearly every step, until the computation of the DMD modes. The key difference is that we first compute a compressed representation of the video sequence, as illustrated in Fig. 4. Hence, the algorithm starts by generating the measurement matrix \({\mathbf {C}}\in {\mathbb {R}}^{p\times n}\) in order to compress or sketch the data matrices as in Eq. (2):

$$\begin{aligned} {\mathbf {Y}} = {\mathbf {C}}{\mathbf {X}}, \quad {\mathbf {Y}}' = {\mathbf {C}}{\mathbf {X}}'. \end{aligned}$$
(18)

Here p denotes the number of samples or measurements. A fundamental assumption is that the input data are low-rank. This is satisfied for video data, because each of the columns of \(\mathbf{X}\) and \(\mathbf{X}' \in {\mathbb {R}}^{n\times (m-1)}\) is sparse in some transform basis \(\varvec{\Psi }\). Thus, for sufficiently many incoherent measurements, the compressed matrices \({\mathbf {Y}}\) and \({\mathbf {Y}}' \in {\mathbb {R}}^{p\times (m-1)}\) have correlation structures similar to their high-dimensional counterparts. Compressed DMD then approximates the eigenvalues and eigenvectors of the linear map \({\mathbf {A}}_{\mathbf {Y}}\), where the estimator is defined as:

$$\begin{aligned} \hat{{\mathbf {A}}}_{{\mathbf {Y}}}&={\mathbf {Y}}'{\mathbf {Y}}^\dagger \end{aligned}$$
(19a)
$$\begin{aligned}&= {\mathbf {Y}}'{{\mathbf {V}}}_{{\mathbf {Y}}}{} \mathbf{S}_{{\mathbf {Y}}}^{-1}{\mathbf {U}}_{\mathbf {Y}}^*, \end{aligned}$$
(19b)

where \(*\) denotes the conjugate transpose. The pseudo-inverse \({\mathbf {Y}}^\dagger\) is computed using the SVD:

$$\begin{aligned} {\mathbf {Y}}={\mathbf {U}}_{\mathbf {Y}}\mathbf{S}_{{\mathbf {Y}}}{{\mathbf {V}}}_{{\mathbf {Y}}}^*, \end{aligned}$$
(20)

where the matrices \(\mathbf{U}_{\mathbf {Y}}\in {{\mathbb {R}}}^{p\times k}\) and \(\mathbf{V}_{\mathbf {Y}}\in {{\mathbb {R}}}^{(m-1)\times k}\) are the truncated left and right singular vectors, and the diagonal matrix \(\mathbf{S}_{\mathbf {Y}}\in {{\mathbb {R}}}^{k\times k}\) has the corresponding singular values as entries. Here k is the target rank of the truncated SVD approximation to \(\mathbf{Y}\). The subscript \({\mathbf {Y}}\) explicitly denotes computations involving the compressed data \({\mathbf {Y}}\). As in the standard DMD algorithm, we typically do not compute the large matrix \(\hat{{\mathbf {A}}}_{\mathbf {Y}}\), but instead compute the low-dimensional model projected onto the left singular vectors:

$$\begin{aligned} \tilde{{\mathbf {A}}}_{{\mathbf {Y}}}&= {\mathbf {U}}_{\mathbf {Y}}^*\hat{{\mathbf {A}}_Y}{\mathbf {U}}_{\mathbf {Y}} \end{aligned}$$
(21a)
$$\begin{aligned}&= {\mathbf {U}}_{\mathbf {Y}}^*{\mathbf {Y}}'{{\mathbf {V}}}_{{\mathbf {Y}}}{} \mathbf{S}_{{\mathbf {Y}}}^{-1}. \end{aligned}$$
(21b)

Since this is a similarity transform, the eigenvectors and eigenvalues can be obtained from the eigendecomposition of \(\tilde{{\mathbf {A}}}_{{\mathbf {Y}}}\)

$$\begin{aligned} \tilde{{\mathbf {A}}}_{{\mathbf {Y}}}{{\mathbf {W}}}_{{\mathbf {Y}}} = {{\mathbf {W}}}_{{\mathbf {Y}}}\varvec{\varLambda }_{\mathbf {Y}}, \end{aligned}$$
(22)

where columns of \(\mathbf{W_Y}\) are eigenvectors \(\mathbf{\phi }_j\) and \(\varvec{\varLambda }_{\varvec{Y}}\) is a diagonal matrix containing the corresponding eigenvalues \(\lambda _j\). The similarity transform implies that \(\varvec{\varLambda } \approx \varvec{\varLambda }_{\varvec{Y}}\). The compressed DMD modes are consequently given by

$$\begin{aligned} \varvec{\Phi }_{{\mathbf {Y}}} = {\mathbf {Y}}'{{\mathbf {V}}}_{{\mathbf {Y}}}\mathbf{S}_{{\mathbf {Y}}}^{-1}{{\mathbf {W}}}_{{\mathbf {Y}}}. \end{aligned}$$
(23)

Finally, the full DMD modes are recovered using

$$\begin{aligned} \varvec{\Phi } = {\mathbf {X}}'{{\mathbf {V}}}_{{\mathbf {Y}}}\mathbf{S}_{{\mathbf {Y}}}^{-1}{{\mathbf {W}}}_{{\mathbf {Y}}}. \end{aligned}$$
(24)

Note that the compressed DMD modes in Eq. (24) make use of the full data \({\mathbf {X}}'\) as well as the linear transformations obtained using the compressed data \({\mathbf {Y}}\) and \({\mathbf {Y}}'\). The expensive SVD on \({\mathbf {X}}\) is bypassed, and it is instead performed on \({\mathbf {Y}}\). Depending on the compression ratio, this may provide significant computational savings. The computational steps are summarized in Algorithm 1, and further numerical details are presented in [7].
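A compact sketch of these steps (assuming NumPy and single-pixel measurements; the function name and defaults are illustrative, and Algorithm 1 in [7] contains the full listing) is:

```python
import numpy as np

def compressed_dmd(X, Xp, p=1000, k=None, seed=0):
    """Compressed DMD sketch: X, Xp are the (n, m-1) snapshot matrices of Eq. (2)."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    rows = rng.choice(n, size=min(p, n), replace=False)   # single-pixel measurements
    Y, Yp = X[rows, :], Xp[rows, :]                       # Eq. (18)

    U, s, Vh = np.linalg.svd(Y, full_matrices=False)      # Eq. (20)
    if k is not None:                                     # optional rank truncation
        U, s, Vh = U[:, :k], s[:k], Vh[:k, :]

    Atilde = (U.conj().T @ Yp @ Vh.conj().T) / s          # Eq. (21b)
    lam, W = np.linalg.eig(Atilde)                        # Eq. (22)
    Phi = Xp @ (Vh.conj().T / s) @ W                      # full modes, Eq. (24)
    b, *_ = np.linalg.lstsq(Phi, X[:, 0], rcond=None)     # amplitudes, Eq. (9)
    return Phi, lam, b
```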

Remark 1

The computational performance depends heavily on the measurement matrix used to construct the compressed matrix, as described in the next section. For a practical implementation, sparse or single-pixel measurements (random row selection) are favored.

Fig. 4

Video compression using a sparse measurement matrix. The compressed matrix faithfully captures the essential spectral information of the video

Algorithm 1 Compressed dynamic mode decomposition (cDMD)

Remark 2

One alternative to the predefined target-rank k is the recent hard-thresholding algorithm of Gavish and Donoho [29]. This method can be combined with step 4 to automatically determine the optimal target-rank.

Remark 3

As described in Sect. 2.3, step 9 can be replaced by the orthogonal matching pursuit algorithm in order to obtain a sparsity-constrained solution: \({\mathbf {b}} = \texttt {omp}(\varvec{\Phi },{\mathbf {x}}_{1})\). Computing the OMP solution is in general extremely fast, but for high-resolution video streams this step can become computationally expensive. However, instead of computing the amplitudes based on the full-state dynamic modes \({\varvec{\Phi }}\), the compressed DMD modes \({\varvec{\Phi }_Y}\) can be used. Hence, Eq. (12) can be reformulated as

$$\begin{aligned} \hat{\varvec{\beta }}=\mathop {\hbox {argmin}}_\mathbf{\varvec{\beta }} \Vert \mathbf{y}_1 - {\varvec{\Phi }_Y}{} \mathbf{\varvec{\beta }}\Vert _{F}^{2} \quad \text{ such that } \quad \Vert \mathbf{\varvec{\beta }}\Vert _0 < K, \end{aligned}$$
(25)

where \(\mathbf{y}_1\) is the first compressed video frame. Then step 9 can be replaced by: \({\mathbf {b}} = \texttt {omp}(\varvec{\Phi }_{{\mathbf {Y}}},{\mathbf {y}}_{1})\).

3.3 Measurement matrices

A basic measurement matrix \({\mathbf {C}}\) can be constructed by drawing \(p\times n\) independent random samples from a Gaussian, uniform or sub-Gaussian (e.g., Bernoulli) distribution. It can be shown that these measurement matrices have optimal theoretical properties; however, for practical large-scale applications they are often not feasible. This is because generating a large number of random numbers can be expensive, and computing Eq. (18) using unstructured dense matrices has a time complexity of O(pnm). From a computational perspective, it is favorable to build a structured random sensing matrix that is memory efficient and enables fast matrix-matrix multiplications. For instance, Woolfe et al. [30] showed that the costs can be reduced to O(log(p)nm) using a subsampled random Fourier transform (SRFT) sensing matrix

$$\begin{aligned} {\mathbf {C}} = {\mathbf {R}} {\mathbf {F}} {\mathbf {D}}, \end{aligned}$$
(26)

where \({\mathbf {R}}\in {\mathbb {C}}^{p\times n}\) draws p random rows (without replacement) from the identity matrix \({\mathbf {I}}\in {\mathbb {C}}^{n\times n}\), \({\mathbf {F}}\in {\mathbb {C}}^{n\times n}\) is the unnormalized discrete Fourier transform with entries \({\mathbf {F}}(j,k)=\exp (-2\pi i(j-1)(k-1)/n)\), and \({\mathbf {D}}\in {\mathbb {C}}^{n\times n}\) is a diagonal matrix with independent random diagonal elements uniformly distributed on the complex unit circle. While the SRFT sensing matrix has nice theoretical properties, the improvement from O(pnm) to O(log(p)nm) is not necessarily significant. In practice, it is often sufficient to construct even simpler sensing matrices. An interesting approach that makes the matrix-matrix multiplication in Eq. (18) redundant altogether is to use single-pixel measurements (random row selection)

$$\begin{aligned} {\mathbf {C}} = {\mathbf {R}}. \end{aligned}$$
(27)

In a practical implementation, this allows the compressed matrix \({\mathbf {Y}}\) to be constructed by choosing p random rows of \({\mathbf {X}}\) without replacement. Hence, only p random numbers need to be generated, and no memory is required for storing a sensing matrix \({\mathbf {C}}\). A different approach is the method of sparse random projections [31]. The idea is to construct a sensing matrix \({\mathbf {C}}\) with independent and identically distributed entries as follows

$$\begin{aligned} c_{ij} = \left\{ \begin{array}{rl} +1 &{} \quad \hbox {with prob.}\ \frac{1}{2s}, \\ 0 &{} \quad \hbox {with prob.}\ 1-\frac{1}{s}, \\ -1 &{} \quad \hbox {with prob.}\ \frac{1}{2s}, \end{array} \right. \end{aligned}$$
(28)

where the parameter s controls the sparsity. While Achlioptas [31] proposed the values \(s=1,2\), Li et al. [32] showed that even very sparse (aggressive) sampling rates such as \(s={n}/{\log (n)}\) achieve accurate results. Modern sparse matrix packages allow rapid execution of Eq. (18).
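A minimal SciPy sketch of such a very sparse sensing matrix (note that scipy.sparse.random places a fixed number of nonzero entries rather than sampling each entry independently, which is an adequate approximation here; names and defaults are illustrative) is:

```python
import numpy as np
from scipy import sparse

def sparse_sensing_matrix(p, n, s=None, seed=0):
    """Very sparse {+1, 0, -1} sensing matrix in the spirit of Eq. (28)."""
    rng = np.random.default_rng(seed)
    s = n / np.log(n) if s is None else s      # aggressive sparsity as in [32]
    return sparse.random(p, n, density=1.0 / s, format="csr", random_state=seed,
                         data_rvs=lambda size: rng.choice([-1.0, 1.0], size=size))

# Single-pixel measurements, Eq. (27): select p random rows of X directly, e.g.
# rows = np.random.default_rng(0).choice(X.shape[0], p, replace=False); Y = X[rows, :]
```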

3.4 GPU-accelerated implementation

While most current desktop computers allow multithreading and also multiprocessing, using a graphics processing unit (GPU) enables massively parallel processing. The paradigm of parallel computing becomes increasingly important as data volumes grow while CPU clock speeds stagnate. The architecture of a modern CPU and GPU is illustrated in Fig. 5. The key difference between these architectures is that the CPU consists of a few arithmetic logic units (ALUs) and is highly optimized for low-latency access to cached data sets, while the GPU is optimized for data-parallel, throughput computations. This is achieved by a large number of small arithmetic logic units. Traditionally, this architecture was designed for the real-time creation of high-definition 2D/3D graphics. However, NVIDIA's programming model for parallel computing, CUDA, opens up the GPU as a general parallel computing device [33]. Using high-performance linear algebra libraries, e.g., CULA [34], can substantially accelerate computations compared to CPU implementations. Take for instance the multiplication of two \(n\times n\) square matrices, illustrated in Fig. 6. The computation involves the evaluation of \(n^{2}\) dot products. The data parallelism therein is that each dot product can be computed independently. With enough ALUs, the computational time can be substantially reduced. This parallelism applies readily to the generation of random numbers and many other linear algebra routines.

Fig. 5

Illustration of the CPU and GPU architecture. a CPU. b GPU

Fig. 6

Illustration of the data parallelism in matrix-matrix multiplications

Relatively few GPU-accelerated background subtraction methods have been proposed [11, 35, 36]. The authors achieve considerable speedups compared to the corresponding CPU implementations. However, the proposed methods barely exceed 25 frames per second for high-definition videos. This is mainly because many statistical methods do not fully benefit from the GPU architecture. In contrast, linear algebra-based methods can benefit substantially from parallel computing. An analysis of Algorithm 1 reveals that the generation of random numbers in line 2 and the dot products in lines 3, 6 and 8 are particularly suitable for parallel processing. The computation of the deterministic SVD, the eigenvalue decomposition and the least-squares solver can also benefit from the GPU architecture. Overall, the GPU-accelerated DMD implementation is substantially faster than the MKL (Intel Math Kernel Library) accelerated routine. The disadvantage of current GPUs is the rather limited bandwidth, i.e., the amount of data that can be exchanged per unit of time, between CPU and GPU memory. However, this overhead can be mitigated using asynchronous memory operations.
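As a minimal illustration (assuming the CuPy library as a stand-in for the CUDA/CULA routines used in our implementation), the compression step of Eq. (18) can be offloaded to the GPU as follows:

```python
import cupy as cp

def compress_on_gpu(C_host, X_host):
    """Compute Y = C X on the GPU and return the result to host memory."""
    C, X = cp.asarray(C_host), cp.asarray(X_host)  # host -> device transfer
    Y = C @ X                                      # data-parallel matrix product on the GPU
    return cp.asnumpy(Y)                           # device -> host transfer
```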

4 Results

In this section, we evaluate the computational performance and the suitability of compressed DMD for background modeling. To evaluate the detection performance, a foreground mask \({\mathcal {X}}\) is computed by thresholding the difference between the true frame and the reconstructed background. A standard method is to use the Euclidean distance, leading to the following binary classification problem

$$\begin{aligned} {\mathcal {X}}_{t}(j) = \left\{ \begin{array}{ll} 1 &{} \quad \hbox {if}\ \Vert x_{jt}-\hat{x}_{j}\Vert > \tau , \\ 0 &{} \quad \text {otherwise,} \end{array} \right. \end{aligned}$$
(29)

where \(x_{jt}\) denotes the jth pixel of the tth video frame and \(\hat{x}_{j}\) denotes the corresponding pixel of the modeled background. Pixels belonging to foreground objects are set to 1, and to 0 otherwise. Access to the true foreground mask allows the computation of several statistical measures. Common evaluation measures in the background subtraction literature are recall, precision and the F-measure. While recall measures the ability to correctly detect pixels belonging to moving objects, precision measures how many of the predicted foreground pixels are actually correct, i.e., it reflects the false alarm rate. The F-measure combines both measures by their harmonic mean. A workstation (Intel Xeon CPU E5-2620 2.4GHz, 32GB DDR3 memory and NVIDIA GeForce GTX 970) was used for all of the following computations.
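A minimal NumPy sketch of the foreground mask of Eq. (29) and the resulting evaluation measures (the threshold tau and all names are illustrative) is:

```python
import numpy as np

def foreground_mask(X, x_bg, tau):
    """Binary mask of Eq. (29): pixels deviating from the background by more than tau."""
    return (np.abs(X - x_bg[:, None]) > tau).astype(int)

def recall_precision_f(pred, truth):
    """Recall, precision and F-measure for binary foreground masks."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return recall, precision, f_measure
```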

4.1 Evaluation on real videos

We have evaluated the performance of compressed DMD for background modeling using the CD (ChangeDetection.net) and BMC (Background Models Challenge) benchmark datasets [37, 38]. Figure 7 illustrates the nine real videos of the latter dataset, which pose many common challenges faced in outdoor video surveillance scenarios. Mainly, the following complex situations are encountered:

  • Illumination changes: Gradual illumination changes caused by fog or sun.

  • Low illumination: Bad light conditions, e.g., night videos.

  • Bad weather: Noise (small objects) introduced by weather conditions, e.g., snow or rain.

  • Dynamic backgrounds: Moving objects belonging to the background, e.g., waving trees or clouds.

  • Sleeping foreground objects: Former foreground objects that become motionless and start moving again at a later point in time.

Fig. 7

BMC dataset: example frames of the nine real videos

Fig. 8

The F-measure for varying thresholds indicates the superior background modeling performance of the sparsity-promoting compressed DMD algorithm. In particular, the performance gain (over using the zero mode only) is substantial for the dynamic background scenes ‘Canoe’ and ‘Fountain02’. a Highway. b Blizzard. c Canoe. d Fountain02. e Park. f Library

Evaluation settings In order to obtain reproducible results, the following settings have been used. For a given video sequence, the low-rank dynamic mode decomposition is computed using a very sparse measurement matrix with sparsity factor \(s=n/\log (n)\) and \(p=1000\) measurements. While we use a fixed number of samples here, the choice can be guided by the formula \(p > k\cdot \log (n/k)\). The target rank k is automatically determined via the optimal hard threshold for singular values [29]. Once the dynamic mode decomposition is obtained, the optimal set of modes is selected using the orthogonal matching pursuit method. In general, the use of \(K=10\) nonzero entries achieves good results. Instead of using a predefined value for K, cross-validation can be used to determine the optimal number of nonzero entries. Further, the dynamic mode decomposition as presented here is formulated as a batch algorithm, in which a given long video sequence is split into batches of 200 consecutive frames. The decomposition is then computed for each batch independently.
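A minimal sketch of this batch formulation (reusing the snapshot matrix and the compressed_dmd sketch from Sect. 3.2; names are illustrative) is:

```python
def batches(D, batch_size=200):
    """Yield consecutive column batches (here, 200 frames each) of the snapshot matrix D."""
    for start in range(0, D.shape[1], batch_size):
        yield D[:, start:start + batch_size]

# The background model is computed independently for each batch, e.g.
# for B in batches(D): Phi, lam, b = compressed_dmd(B[:, :-1], B[:, 1:])
```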

The CD dataset First, six CD video sequences are used to contextualize the background modeling quality of the sparse coding approach. This is compared to using the zero (static background) mode only. Figure 8 shows the evaluation results of one batch by plotting the F-measure against the threshold used for background classification. In five out of six examples, the sparse coding approach (cDMD, k = opt) dominates. In particular, significant improvements are achieved for the dynamic background video sequences ‘Canoe’ and ‘Fountain02’. Only in the case of the ‘Park’ video sequence does the method tend to overfit. Interestingly, the performance of the compressed algorithm is overall slightly better than that of the exact DMD algorithm. This is due to the implicit regularization of randomized algorithms [39, 40].

The BMC dataset In order to compare the cDMD algorithm with other RPCA algorithms, the BMC dataset has been used. Table 1 shows the evaluation results computed with the BMC wizard for all nine videos. An individual threshold value has been selected for each video to compute the foreground mask. For comparison, the evaluation results of three other RPCA methods are shown [16]. Overall, cDMD achieves an average F-measure of about 0.648. This is slightly better than the performance of GoDec [41] and nearly as good as LSADM [42]. However, it is lower than the F-measure achieved with the RSL method [43]. Figure 9 presents visual results for example frames across five videos. The last row shows the smoothed (median-filtered) foreground mask.

Fig. 9

Visual evaluation results for five example frames corresponding to the BMC videos: 002, 003, 006, 007 and 009. The top row shows the original grayscale images (moving objects are highlighted). The second row shows the difference between the reconstructed cDMD background and the original frame. Row three shows the thresholded foreground mask, and row four shows the additionally median-filtered foreground mask

Discussion The results reveal some of the strengths and limitations of the compressed DMD algorithm. First, because cDMD is presented here as a batch algorithm, detecting sleeping foreground objects as they occur in video 001 is difficult. Another weakness is the limited capability of dealing with non-periodic dynamic backgrounds, e.g., large waving trees and moving clouds as occurring in videos 001, 005, 008 and 009. On the other hand, good results are achieved for videos 002, 003, 004 and 007, showing that DMD can deal with large moving objects and low-illumination conditions. The integration of compressed DMD into a video surveillance system can overcome some of these initial issues. Hence, instead of discarding the previously modeled background frames, a background maintenance framework can be used to incrementally update the model. In particular, this allows the method to deal better with sleeping foreground objects. Further, simple post-processing techniques (e.g., median filtering or morphological transformations) can substantially reduce the false positive rate.

Table 1 Evaluation results of nine real videos from the BMC dataset

4.2 Computational performance

Figure 10 shows the frames-per-second (fps) rate and the F-measure for a varying number of samples p and for different measurement matrices. Gaussian measurements achieve the best accuracy in terms of the F-measure, but the computational costs become increasingly expensive. Single-pixel measurements (sPixel) are the most computationally efficient. Their primary advantages are memory efficiency and a simple implementation. Sparse sensing matrices offer the best trade-off between computational time and accuracy, but require access to sparse matrix packages.

It is important to stress that randomized sensing matrices cause random fluctuations that influence the background model quality, as illustrated in Fig. 11. The bootstrap confidence intervals show that sparse measurements have lower dispersion than single-pixel measurements. This is because single-pixel measurements discard more information than sparse and Gaussian sensing matrices.

Figure 12 shows the average frames-per-second (fps) rate required to obtain the foreground mask for varying video resolutions. The results illustrate the substantial computational advantage of the cDMD algorithm over the standard DMD. The computational savings are mainly achieved by avoiding the expensive computation of the singular value decomposition. Specifically, the compression step reduces the time complexity from O(knm) to O(kpm). The computation of the full modes \(\varvec{\Phi }\) in Eq. (24) remains the only computationally expensive step of the algorithm. However, this step is embarrassingly parallel, and the computational time can be further reduced using a GPU-accelerated implementation. The decomposition of an HD \(1280\times 720\) video feed using the GPU-accelerated implementation achieves speedups of about 4 and 21 compared to the corresponding CPU cDMD and (exact) DMD implementations, respectively. The speedup of the GPU implementation can be increased even further using sparse or single-pixel (sPixel) measurement matrices.

Fig. 10

Algorithm runtime (excluding computation of the foreground mask) and accuracy for a varying number of samples p. Here a \(720\times 480\) video sequence with 200 frames is used

Fig. 11

Bootstrap \(95\%\)-confidence intervals of the F-measure computed using both sparse and single-pixel measurements

Fig. 12

CPU and GPU algorithm runtimes (including the computation of the foreground mask) for varying video resolutions (200 frames). The optimal target rank is automatically determined, and \(p=1000\) samples are used

5 Conclusion and outlook

We have introduced the compressed dynamic mode decomposition as a novel algorithm for video background modeling. Although many techniques have been developed over the last decade and a half to accomplish this task, significant challenges remain for the computer vision community when fast processing of high-definition video is required. Indeed, real-time HD video analysis remains one of the grand challenges of the field. Our cDMD method provides compelling evidence that it is a viable candidate for meeting this grand challenge, even on standard CPU computing platforms. The frames-per-second rate is highly competitive compared to other state-of-the-art algorithms, e.g., Gaussian mixture-based algorithms [9–11]. Compared to current robust principal component analysis-based algorithms, the increase in speed is even more substantial. In particular, the GPU-accelerated implementation substantially improves the computational time.

Despite the significant computational savings, cDMD remains competitive with other leading algorithms in the quality of the decomposition itself. Our results show that, for both standard and challenging environments, cDMD's background subtraction accuracy in terms of the F-measure is competitive with leading RPCA-based algorithms [16]. However, the algorithm cannot compete, in terms of the F-measure, with highly specialized algorithms, e.g., optimized Gaussian mixture-based algorithms for background modeling [2]. The main difficulties arise when video feeds are heavily crowded or dominated by non-periodic dynamic background objects. Overall, the trade-off between speed and accuracy of compressed DMD is compelling.

Future work will aim to improve the background subtraction quality as well as to integrate a number of innovative techniques. One technique that is particularly useful for object tracking is the multi-resolution DMD [44]. This algorithm has been shown to be a potential method for target tracking applications. Thus, one can envision the integration of multi-resolution ideas with cDMD, i.e., a multi-resolution compressed DMD method, in order to separate the foreground video into different dynamic targets when necessary.