The Sparse Dynamic Factor Model: A Regularised Quasi-Maximum Likelihood Approach

The concepts of sparsity, and regularised estimation, have proven useful in many high-dimensional statistical applications. Dynamic factor models (DFMs) provide a parsimonious approach to modelling high-dimensional time series, however, it is often hard to interpret the meaning of the latent factors. This paper formally introduces a class of sparse DFMs whereby the loading matrices are constrained to have few non-zero entries, thus increasing interpretability of factors. We present a regularised M-estimator for the model parameters, and construct an efficient expectation maximisation algorithm to enable estimation. Synthetic experiments demonstrate consistency in terms of estimating the loading structure, and superior predictive performance where a low-rank factor structure may be appropriate. The utility of the method is further illustrated in an application forecasting electricity consumption across a large set of smart meters.


Introduction
Originally formalised by Geweke [1977], the premise of the Dynamic Factor Model (DFM) is to assume that the common dynamics of a large number of stationary zero-mean time series X_t = (X_{1,t}, . . ., X_{p,t})' stem from a relatively small number of unobserved (latent) factors F_t = (F_{1,t}, . . ., F_{r,t})', where r ≪ p, through the linear system

X_t = ΛF_t + ε_t,   (1)

for observations t = 1, . . ., n. The matrix Λ provides a direct link between each factor in F_t and each variable in X_t. The larger the loading |Λ_{i,j}| for variable i and factor j, the more correlated this variable is with the factor. The common component χ_t = ΛF_t captures the variability in the time-series variables that is due to the common factors, while the idiosyncratic errors ε_t = (ε_{1,t}, . . ., ε_{p,t})' capture the features that are specific to individual series, such as measurement error. What makes the factor model in (1) a dynamic factor model is the assumption that the factors, and possibly the idiosyncratic errors, may be temporally dependent, i.e., are time series themselves. Arguably, it was the application of Sargent et al. [1977], showing how just two dynamic factors were able to explain the majority of variance in headline US macroeconomic variables, that initiated the DFM's popularity. The DFM is nowadays ubiquitous within the economic statistics community, with applications in nowcasting/forecasting [Giannone et al., 2008, Banbura et al., 2010, Foroni and Marcellino, 2014], constructing economic indicators [Mariano and Murasawa, 2010, Grassi et al., 2015], and counterfactual analysis [Harvey, 1996, Luciani, 2015]. Examples in other domains include psychology [Molenaar, 1985, Fisher, 2015], the energy sector [Wu et al., 2013, Lee and Baldick, 2016] and many more; see Stock and Watson [2011] and Poncela et al. [2021] for detailed surveys of the literature.
The DFM can be used in an exploratory (inferential) setting as well as in a predictive (forecasting) mode. In the former, its use is analogous to how one might apply Principal Component Analysis (PCA) to understand the directions of maximum variation in a dataset; unlike PCA, however, the DFM describes not just the cross-correlation structure but also the autocovariance. The loadings matrix Λ is usually used to assess how one should interpret a given (estimated) factor. Unfortunately, as in PCA, the interpretation of the factors in a traditional DFM is blurred because all variables are loaded onto all factors.

Our Contributions
This paper seeks to bring modern tools from sparse modelling and regularised estimation to bear on the DFM. Specifically, we formalise a class of sparse factor models whereby only a subset of the factors will be active for a given variable; that is, we assume the matrix Λ is sparse. Unlike regular sparse-PCA approaches, we take a penalised likelihood estimation approach and, noting that the likelihood is incomplete, we suggest a novel EM algorithm to perform estimation. The algorithms developed are computationally efficient, and give users a new method for imposing weakly informative (sparse) structural priors on the factor model. The data-driven estimation of the loadings support contrasts with the hard constraints that are more traditional in the use of DFMs.
The analysis within this paper is empirical in nature; we consider three aspects: a) how our EM algorithm performs in recovering the true sparsity pattern in the factor loadings; b) how the model contrasts with alternative models in a predictive setting, e.g. where we want to forecast either all p time series, or just a subset of these; and c) how the model and estimation routine can be used in practice to extract insights from complex real-world datasets. The first two points are illustrated through extensive synthetic experiments, whilst for the latter we give an example application to a set of smart-meter data from across our University campus. To our knowledge this is the first time a DFM has been used to study building-level energy data, and the application illustrates some of the benefits that imposing sparsity brings in terms of increasing the interpretability of the model.

Background and Related Work
Canonically, the dynamics of the latent factors in the DFM are specified as a stationary VAR(1) model:

F_t = AF_{t−1} + u_t,   (2)

where u_t is a zero-mean series of disturbances with covariance matrix Σ_u. Furthermore, the idiosyncratic errors ε_t in (1) are commonly assumed to be zero-mean and cross-sectionally uncorrelated, meaning their covariance matrix, which we denote Σ_ε, is diagonal; models with these assumptions are termed exact. However, as shown by Doz et al. [2011], even when this assumption is relaxed and the idiosyncratic errors are (weakly) cross-correlated, referred to as an approximate DFM, consistent estimation of the factors is possible as (n, p) → ∞. Therefore, the 'curse of dimensionality', often a burden for analysing time-series models, can actually be beneficial in DFMs.

Estimation
The measurement equation (1) along with the state equation (2) form a state-space model. A simple approach to estimating the factor loadings is to consider the first r eigenvectors of the sample covariance matrix of X, essentially applying PCA to the time series. This approach has been extensively reviewed in the literature [Stock and Watson, 2002, Bai, 2003, Doz and Fuleky, 2020]. When mild conditions are placed on the correlation structure of the idiosyncratic errors, the PCA estimator is the optimal non-parametric estimator for a large approximate DFM. Under the even tighter condition of spherical idiosyncratic components, i.e. that they are i.i.d. Gaussian, the PCA estimator is equivalent to the maximum likelihood estimator [Doz and Fuleky, 2020]. The problem with using non-parametric PCA methods to estimate the loading structure is that they take no account of the dynamics of the factors or idiosyncratic components. In particular, there is no feedback from the estimation of the state equation (2) to the measurement equation (1). For this reason, it is preferable to use parametric methods that are able to account for temporal dependencies in the system. An alternative approach is proposed in Giannone et al. [2008], whereby initial estimates of the factors and loadings are derived from PCA and the VAR(1) parameters are estimated from these preliminary factors, before the factor estimates are updated using Kalman smoothing. This two-stage approach has been theoretically analysed in Doz et al.
[2011] and successfully applied to nowcasting in many national statistical institutes and central banks. The Kalman smoothing step in particular is very helpful for handling missing data, whether backcasting missing data at the start of the sample, forecasting missing data at the end of the sample, or interpolating arbitrary patterns of missing data throughout the sample. Bańbura and Modugno [2014] build on the DFM representation of Watson and Engle [1983] and adopt an expectation-maximisation (EM) algorithm to estimate the system (1)-(2) with a quasi-maximum likelihood estimation (QMLE) approach. Doz et al. [2012], Bai and Li [2016], and Barigozzi and Luciani [2022] provide theoretical results whereby, as (n, p) → ∞, the QMLE estimates (based on an exact Gaussian DFM) are consistent under milder assumptions allowing for correlated idiosyncratic errors. The EM approach to estimation is beneficial as it allows feedback between the estimation of the factors and the loadings, and can thus handle arbitrary patterns of missing data.
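As a concrete sketch of the preliminary PCA step discussed above (an illustration in Python, not the paper's implementation), the loadings can be taken as the leading eigenvectors of the sample covariance matrix and the factors as the corresponding principal components:

```python
import numpy as np

def pca_estimate(X, r):
    """Preliminary estimates of loadings and factors by PCA.

    X : (n, p) array of standardised observations.
    r : number of factors.
    Returns (Lambda_hat, F_hat) with shapes (p, r) and (n, r).
    """
    n, p = X.shape
    S = X.T @ X / n                      # sample covariance (data standardised)
    vals, vecs = np.linalg.eigh(S)       # eigh returns eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:r]   # take the r leading eigenvectors
    Lambda_hat = vecs[:, order]          # loadings = leading eigenvectors
    F_hat = X @ Lambda_hat               # factors = principal components
    return Lambda_hat, F_hat

# Hypothetical demonstration data: a 2-factor structure plus small noise.
rng = np.random.default_rng(0)
n, p, r = 200, 30, 2
Lam = rng.normal(size=(p, r))
F = rng.normal(size=(n, r))
X = F @ Lam.T + 0.1 * rng.normal(size=(n, p))
L_hat, F_hat = pca_estimate(X, r)
```

These preliminary estimates are exactly the kind of quantities the two-stage approach then refines via the Kalman smoother.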

Relation to Current Work
In the literature, the idea of a sparse DFM is not new. A classic approach is to use factor rotations that aim to minimise the complexity of the factor loadings, making the structure simpler to interpret. See Kaiser [1958] for the well-established varimax rotation, and Carroll [1953] and Jennrich and Sampson [1966] for the well-established quartimin rotation. For a recent discussion paper on the varimax rotation see Rohe and Zeng [2020]. An alternative approach, based on LASSO regularisation, is to use sparse principal component analysis (SPCA) [Zou et al., 2006] in place of regular PCA on the sample covariance matrix in the preliminary estimation of factors and loadings, i.e. in stage one of the two-stage approach of Giannone et al. [2008]. For factor modelling, it has been used by Croux and Exterkate [2011] in a typical macroeconomic forecasting setting, where they consider a robustified version. Kristensen [2017] uses SPCA to estimate diffusion indexes with sparse loadings. Despois and Doz [2022] prove that SPCA consistently estimates the factors in an approximate factor model if the ℓ1 penalty is of order O(1/√p). They also compare SPCA with factor rotation methods and show improved performance when the true loadings structure is sparse. Unlike previous research, our methodology implements regularisation within an EM algorithm framework, allowing us to robustly handle arbitrary patterns of missing data, model temporal dependence in the processes, and impose weakly informative (sparse) prior knowledge on the factor loadings. We argue that in settings where autocorrelation is moderately persistent, the feedback provided through our EM procedure is important both in aiding recovery of the factor loadings and in producing accurate forecasts.
The rest of the paper is structured as follows. In Section 3 we formalise our DFM model and the sparsity assumptions placed on the loading matrices. Section 4 presents a regularised likelihood estimator for the model parameters, and introduces an EM algorithm to enable finding feasible estimates. Section 5 discusses how we implement the method using the R package sparseDFM [Mosley et al., 2023]. Numerical results, including simulation studies and real data analysis, are presented in Sections 6 and 7, respectively. The paper concludes with a discussion of the results, and of how the models and estimators can be further generalised to provide flexibility to users.
The Sparse DFM
Consider the p-variate time series {X_t} and r factors {F_t} related according to the model

X_t = ΛF_t + ε_t,   F_t = AF_{t−1} + u_t,   (3)

where {ε_t} and {u_t} are multivariate white noise processes. For simplicity we assume E[ε_t ε_t'] = Σ_ε = diag(σ²), where σ² ∈ R^p_+ is a vector of idiosyncratic variances. Similarly, let E[u_t u_t'] = Σ_u, and assume the eigenvalues of the VAR transition matrix A are bounded in modulus below one, so that the latent process is stationary. This model corresponds to an exact DFM, where all the temporal dependence is modelled via the latent factors.
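A minimal simulation from an exact DFM of this form can make the setup concrete. The sketch below is illustrative only; the parameter values are our own assumptions, not those used elsewhere in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, r = 500, 20, 2

A = np.array([[0.7, 0.0],
              [0.0, 0.5]])             # stable VAR(1): eigenvalues inside the unit circle
Sigma_u = np.eye(r)                    # state disturbance covariance
Lam = rng.normal(size=(p, r))          # loadings (dense here; sparsity comes later)
sigma2 = np.full(p, 0.5)               # idiosyncratic variances (exact DFM: diagonal)

F = np.zeros((n, r))
X = np.zeros((n, p))
f = np.zeros(r)
for t in range(n):
    f = A @ f + rng.multivariate_normal(np.zeros(r), Sigma_u)   # state equation
    F[t] = f
    X[t] = Lam @ f + rng.normal(0.0, np.sqrt(sigma2))           # measurement equation
```

All temporal dependence in X enters through the factors F, matching the exact-DFM assumption above.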
In this context, our notion of sparsity relates to the assumption that many of the entries of Λ_0 will be zero. For instance, let the support of the kth column of the loading matrix be denoted S_k := supp(Λ_{•,k}) ⊆ {1, . . ., p}, with cardinality s_k := |S_k|. We refer to a DFM as being sparse if s_k < p for some or all of the k = 1, . . ., r factors. In practice, this is an assumption that the observed series are driven by only a few (r) latent factors, and that for many series only a subset of the factors will be relevant.

Consistency and Pervasiveness
In the sparse situation, whereby s_k < p, each factor can model only a subset of the observations. To enable us to model all p variables and gain information relating to the r factors as n and p increase, we assume two conditions on the specification. First, that the union of the factor supports covers all the observations, i.e. ∪_{k=1}^r S_k = {1, . . ., p}; thus every observation is related to at least one of the factors. Second, that the support for each factor grows with the number of observed variables, in that {s_k} is a non-decreasing sequence in p. Assumptions of this form would allow us, in principle, to assess the consistency of factor estimation as p grows.
This asymptotic analysis in p (and n) contrasts with the traditional setting with a fixed p, for which the factors cannot be consistently recovered and can only be approximated, with an error that depends on the signal-to-noise ratio [Bai and Li, 2016]. Intuitively, this is because if p is fixed then we cannot learn anything more about the factor at a specific time t: we do not get more information on the factors as n increases, we just get more samples (at different time points) relating to the series {F_t}. When we move to the doubly asymptotic, or just p → ∞, setting, then if the number of factors r is fixed, or restricted to grow slowly in n, we can not only recover structures relating to {F_t}, e.g. the specification of A, but also obtain more information relating to the factor at a specific time t [Bai and Li, 2016, Barigozzi and Luciani, 2022]. One way to ensure this growing information about the factors is to assume that they are in some sense pervasive: the more variables p we sample, the more this tells us about the r factors. We note that, for a more formal analysis of the DFM, a usual pervasiveness assumption placed on the loadings is that of Doz et al. [2011], whereby lim_{p→∞} p^{−1} λ_min(Λ_0'Λ_0) > 0, i.e. the average loading onto the least-influential factor is bounded away from zero.
In this paper, we choose to focus on the empirical performance of our estimator, and thus do not formalise the sparsity assumptions further. It is worth noting, however, that our empirical studies meet the pervasiveness assumptions regarding the support of the factor loadings.

Identifiability
In the following section, we will consider a QMLE estimator for the factor model based on assuming Gaussian errors ε_t and u_t; it is thus of interest to consider how the associated likelihood relates to the factors and their loadings. Adopting a Gaussian error structure and taking expectations over the factors, the likelihood for (3) can be written as a function L(Λ) of the loadings. An obvious identifiability issue arises here: if Λ̃ = ΛQ and F̃_t = Q'F_t for any unitary matrix Q' = Q^{−1}, then L(Λ̃) = L(Λ). Now consider the case of Λ̃_0 = Λ_0 Q, i.e. performing a rotation on the true loadings, and denote the set of all such equivalent loadings by Q(Λ_0) := {Λ_0 Q : Q'Q = I}. The invariance of the likelihood to elements of this set mandates that theoretical analysis of the DFM is typically constructed in a specific frame of reference, c.f. Doz et al. [2011, 2012], Bai and Li [2016]. Interestingly, our sparsity assumptions restrict the nature of this equivalence class considerably, in that only loading matrices with sparse structure are permitted. In general, there will still be multiple sparse representations that are allowed, and the issue of scale invariance remains; the latter, however, can be fixed by imposing a further constraint on the norms of the loading matrices. In this work, we demonstrate empirically that it is possible to construct estimators that are consistent up to rotations that maintain an optimal level of sparsity, in the sense that the true loading matrix is given by

Λ_0 = arg min_{Λ ∈ Q(Λ_0)} Σ_{k=1}^r ‖Λ_{•,k}‖_0,   (5)

where ‖Λ_{•,k}‖_0 := |supp(Λ_{•,k})| counts the number of non-zero loadings. More generally (see Remark 1) we could consider selecting on the basis of the ℓq norm, ‖Λ‖_q := (Σ_{i,k} |Λ_{i,k}|^q)^{1/q}; the ℓ1 norm may still provide selection, however, the ℓ2 norm provides no selection as it maintains the rotational invariance of the likelihood. In this paper, we restrict our equivalence set on the basis of the ℓ0 norm, as above; that is, we specify the true loading matrices as those that maintain the highest number of zero values after consideration of all unitary linear transformations.

Figure 1: The impact of rotation on the function L(Λ̃(θ)) + ‖Λ̃(θ)‖_q. In the case of q = 0, 1, the set of feasible Λ_0 from (5) is restricted to the points θ ∈ {0, ±π/2, ±π}, corresponding to either swapping columns or flipping signs.
In practice, these issues mean we are unable to recover the correct sign of the factor loadings, whilst columns in the loading matrix may also be permuted, e.g. factor k can be swapped (under permutation of the columns in the loading matrix) with factor l, for any k, l ∈ {1, . . ., r}. These are the same identifiability issues faced in PCA, whereby the eigenvectors can be exchanged in terms of order and direction.
Remark 1 (Sparsity and Invariance). To illustrate how the sparsity constraint (5) breaks the more general invariance that regular DFMs suffer from, we can consider the quantity ‖Λ*_0 Q_rot(θ)‖_q, where Q_rot(θ) ∈ R^{2×2} is a rotation matrix with argument θ ∈ (−π, π), and Λ*_0 ∈ R^{10×2} has its first column half filled with ones and the rest zero, with the second column set to one minus the first. As we see from Fig. 1, without the additional restriction on our specification of Λ_0 via Eq. (5), we would not be able to determine a preference for any particular element of the set of rotationally equivalent loadings.
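The geometry behind Remark 1 is easy to verify numerically. The sketch below (our own illustration) rotates the Λ*_0 described above through θ and tracks the ℓ0, ℓ1 and ℓ2 norms: the ℓ2 norm is flat in θ, while the ℓ0 and ℓ1 norms are minimised exactly at θ ∈ {0, ±π/2, ±π}:

```python
import numpy as np

# Lambda*_0: first column half ones, second column complementary.
Lam0 = np.zeros((10, 2))
Lam0[:5, 0] = 1.0
Lam0[:, 1] = 1.0 - Lam0[:, 0]

def Qrot(theta):
    """2x2 rotation matrix with argument theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

thetas = np.linspace(-np.pi, np.pi, 721)
l0 = [np.count_nonzero(np.abs(Lam0 @ Qrot(t)) > 1e-10) for t in thetas]
l1 = [np.abs(Lam0 @ Qrot(t)).sum() for t in thetas]
l2 = [np.linalg.norm(Lam0 @ Qrot(t)) for t in thetas]
```

The ℓ0 count drops from 20 to 10 only at the five special angles, reproducing the restriction of the feasible set shown in Fig. 1.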

Estimation
Under the Gaussian error assumption, and collecting all parameters of the DFM (3) in θ = (Λ, A, Σ_ε, Σ_u), we are able to write the joint log-likelihood of the data X_t and the factors F_t as

ℓ(X, F; θ) = −(1/2) Σ_{t=1}^n [ log|Σ_ε| + (X_t − ΛF_t)' Σ_ε^{−1} (X_t − ΛF_t) ]
  − (1/2) Σ_{t=1}^n [ log|Σ_u| + (F_t − AF_{t−1})' Σ_u^{−1} (F_t − AF_{t−1}) ]
  − (1/2) [ log|P_0| + (F_0 − α_0)' P_0^{−1} (F_0 − α_0) ] + const.,   (6)

where we have assumed an initial distribution at t = 0 of the factors as F_0 ∼ N(α_0, P_0).
We propose to induce sparsity in our estimates using the familiar ℓ1 penalty, with motivation similar to that of the LASSO [Tibshirani, 1996]. Alternative penalty functions are available; however, the convexity of the ℓ1 penalty is appealing. Even though the overall objective for the parameters is non-convex, due to the rotational invariance of the log-likelihood, the convexity of the penalty ensures we can quickly and reliably apply the sparsity constraints. We will make use of this structure in the algorithms we construct to find estimates in practice. It is worth noting that our focus here is on the factor loadings, and thus this is the object we regularise; possible extensions could consider additional or alternative constraints, for instance on the latent VAR matrix.
Our proposed estimator attempts to minimise a penalised negative log-likelihood,

θ̂ = arg min_θ { −ℓ(X, F; θ) + α‖Λ‖_1 },   (7)

where α ≥ 0 and ‖Λ‖_1 := Σ_{i,k} |Λ_{i,k}|. A larger α corresponds to a higher degree of shrinkage on the loadings, e.g. for a larger α we would expect more zero values in the loading matrices.

A Regularised Expectation Maximisation Algorithm
The regularised likelihood (7) is incomplete: whilst we have the observations, we do not observe the factors. To solve this problem, we propose to construct an Expectation-Maximisation (EM) framework where we take expectations over the factors (fixing the parameters), then, conditional on the expected factors, maximise the log-likelihood with respect to the parameters θ; we iterate this process until our estimates converge.
The EM algorithm involves calculating and maximising the expected log-likelihood of the DFM conditional on the available information Ω_n. Given the log-likelihood in (6), the conditional expected log-likelihood is

E[ ℓ(X, F; θ) | Ω_n ].   (8)

Ultimately, we wish to impose our regularisation on the expected log-likelihood, our feasible estimator being given by

θ̂ = arg min_θ { −E[ ℓ(X, F; θ) | Ω_n ] + α‖Λ‖_1 }.   (9)

Maximisation Step
We use the following notation for the conditional means and covariances of the state:

a_{t|s} := E[F_t | Ω_s],   P_{t|s} := Var(F_t | Ω_s),   P_{t,t−1|s} := Cov(F_t, F_{t−1} | Ω_s),

conditional on all information observed up to a time s, denoted by Ω_s.
As shown in Bańbura and Modugno [2014], the maximisation of (8) results in the following expressions for the parameter estimates:

α̂_0 = a_{0|n};   P̂_0 = P_{0|n},   (10)

and, letting S_{t|n} = a_{t|n} a_{t|n}' + P_{t|n} and S_{t,t−1|n} = a_{t|n} a_{t−1|n}' + P_{t,t−1|n}, we have

Â = ( Σ_{t=1}^n S_{t,t−1|n} ) ( Σ_{t=1}^n S_{t−1|n} )^{−1},   (11)
Σ̂_u = n^{−1} ( Σ_{t=1}^n S_{t|n} − Â Σ_{t=1}^n S_{t,t−1|n}' ).   (12)

To minimise (9) for the parameters Λ and Σ_ε, we should also consider that there might be missing data in X_t. Let us define a selection matrix W_t to be the diagonal matrix with (W_t)_{ii} = 1 if X_{t,i} is observed and 0 otherwise, and note that X_t = W_t X_t + (I − W_t)X_t. The update for the idiosyncratic error covariance is then given by

Σ̂_ε = n^{−1} diag( Σ_{t=1}^n [ W_t (X_t − Λ̂ a_{t|n})(X_t − Λ̂ a_{t|n})' W_t + W_t Λ̂ P_{t|n} Λ̂' W_t + (I − W_t) Σ*_ε (I − W_t) ] ),   (13)

where Σ*_ε is obtained from the previous EM iteration. As noted in Algorithm 1, in practice we update Σ_ε after estimating Λ, as the former is based on the difference between the observations and the estimated common component. The following section details precisely how we obtain sparse estimates for the factor loadings in practice; the estimates can then be used in (13), completing the M-step of the algorithm.
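The state-equation part of the M-step can be sketched directly from the smoothed moments. The helper below is a simplified illustration with our own variable names; as a sanity check, when the smoother covariances are zero the updates reduce to OLS estimation of the VAR(1):

```python
import numpy as np

def m_step_state(a_smooth, P_smooth, P_lag_smooth):
    """VAR(1) M-step updates from Kalman-smoother output (a sketch).

    a_smooth     : (n+1, r) smoothed state means a_{t|n}, t = 0..n
    P_smooth     : (n+1, r, r) smoothed covariances P_{t|n}
    P_lag_smooth : (n, r, r) lag-one covariances P_{t,t-1|n}, t = 1..n
    """
    n = P_lag_smooth.shape[0]
    # S_{t|n} = a_{t|n} a_{t|n}' + P_{t|n}
    S = np.einsum('ti,tj->tij', a_smooth, a_smooth) + P_smooth
    # S_{t,t-1|n} = a_{t|n} a_{t-1|n}' + P_{t,t-1|n}
    S_lag = np.einsum('ti,tj->tij', a_smooth[1:], a_smooth[:-1]) + P_lag_smooth
    A_hat = S_lag.sum(0) @ np.linalg.inv(S[:-1].sum(0))
    Sigma_u_hat = (S[1:].sum(0) - A_hat @ S_lag.sum(0).T) / n
    return A_hat, Sigma_u_hat

# Sanity check: with perfectly observed factors (zero smoother covariances)
# the updates are exactly OLS for the VAR(1).
rng = np.random.default_rng(7)
n, r = 5000, 2
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.4]])
a = np.zeros((n + 1, r))
f = np.zeros(r)
for t in range(1, n + 1):
    f = A_true @ f + rng.normal(size=r)
    a[t] = f
A_hat, Sigma_u_hat = m_step_state(a, np.zeros((n + 1, r, r)), np.zeros((n, r, r)))
```

In the full EM algorithm the smoother covariances are non-zero, which is exactly the feedback from factor uncertainty into the parameter updates.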

Incorporating Sparsity
In this work, we propose to update Λ by constructing an Alternating Direction Method of Multipliers (ADMM) algorithm [Boyd et al., 2011] to solve (9) with the parameters (Â, Σ̂_u, α̂_0, P̂_0) fixed. The algorithm proceeds by sequentially minimising the augmented Lagrangian

L_ν(Λ, Z, U) = −E[ ℓ(X, F; θ) | Ω_n ] + α‖Z‖_1 + (ν/2)‖Λ − Z + U‖_F²,   (14)

where Z ∈ R^{p×r} is an auxiliary variable, U ∈ R^{p×r} are the (scaled) Lagrange multipliers and ν is the scaling term. Under the equality condition Λ = Z relating the auxiliary (Z) to the primal (Λ) variables, this is equivalent to minimising (9), i.e. to solving arg min_Λ { −E[ ℓ(X, F; θ) | Ω_n ] + α‖Λ‖_1 }; as (9) is convex in the argument Λ with all other parameters fixed, this argument holds for any ν > 0 [Boyd et al., 2011, Lin et al., 2014].
The ADMM iterates are

Λ^{(k+1)} = arg min_Λ L_ν(Λ, Z^{(k)}, U^{(k)}),
Z^{(k+1)} = S_{α/ν}(Λ^{(k+1)} + U^{(k)}),
U^{(k+1)} = U^{(k)} + Λ^{(k+1)} − Z^{(k+1)},

for k = 0, 1, 2, . . ., until convergence, where S_τ(·) denotes element-wise soft-thresholding at level τ. The first (primal) update is simply a least-squares type problem, whereby on vectorising Λ one finds

vec(Λ^{(k+1)}) = (D + νI_{pr})^{−1} vec( Σ_{t=1}^n B_t X_t a_{t|n}' + ν(Z^{(k)} − U^{(k)}) ),   (15)

where B_t := W_t Σ_ε^{−1} W_t and D := Σ_{t=1}^n S_{t|n} ⊗ B_t.

Remark 2 (Exploiting Dimensionality Reduction). For the Λ^{(k+1)} update, the dimensionality of the problem is quite large, leading to a naïve per-iteration cost of order O(r³p³). A more efficient method for this step can be sought by looking at the specific structure of the matrix to be inverted. Since Σ_ε is diagonal in an exact DFM, B_t is also diagonal, and thus D is made up of r² blocks such that each (i, j)th block is a diagonal matrix of length p, for i, j = 1, . . ., r. To speed up the computation, we note that νI_{pr} = νI_r ⊗ I_p and use the properties of commutation matrices [Magnus and Neudecker, 2019, p. 54], denoted by K_rp, to write

D + νI_{pr} = K_rp ( Σ_{t=1}^n B_t ⊗ S_{t|n} + νI_p ⊗ I_r ) K_pr.   (16)

The matrix needing to be inverted on the right-hand side of (16) is now a block-diagonal matrix.
We can extract each of the p blocks separately and invert them one by one. The final result from (16) has the expected block structure, with a diagonal matrix in each block, but we can stack the blocks into a cube to save storage. Overall, the operations can be completed with cost O(r³p). Given that this needs to be performed at every iteration of the EM algorithm, our commutation trick results in significant computational gains.
Whilst other optimisation routines could be used to estimate the sparse loadings, the ADMM approach is appealing as it allows us to split (9) into sub-problems that can easily be solved. If one wished to incorporate more specific or structured prior knowledge, this approach can easily be altered to impose such assumptions; for instance, future work could consider group-structured regularisation, allowing more informative prior knowledge on the factor loadings to be incorporated. Hard constraints, e.g. where we require a loading to be exactly zero, can also be incorporated at the Z-update stage by explicitly setting some entries to zero.
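A stripped-down version of the ADMM loop illustrates the mechanics. Here the primal step is a plain least-squares problem in observed factors rather than the expected log-likelihood with smoothed moments used in the paper, so this is a sketch of the technique, not the actual M-step; the `zero_mask` argument (our own addition) shows how hard zero constraints slot into the Z update:

```python
import numpy as np

def soft_threshold(M, tau):
    """Element-wise soft-thresholding operator S_tau."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def admm_loadings(F, X, alpha, nu=1.0, n_iter=300, zero_mask=None):
    """ADMM sketch for min_Lam 0.5*||X - F Lam'||_F^2 + alpha*||Lam||_1."""
    n, r = F.shape
    p = X.shape[1]
    Z = np.zeros((p, r))
    U = np.zeros((p, r))
    G = F.T @ F + nu * np.eye(r)                               # fixed ridge-type system
    for _ in range(n_iter):
        Lam = np.linalg.solve(G, F.T @ X + nu * (Z - U).T).T   # primal (least-squares) step
        Z = soft_threshold(Lam + U, alpha / nu)                # sparsity (proximal) step
        if zero_mask is not None:
            Z[zero_mask] = 0.0                                 # optional hard zeros
        U = U + Lam - Z                                        # scaled dual update
    return Z

# Demonstration on a toy block-sparse problem (orthogonalised factors for clarity).
rng = np.random.default_rng(3)
n, p, r = 200, 12, 2
Lam_true = np.zeros((p, r))
Lam_true[:6, 0] = 1.0
Lam_true[6:, 1] = 1.0
Q, _ = np.linalg.qr(rng.normal(size=(n, r)))
F = np.sqrt(n) * Q                                             # so F'F = n * I exactly
X = F @ Lam_true.T + 0.1 * rng.normal(size=(n, p))
Lam_sparse = admm_loadings(F, X, alpha=60.0)
Lam_ols = admm_loadings(F, X, alpha=0.0)
```

With α = 0 the iterates converge to the ordinary least-squares loadings, while a suitably large α recovers the true zero pattern exactly.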

Expectation Step
So far, we have discussed how to update the parameters conditional on the smoothed quantities a_{t|n}, P_{t|n} and P_{t,t−1|n}. In our application, under the Gaussian error assumption, these expectations can easily be calculated via the Kalman smoother. For completeness, we detail this step in the context of the DFM, as well as discussing some methods to speed up the computation which make use of the exact DFM structure.
The classical multivariate Kalman smoother equations can be slow when p is large. However, since we assume Σ_ε is diagonal, we can equivalently filter the observations X_t one element at a time, as opposed to updating all p of them together as in the classic approach [Durbin and Koopman, 2012]. As matrix inversions become scalar divisions, huge speed-ups are possible. This approach, sometimes referred to as the univariate treatment, sequentially updates across both time and variable index when filtering/smoothing. Let us define the individual elements X_t = (X_{t,1}, . . ., X_{t,p})', Λ = (Λ_1, . . ., Λ_p)', Σ_ε = diag(σ²_1, . . ., σ²_p). Following Koopman and Durbin [2000], we expand the conditional expectations according to the element-wise measurement equations

X_{t,i} = Λ_i F_t + ε_{t,i},   ε_{t,i} ∼ N(0, σ²_i),

for i = 1, . . ., p and t = 1, . . ., n. The univariate treatment now filters this series over the indices i and t. This is equivalent in form to the multivariate updates of the classic approach [Shumway and Stoffer, 1982], except that the t subscript becomes a (t, i) subscript, and the t|t subscript becomes (t, i + 1).
The measurement updates are

v_{t,i} = X_{t,i} − Λ_i a_{t,i},   C_{t,i} = Λ_i P_{t,i} Λ_i' + σ²_i,   K_{t,i} = P_{t,i} Λ_i' / C_{t,i},
a_{t,i+1} = a_{t,i} + K_{t,i} v_{t,i},   P_{t,i+1} = P_{t,i} − K_{t,i} K_{t,i}' C_{t,i},

for i = 1, . . ., p and t = 1, . . ., n. If X_{t,i} is missing or C_{t,i} is zero, we omit the term containing K_{t,i}. The transition to t + 1 is given by the following prediction equations:

a_{t+1,1} = A a_{t,p+1},   P_{t+1,1} = A P_{t,p+1} A' + Σ_u.

These prediction equations are exactly the same as the multivariate ones (i.e., predictions are not treated sequentially but all at once). From our perspective, this univariate treatment may be more appropriately referred to as performing univariate updates plus multivariate predictions.
Unlike Shumway and Stoffer [1982], the measurement update comes before the transition; however, we can revert to doing the transition first if our initial state means and covariances start from t = 0 instead of t = 1. Likewise, univariate smoothing is defined by

b_{t,i−1} = Λ_i' v_{t,i} / C_{t,i} + L_{t,i}' b_{t,i},   J_{t,i−1} = Λ_i' Λ_i / C_{t,i} + L_{t,i}' J_{t,i} L_{t,i},   L_{t,i} := I − K_{t,i} Λ_i,

with the transition b_{t−1,p} = A' b_{t,0} and J_{t−1,p} = A' J_{t,0} A, for i = p, . . ., 1 and t = n, . . ., 1, with b_{n,p} and J_{n,p} initialised to 0. Again, if X_{t,i} is missing or C_{t,i} is zero, we drop the terms containing K_{t,i}. Finally, the equations for a_{t|n} and P_{t|n} are: a_{t|n} = a_{t,1} + P_{t,1} b_{t,0}, P_{t|n} = P_{t,1} − P_{t,1} J_{t,0} P_{t,1}. These results are equivalent to a_{t|n} and P_{t|n} from the classic multivariate approach, yet obtained with a substantial improvement in computational efficiency. In order to calculate the cross-covariance matrix P_{t,t−1|n}, we use the theorem of De Jong and Mackinnon [1988].
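The equivalence between the univariate treatment and the classic multivariate measurement update is easy to check numerically. The sketch below (our own notation, a single time step) performs the sequential scalar updates and compares them to the joint update:

```python
import numpy as np

def univariate_update(a, P, x, Lam, sigma2):
    """Sequential (univariate-treatment) measurement update for one time step.

    a, P   : prior state mean (r,) and covariance (r, r)
    x      : observation vector (p,); np.nan marks a missing entry
    Lam    : loadings (p, r); sigma2 : idiosyncratic variances (p,)
    """
    a = a.copy()
    P = P.copy()
    for i in range(len(x)):
        if np.isnan(x[i]):
            continue                       # missing observation: skip the update term
        lam = Lam[i]
        C = lam @ P @ lam + sigma2[i]      # scalar innovation variance
        if C == 0:
            continue
        K = P @ lam / C                    # gain: a scalar division, no matrix inverse
        v = x[i] - lam @ a                 # innovation
        a = a + K * v
        P = P - np.outer(K, lam @ P)
    return a, P

def multivariate_update(a, P, x, Lam, sigma2):
    """Classic joint measurement update, requiring a p x p inverse."""
    S = Lam @ P @ Lam.T + np.diag(sigma2)
    K = P @ Lam.T @ np.linalg.inv(S)
    return a + K @ (x - Lam @ a), P - K @ Lam @ P

rng = np.random.default_rng(5)
r, p = 3, 6
a0 = rng.normal(size=r)
B = rng.normal(size=(r, r))
P0 = B @ B.T + 0.1 * np.eye(r)             # a valid (positive definite) prior covariance
Lam = rng.normal(size=(p, r))
sigma2 = rng.uniform(0.5, 1.5, size=p)
x = rng.normal(size=p)
a_uni, P_uni = univariate_update(a0, P0, x, Lam, sigma2)
a_mv, P_mv = multivariate_update(a0, P0, x, Lam, sigma2)
```

Because Σ_ε is diagonal, processing the p measurements one at a time yields exactly the same posterior as the joint update, at a fraction of the cost.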

Parameter Tuning
There are two key parameters that need to be set for the DFM. The first is the number of factors, and the second is an appropriate level of sparsity. One may argue that these quantities should be selected jointly; however, in the interests of computational feasibility, we propose to use heuristics, first selecting the number of factors and then deciding on the level of sparsity. This mirrors how practitioners typically apply the DFM, where there is often a prior for the number of relevant factors (or, more usually, an upper bound). Both the number of factors and the structure of the factor loadings impact the practical interpretation of the estimated factors.

Choosing the Number of Factors
To select the number of factors to use in the model we opt to take the information-criteria approach of Bai and Ng [2002]. There are several criteria discussed in the literature; for example, the paper of Bai and Ng [2002] suggests three forms. For this paper, we use a criterion of the form

IC(r) = log( V(r) ) + r ((n + p)/(np)) log( min{n, p} ),

where V(r) = (np)^{−1} Σ_{i=1}^p Σ_{t=1}^n ε̄²_{i,t} and ε̄_{i,t} = X_{t,i} − Λ̄_{i,•} F̄_t is found using PCA applied to the standardised data. The preliminary factors F̄ correspond to the principal components, and the estimated loadings Λ̄ to the eigenvectors. Should the data contain missing values, we first interpolate them using the median of the series and then smooth these with a simple moving window.
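A sketch of this selection procedure, using the IC_p2-style penalty shown above (the precise Bai-Ng variant is an assumption here), might look as follows:

```python
import numpy as np

def bai_ng_ic(X, r_max):
    """Select the number of factors by a Bai & Ng (2002) style criterion.

    X : (n, p) standardised data. Returns the minimising r in 1..r_max.
    """
    n, p = X.shape
    vals, vecs = np.linalg.eigh(X.T @ X / n)
    order = np.argsort(vals)[::-1]           # descending eigenvalues
    ics = []
    for r in range(1, r_max + 1):
        Lam = vecs[:, order[:r]]             # preliminary loadings (eigenvectors)
        F = X @ Lam                          # preliminary factors (principal components)
        resid = X - F @ Lam.T
        V = (resid ** 2).sum() / (n * p)     # average squared idiosyncratic residual
        penalty = r * (n + p) / (n * p) * np.log(min(n, p))
        ics.append(np.log(V) + penalty)
    return int(np.argmin(ics)) + 1

# Demonstration: a strong 3-factor structure should be picked out.
rng = np.random.default_rng(11)
n, p, r_true = 200, 50, 3
F = rng.normal(size=(n, r_true))
Lam = rng.normal(size=(p, r_true))
X = F @ Lam.T + 0.5 * rng.normal(size=(n, p))
X = (X - X.mean(0)) / X.std(0)
r_hat = bai_ng_ic(X, 8)
```

The penalty grows with r while log V(r) falls, and the minimiser balances the two.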

Remark 3
We note that ideally one may wish to apply the EM procedure to obtain more refined estimates of both the factors and loadings; however, in the interests of computational cost, and in line with current practice, we propose to use the quick (preliminary) estimates above, denoted with Λ̄ rather than Λ̂.

Tuning the Regulariser
Once the number of factors r has been decided, we tune α by performing a simple search over a logarithmically spaced grid, minimising a Bayesian Information Criterion of the form

BIC(α) = −2 ℓ(X; θ̂_α) + log(n) Σ_{k=1}^r ŝ_k,   (19)

where ŝ_k is the number of non-zero entries in the kth column of the estimated loading matrix. In this case, we run the EM algorithm until convergence (usually after a dozen or so iterations) and then evaluate the BIC using the resulting F̂ and Λ̂; this procedure is repeated for each α in the grid. An example of the resulting curve can be seen in the empirical application of Section 7. To limit searching over non-optimal values, an upper limit for α is set whereby, if the loadings for a particular factor are all set to zero, we terminate the search.
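The search logic can be sketched as below. The `fit_loadings` stand-in is purely illustrative (a soft-thresholded least-squares fit rather than the paper's full EM run per α), but the BIC evaluation and the early-termination rule mirror the description above:

```python
import numpy as np

def fit_loadings(X, F, alpha):
    """Stand-in for the EM fit: soft-thresholded least-squares loadings.
    (In the paper each alpha is instead fitted by the full EM algorithm.)"""
    Lam = np.linalg.solve(F.T @ F, F.T @ X).T
    return np.sign(Lam) * np.maximum(np.abs(Lam) - alpha, 0.0)

def bic_alpha_search(X, F, alphas, fit):
    """Grid search over alpha, minimising a BIC that trades off fit against
    the number of non-zero loadings; stops once a factor is fully zeroed."""
    n = X.shape[0]
    best_bic, best_alpha = np.inf, None
    for alpha in alphas:
        Lam = fit(X, F, alpha)
        if np.any((np.abs(Lam) < 1e-12).all(axis=0)):
            break                                      # a whole column zeroed: terminate
        rss = ((X - F @ Lam.T) ** 2).sum()
        bic = n * np.log(rss / n) + np.log(n) * np.count_nonzero(Lam)
        if bic < best_bic:
            best_bic, best_alpha = bic, alpha
    return best_alpha

rng = np.random.default_rng(2)
n, p, r = 300, 20, 2
Lam_true = np.zeros((p, r))
Lam_true[:10, 0] = 1.0
Lam_true[10:, 1] = 1.0
F = rng.normal(size=(n, r))
X = F @ Lam_true.T + 0.2 * rng.normal(size=(n, p))
alpha_star = bic_alpha_search(X, F, np.logspace(-3, 1, 25), fit_loadings)
```

Warm starts across the grid, as used in the actual implementation, would slot naturally into the loop.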
Remark 4. Tuning both the number of factors and the regulariser for these models is a topic of open research and discussion. Indeed, whilst the criteria of Bai and Ng [2002] are widely used, there is still lively debate about what constitutes an appropriate number of factors, and this is usually determined by a mix of domain (prior) knowledge and heuristics such as those presented above. The heuristics provided here seem reasonable in the applications and experiments we consider; however, we do not claim they are optimal for all scenarios.

Implementation
We have implemented the estimation routine as part of the R package sparseDFM, available via CRAN. The EM routine and ADMM updates are implemented in C++ using the Armadillo library. Initialisation of the ADMM iterates utilises a warm-start procedure whereby the solution at the previous iteration of the EM algorithm initialises the next solution. Furthermore, warm starts are utilised when searching over the α tuning grid. As noted in other applications [Hu et al., 2016], warm-starting the ADMM procedure can lead to considerable speed-ups. With regards to the augmentation parameter ν in the ADMM algorithm, we simply keep this set to 1 for the experiments run here; however, it is possible that tuning this parameter could lead to further speed-ups. On the first iteration of the algorithm, the EM procedure is initialised by a simple application of PCA to the standardised data, analogously to how the preliminary factors and loadings were found in Section 4.2. A summary of the EM algorithm as a whole is given in Algorithm 1.

Synthetic Experiments
We provide a Monte Carlo numerical study to show the performance of our QMLE estimator in terms of recovery of sparse loadings and the ability of the sparse DFM to forecast missing data at the end of the sample. In particular, we simulate from a ground-truth model of the form (3), i.e. X_t = ΛF_t + ε_t with F_t = AF_{t−1} + u_t, for t = 1, . . ., n, with X_t having p variables. We set the number of factors to r = 2. The loadings matrix Λ is a block-diagonal matrix that is 1/2-sparse, with p/2 ones in each block. We set up the VAR(1) process of the factors such that we can adjust the cross-correlation parameter ρ between the factors while the factors always have variance one. This allows us to understand how important the cross-correlation structure at non-zero lags is when assessing model performance. We vary ρ over {0, 0.6, 0.9}, going from no cross-correlation to strong cross-correlation between the factors. We set the covariance of the idiosyncratic errors to I_p in order to have a signal-to-noise ratio of one between the common component ΛF_t and the errors ε_t.
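A hedged sketch of such a data-generating process: the paper's exact A and Σ_u are not reproduced here, but taking A = aI and Σ_u = (1 − a²)R(ρ) (our own choice) yields a stationary factor covariance of R(ρ), i.e. unit variances and cross-correlation ρ, matching the construction described:

```python
import numpy as np

def simulate_dgp(n, p, rho, a=0.7, seed=0):
    """Simulate a 2-factor DFM with block-diagonal, 1/2-sparse loadings.

    With A = a*I, the stationary covariance S solves S = A S A' + Sigma_u,
    so Sigma_u = (1 - a**2) * R(rho) gives S = R(rho): variance-one factors
    with cross-correlation rho. (An illustrative parameterisation.)
    """
    rng = np.random.default_rng(seed)
    r = 2
    R = np.array([[1.0, rho], [rho, 1.0]])
    A = a * np.eye(r)
    Sigma_u = (1 - a ** 2) * R
    Lam = np.zeros((p, r))                       # block-diagonal loadings
    Lam[: p // 2, 0] = 1.0
    Lam[p // 2 :, 1] = 1.0
    F = np.zeros((n, r))
    X = np.zeros((n, p))
    f = rng.multivariate_normal(np.zeros(r), R)  # start from the stationary distribution
    for t in range(n):
        f = A @ f + rng.multivariate_normal(np.zeros(r), Sigma_u)
        F[t] = f
        X[t] = Lam @ f + rng.normal(size=p)      # Sigma_eps = I_p, so SNR = 1
    return X, F, Lam

X, F, Lam = simulate_dgp(n=20000, p=10, rho=0.6)
```

Long simulated paths confirm that the factors have (empirical) unit variance and the intended contemporaneous correlation.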

Recovery of Sparse Loadings
We apply our sparse DFM (SDFM) to data simulated from the data-generating process above to assess how well we can recover the true loadings matrix Λ. We compare our method to sparse principal component analysis (SPCA) applied to X_t, to identify the settings in which each method performs better. We tune the ℓ1-norm parameter in both SDFM and SPCA using the BIC function (19), searching over a large grid of log-spaced values from 10^{−3} to 10^{2}. We also make comparisons to the regular DFM approach of Bańbura and Modugno [2014] to test the importance of using regularisation when the true loading structure is sparse.
Estimation accuracy is assessed with the mean absolute error (MAE) between the estimated and true loadings, (rp)^{-1} ‖Λ̂ − Λ‖_1. We also provide results for the F1 score for the sparsity-inducing methods SDFM and SPCA, to measure how well the methods capture the true sparse structure. Due to the invariance issues discussed, the estimated loadings may not be on the same scale as the true loadings; we thus first re-scale the estimated loadings such that their norm is equal to that of the simulated loadings, i.e. ‖Λ̂‖_2 = ‖Λ‖_2. The estimated loadings from each model are identified only up to column permutations, and we therefore permute the columns of Λ̂ to match the true order of Λ. We do this by measuring the 2-norm distance between the columns of Λ̂ and Λ and iteratively swapping to match the smallest distances.
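A minimal sketch of the alignment and scoring steps, implementing the re-scaling and greedy distance-matching described above (any residual sign indeterminacy in the loadings is ignored in this sketch):

```python
import numpy as np

def align_loadings(L_hat, L_true):
    """Re-scale L_hat so its norm matches that of L_true, then greedily
    permute its columns to minimise the 2-norm distance to the true columns."""
    L_hat = L_hat * (np.linalg.norm(L_true) / np.linalg.norm(L_hat))
    r = L_true.shape[1]
    perm, used = [], set()
    for j in range(r):
        d = [np.inf if k in used
             else np.linalg.norm(L_hat[:, k] - L_true[:, j])
             for k in range(r)]
        k = int(np.argmin(d))
        perm.append(k)
        used.add(k)
    return L_hat[:, perm]

def loading_mae(L_hat, L_true):
    """MAE between loadings: (rp)^{-1} ||L_hat - L_true||_1."""
    p, r = L_true.shape
    return np.abs(L_hat - L_true).sum() / (r * p)

def f1_sparsity(L_hat, L_true, tol=1e-8):
    """F1 score for the sparsity pattern, treating non-zero true entries
    as positives."""
    est, tru = np.abs(L_hat) > tol, np.abs(L_true) > tol
    tp = np.sum(est & tru)
    fp = np.sum(est & ~tru)
    fn = np.sum(~est & tru)
    return 2 * tp / (2 * tp + fp + fn)
```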
Figure 2 displays the results for the loadings recovery, where we have fixed the number of observations at n = 100 and vary the number of variables over p ∈ {18, 60, 120, 180} along the x-axis, and the cross-correlation parameter in the VAR(1) process over ρ ∈ {0, 0.6, 0.9}, going from the left to the middle to the right plot respectively. The top panel shows the median MAE score (in logarithms) over 100 experiments, while the bottom panel shows the F1 scores. We provide confidence bands for both, representing the 25th and 75th percentiles. It is clear from the plots that the sparsity-inducing methods SDFM and SPCA dominate the regular DFM when the true loadings structure is in fact sparse. It is also clear that SPCA performs poorly, compared with SDFM, when the cross-section of the data increases for a fixed n. This is even more noticeable from the F1 score when ρ increases, highlighting the importance of the SDFM's ability to capture correlations between factors at non-zero lags. Unlike SPCA, the EM algorithm of the SDFM allows feedback from the estimated factors when updating model parameters, allowing it to capture these factor dependencies. We see improved MAE scores for the SDFM as the cross-section increases. This follows the intuition of the EM algorithm framework, as we learn more about the factors as the dimension p → ∞. We should remark that for most scenarios the F1 score for the SDFM is almost one; however, when p = 18 and ρ is high, the score does drop. In this setting a low value of α minimises the BIC, meaning almost no sparsity is applied (a result very similar to a regular DFM fit). Here, the two factors are highly correlated and there is not enough cross-section to determine the factor structure. In practice the cross-section is likely to be large, and hence this result is not too concerning.

Forecasting Performance
To evaluate our ability to forecast missing data at the end of the sample, we simulate data from the data generating process above with n = 200, p = 64, consider ρ ∈ {0, 0.6, 0.9}, and assume different patterns of missing data at the end of the sample. We consider a 1-step-ahead forecast case where we set 25%, 50%, 75% and then 100% of variables to be missing in the final row of X.
When allocating variables to be missing, we split the data into the two loading blocks and set the first 25%, 50%, 75% or 100% of each loading block to be missing. For example, variables 1 to 8 and 33 to 40 are missing in the 25%-missing scenario. We are interested in forecasting the missing data in the final row of X, and we calculate the average MAE over 100 experiments.
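The missingness pattern for the final row can be sketched as:

```python
import numpy as np

def final_row_missing_mask(p, frac):
    """Boolean mask marking the first `frac` of each of the two loading
    blocks as missing in the final observation row (p assumed even)."""
    half = p // 2
    k = int(round(frac * half))
    mask = np.zeros(p, dtype=bool)
    mask[:k] = True              # first frac of the first block
    mask[half:half + k] = True   # first frac of the second block
    return mask
```

With p = 64 and frac = 0.25 this marks variables 1 to 8 and 33 to 40 as missing, matching the example above.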
We make comparisons with a sparse vector autoregression (SVAR) model, as this is a very popular alternative forecasting strategy for high-dimensional time series that is based on sparsity assumptions. As our factors are generated using a VAR(1) process with a sparse autoregression matrix, we are interested to see whether the SVAR will be able to capture the cross-factor autocorrelation when producing forecasts. We also apply a standard AR(1) process to each of the variables needing to be forecast, as a benchmark comparison.
Figure 3 displays the results of the simulations, plotting the MAE for each of the three methods and each simulation setting. In all settings we find the SDFM outperforms both SVAR and AR(1). When ρ is set to 0.9, we find the SVAR does improve its forecasting performance relative to ρ = 0, as the VAR(1) process driving the factors becomes more prominent. The results confirm the SDFM's ability to make use of variables that are present at the end of the sample when forecasting the missing variables. We see this in the rise in MAE when 100% of the variables are missing at the end of the sample and the model can no longer utilise available data in this final row. The MAE remains fairly flat as the amount of missingness rises from 25% to 75%, showing the SDFM's ability to forecast accurately even when only a small amount of data is available at the end of the sample.

Computational Efficiency
To assess computational scalability, we simulate from a sparse DFM where Λ = I_r ⊗ 1_{p/r} and Σ_ε = I_p, and the factors follow a VAR(1) with A = 0.8 × I_r and Σ_u = (1 − 0.8^2) × I_r. We record the number of EM iterations and the time they take for each ℓ1-norm parameter α up to the optimal ℓ1-norm parameter α̂, and then take the average time of a single EM iteration. We repeat the experiment ten times for each experimental configuration.
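This scalability DGP can be sketched directly from the stated parameters; the idiosyncratic covariance is assumed to be I_p, as in the earlier experiments. With A = 0.8 I_r and Σ_u = (1 − 0.8²) I_r, the factors again have stationary unit variance.

```python
import numpy as np

def scalability_dgp(n, p, r, seed=0):
    """Sketch of the timing-experiment DGP: Lambda = I_r kron 1_{p/r},
    A = 0.8*I_r, Sigma_u = (1 - 0.8**2)*I_r, idiosyncratic covariance I_p."""
    assert p % r == 0
    rng = np.random.default_rng(seed)
    Lambda = np.kron(np.eye(r), np.ones((p // r, 1)))
    a, s_u = 0.8, np.sqrt(1 - 0.8 ** 2)
    F = np.zeros((n, r))
    F[0] = rng.standard_normal(r)  # stationary start (unit variance)
    for t in range(1, n):
        F[t] = a * F[t - 1] + s_u * rng.standard_normal(r)
    X = F @ Lambda.T + rng.standard_normal((n, p))
    return X, Lambda
```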
The results are presented in Figure 4, which demonstrates scalability as a function of n and p, under different assumptions on the number of factors r = 2, 4, 6, 8. As expected, the cost is approximately linear in n and p, with increasing cost as a function of the number of factors r. The results demonstrate the utility of using the univariate smoothing approach, as well as the matrix decomposition used when calculating the required inversions.

The Dynamic Factors of Energy Consumption
This section details the application of the sparse DFM to a real-world problem, namely the forecasting and interpretation of energy consumption. Beyond forecasting consumption in the near term, our aim here is also to characterise usage in terms of what may be considered typical consumption profiles. These are of specific interest to energy managers and practitioners, as understanding how energy is consumed in distinct buildings can help target interventions and strategies to reduce waste. We also highlight how the sparse DFM, and in particular our EM algorithm, can be used to impute missing data and provide further insight.

Data and Preprocessing
In this application, the data consist of one month of electricity consumption readings measured across p = 42 different buildings on our university's campus. These data are constructed from a larger dataset which monitors energy at different points throughout a building; in our case, we choose to aggregate the consumption so that one data-stream represents the consumption of a single building. The data are gathered at 10-minute intervals (measuring consumption in kWh over each interval), resulting in n = 3,456 data points spanning 24 days' worth of consumption in November 2021; we further hold out one day (n_test = 144 data points) to evaluate the out-of-sample performance of the DFM model. An example of time series from the dataset is presented in Figure 5. There are many alternative ways one may wish to model these data; however, one of the key tasks for energy managers is to understand how consumption in this diverse environment is typically structured. This is our primary objective in this study, i.e. we wish to extract typical patterns of consumption that can represent how energy is used across the campus. To this end, we decide not to remove the relatively clear seasonal (daily) patterns in consumption prior to fitting the factor model, the hope being that these patterns will be pervasive in the derived factors.
Whilst we do have meta-data associated with each of these buildings, for sensitivity purposes we choose to omit this in our discussion here; the buildings are presented as being approximately categorised under the following headings: Accommodation: Student residences, and buildings primarily concerned with accommodation/student living.
Admin: Office buildings, e.g. HR, administration, and central university activities.
Misc: Other student services, e.g. cinema, shopping, sports facilities.
Mixed: Buildings which mix teaching and accommodation. For instance, seminar rooms on one floor with accommodation on another.
Services: Management buildings, porter/security offices.
Teaching: Teaching spaces like lecture theatres, seminar rooms.

Factor Estimates and Interpretation
To estimate factors we first choose the number of factors according to criterion (18), which leads to 4 factors being specified, as seen in Fig. 6. Next, we apply the sparse DFM model via the EM procedure in Algorithm 1. We run the algorithm to scan across a range of α parameters and, in this case, the BIC criterion suggests imposing moderate sparsity, corresponding to α ≈ 0.01. One may note in Figure 6 that there is a second dip in the BIC criterion around α ≈ 0.03, after which the BIC rapidly rises until the cutoff constraint, beyond which all Λ_ij are set to zero. In this case, the sparsity patterns for the two above values of α appear very similar, and the loading of the variables on the factors appears largely stable as a function of α. To give some intuition, the loadings Λ̂ for α = 0.01 and α = 0 are visualised in Fig. 7, and a visualisation of the corresponding factors â_{t|n} is given in Fig. 8. The effect of the sparsity penalty is most obvious for the third and fourth factors in this case. A visualisation of the factor behaviour on a typical weekday is given in Figure 9, where there is a clear ordering in the uncertainty surrounding the factor behaviour, e.g. Factor 1 has small confidence intervals, whereas Factor 4 has more uncertain behaviour, especially during the working day. Interestingly, the sparse DFM only really differs from the regular DFM in these third and fourth factors, where the latter exhibits slightly greater variation in behaviour. The sparse DFM is able to isolate these further factors to specific buildings. For example, the building identified by the circle in Fig. 7 is known to be active primarily throughout the night, and we see its factor loadings reflect this, e.g.
the regular working-day cycles for Factor 1 are not present, whereas the evening and early-morning features (Factors 3 and 4) are represented. For the teaching buildings, we see that the loadings on Factors 2 and 3 are negative, indicating a sharp drop-off in energy consumption in the evening/overnight; again, this aligns with our expectations based on the usage of the facilities.

Forecasting Performance
The primary motivation for applying the sparse DFM in the context of this application is to aid in interpreting consumption across the campus. However, it is still of interest to examine how forecasts from the DFM compare with competitor methods. For consistency, we here provide a comparison to the AR(1) and sparse VAR methods detailed earlier. These models all harness a simple autoregressive structure to model temporal dependence, specifically regressing only onto the last set of observations (or factors), i.e. they are Markov of order 1. Our experiments assess the performance of the models in forecasting out-of-sample data, either h = 6 steps ahead (1 hour) or h = 36 steps ahead (6 hours). The forecasts are updated in an expanding-window manner, whereby the model parameters are estimated on the 24 days of data discussed previously; the forecasts are then generated sequentially, conditioning on the observations up to time n + t, for t = 1, . . ., n_test − h (with n_test = 144). An example of the forecasts generated (compared to the realised consumption) is given in Figure 10. A striking feature of the DFM-based model is its ability to (approximately) time the increases/decreases in consumption associated with the daily cycle. These features in the AR(1) and sparse VAR models are only picked up after a period of h steps has passed, e.g. the models cannot anticipate the increase in consumption.
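The expanding-window scheme can be illustrated with the AR(1) benchmark, which admits a closed-form h-step-ahead forecast. This is a sketch of the evaluation loop under the setup described above (parameters fitted once on the training window, forecasts issued sequentially), not the authors' implementation.

```python
import numpy as np

def ar1_fit(x):
    """Least-squares AR(1) coefficient for a zero-mean series."""
    return float(x[:-1] @ x[1:] / (x[:-1] @ x[:-1]))

def expanding_window_ar1(series, n_train, h):
    """Expanding-window h-step-ahead AR(1) forecasts: the coefficient is
    estimated on the training window, then forecasts are generated
    sequentially as each new observation becomes available."""
    phi = ar1_fit(series[:n_train])
    n_test = len(series) - n_train
    preds = []
    for t in range(n_test - h):
        last = series[n_train + t]      # latest available observation
        preds.append(phi ** h * last)   # closed-form h-step AR(1) forecast
    return np.array(preds)
```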
A more systematic evaluation of the forecast performance is presented in Figure 11, where the average error is calculated for each building, for each of the different models. We see that for the 1-hour-ahead forecasts, all methods perform similarly, with the sparse DFM winning marginally and the AR(1) forecasts demonstrating more heterogeneity in performance. There is no clear winner across all the buildings: for most (30) buildings the DFM forecasts prove most accurate, with the AR being best on 2, and the SVAR winning on the remaining 10. Moving to the 6-hour-ahead forecasts, the dominance of the sparse DFM becomes clear, winning across 39 of the buildings, with the AR method winning on 3. Interestingly, the SVAR fails to win on any building, falling behind the simpler AR approach. This suggests that, in this application, the activity of one building may not impact that of another across longer time-frames; however, the behaviour of the latent factors (common component) does provide predictive power.
One could reasonably argue that we should not use these competitor models in this way for forecasting, e.g. we would likely look to add seasonal components corresponding to previous days/times, and/or potentially a deterministic (periodic) trend model. However, these extensions can also potentially be added to the DFM construction. Rather than attempting to provide the best forecasts possible, this case study aims to highlight the differences in behaviour across the different classes of models (univariate, multivariate sparse VAR, and sparse DFM), and the fact that the sparse DFM can borrow information from across the series in a meaningful way, not only to aid interpretation of the consumption, but also to provide more accurate forecasts by harnessing the common component.

Conclusion
In this paper, we have presented a novel method for performing inference in sparse Dynamic Factor models via a regularised Expectation Maximisation algorithm.
(Figure 11: 6-hour-ahead forecast performance, evaluated on one hold-out day of 144 − h data points. Each bar is coloured according to which method performs best for that building; Blue: SDFM, Red: AR(1), Navy: SVAR. The solid black line indicates average performance across all buildings, and the grouping of buildings is indicated via the dashed line under the plots.)
Our analysis of the related
QML estimator provides support for its ability to recover structure in the factor loadings, up to permutation of columns and scaling. To our knowledge, this is the first time the QMLE approach has been studied for the sparse DFM model, and our analysis extends recent investigations by Despois and Doz [2022] using more simplistic sparse-PCA-based approaches. When factors are thought to be dependent, e.g. as in our VAR(1) construction, the QMLE approach appears particularly beneficial relative to SPCA. We also validate that simple BIC-based hyper-parameter tuning strategies appear able to provide reasonable calibration of sparsity in the high-dimensional setting.
There is much further work that can be considered for the class of sparse DFM models proposed here, for example developing theoretical arguments on consistency, of both the factor loadings and the factor estimates themselves. In this paper, we opted for an empirical analysis of the EM algorithm, which we believe is more immediately useful for practitioners. Theoretical analysis of the proposed estimation routine is challenging for several reasons. First, one would need to decide whether to analyse the theoretical minimiser (QMLE) or the feasible estimate provided by the EM algorithm. Second, one needs to consider the performance as a function of both n and p. For example, Proposition 2 of Barigozzi and Luciani [2022] gives theoretical results for the consistency of factor loadings for the regular unregularised QMLE and a dense DFM model. A further line of work would be to generalise these results to the sparse DFM setting; for instance, can we show a result analogous to Theorem 1 in Bai and Li [2016], which shows the QMLE estimator of the loadings is equivalent to the OLS estimator applied to the true factors? These kinds of approaches could potentially enable a formal comparison of the sparse-PCA-based approaches and our QMLE approach.
On a more methodological front, one could consider extending the regularisation strategy presented here to different types of sparsity assumption, or indeed to encode other forms of prior. Two potential extensions would be to relax the assumption that the factor loadings remain constant over time, or to adopt a group-lasso type regularisation on the loadings. The latter would enable users to associate factors with pre-defined subsets of the observed series, but still in a somewhat data-driven manner. For instance, in the energy application we could consider grouping the series by type of building and encouraging sparsity at this grouped level, rather than at the building level. This could be particularly useful if we consider the application to smart meters at the sub-building, e.g. floor-by-floor or room-by-room, level. One of the benefits of the ADMM optimisation routine developed here is that it is easily extended to these settings.
A final contribution of our work is to demonstrate the application of the sparse DFM on a real-world dataset, namely the interpretation and prediction of smart meter data. Traditionally, the application of DFM-based models has been within the economic statistics community; however, there is no reason they should not find much broader utility. The application to modelling energy consumption in a heterogeneous environment is novel in itself, and serves to raise awareness of how the DFM can provide an exploratory tool for complex high-dimensional time series. In this case, not only is the sparse DFM beneficial for interpreting consumption patterns, identifying distinctive profiles of buildings that qualitatively align with our intuition, e.g. based on type of use, but also for forecasting consumption ahead of time. For the latter, the DFM can borrow from buildings with similar consumption profiles to better predict consumption peaks/dips further ahead in time.
Finally, we would like to remark that further applications of our proposed sparse DFM estimator can be found in our paper [Mosley et al., 2023], which also provides guidance on how to implement the methods in R. In particular, the application of the DFM to predicting trade-in-goods flows demonstrates that assuming sparse factors can improve forecast performance relative to the DFM, and that the structure of the loadings can be substantially altered as a function of α.

Figure 2 :
Figure 2: Median log-MAE score (top panel) and median F1 score (bottom panel) for recovering factor loadings across 100 experiments, with a shaded confidence band of the 25th and 75th percentiles. The plots represent a setting with a fixed n = 100 and varying number of variables p, where the cross-correlation parameter in the VAR(1) process is set to ρ = 0 (left plot), ρ = 0.6 (middle plot) and ρ = 0.9 (right plot).

Figure 3 :
Figure 3: Average MAE forecasting score as a function of the level of missing data in the last sample. From left to right: ρ = 0, ρ = 0.6, ρ = 0.9. The plot indicates the 50th percentile of performance across 100 experiments with n = 100, p = 64.

Figure 4 :
Figure 4: Summary of computational cost. Top: as a function of n, with fixed p = 24. Bottom: as a function of p, with fixed n = 100. Average performance across 10 experiments.

Figure 5 :
Figure 5: Example of time-series readings for the 24 days under analysis. The figures present the square root of the consumption in each hour (√6kWh) for different types of building, and illustrate the diverse nature of consumption.

Figure 6 :Figure 7 :
Figure 6: Top: Proportion of variance explained, based on PCA applied to the scaled and pre-processed (interpolated) dataset. Middle: Information criterion (18) as a function of the number of retained factors r. Bottom: BIC as a function of α; the vertical line indicates the minimiser and the α used in the subsequent analysis.

Figure 8 :
Figure 8: Estimated factors (black) with the original data (grey) as a function of time, using the optimal α = 0.01 chosen according to BIC. When multiplied by the factor loadings (top), these give the estimated common component.

Figure 9 :
Figure 9: Average factor profile as a function of time of day, with t = 0 corresponding to midnight. The solid line is a pointwise average of the factor â_{t|n} across the 18 weekdays in the sample; confidence intervals are constructed as ±1.96 times the standard deviation.

Figure 10 :Figure 11 :
Figure 10: Example of predicted consumption (√kWh) in one (accommodation) building on the campus. The top row represents 1-hour-ahead forecasts based on an expanding window, whilst the bottom represents 6-hour-ahead forecasts. The SDFM and SVAR are tuned on the 24 days of data prior to that presented in the figure. Confidence intervals for the SDFM are based on ±1.96 × [Λ̂ P_{t|n} Λ̂⊤ + Σ̂_ε]^{1/2}_{ii}.