
Dynamic Networks with Multi-scale Temporal Structure


We describe a novel method for modeling non-stationary multivariate time series, with time-varying conditional dependencies represented through dynamic networks. Our proposed approach combines traditional multi-scale modeling and network-based neighborhood selection, aiming to capture temporally local structure in the data while maintaining sparsity of the potential interactions. Our multi-scale framework is based on recursive dyadic partitioning, which recursively partitions the temporal axis into finer intervals and allows us to detect local network structural changes at varying temporal resolutions. The dynamic neighborhood selection is achieved through penalized likelihood estimation, where the penalty seeks to limit the number of neighbors used to model the data. We present theoretical and numerical results describing the performance of our method, which is motivated and illustrated using task-based magnetoencephalography (MEG) data in neuroscience.


The automated, simultaneous monitoring of each unit in a large complex system has become commonplace. Frequently the data observed in such a system is in the form of a high dimensional multivariate time series. Domain areas where such a paradigm is particularly pertinent include computational neuroscience (e.g., temporal imaging across voxels or brain regions) and finance (e.g., investment returns across stocks or levels of lending among central banks). The combination of system and time series in these settings suggests a role for dynamic network modeling, a quickly developing area of study in the field of network analysis.

As the basic object of treatment in this paper we consider a multivariate time series, \(\left (X_{t}(1),\cdots , X_{t}(N)\right )\), observed at each of N units at times t = 1,…,T, as a set of measurements from across a system. We will use a graph G = (V, E) to describe the conditional dependencies among the time series across the system. Here V = {1,…,N} are vertices corresponding to the N units in the system, and E is the collection of vertex pairs joined by edges. Given data, we seek to select an appropriate choice of G that best characterizes the system, using techniques of statistical modeling and inference. This task is known as network topology inference (Kolaczyk, 2009, Ch 7.3). The notion of association used in this paper is a type of partial correlation, analogous to that underlying so-called Granger causality (Granger, 1969). Granger causal types of models have been widely utilized in financial economics – see Hamilton (1983), Hiemstra and Jones (1994), and Sims (1972), for example – and in biological studies – see Mukhopadhyay and Chatterjee (2007) and Bullmore and Sporns (2009), for instance.

Granger causal models traditionally assume a stationary time series and take a vector-autoregressive (VAR) form. Here we adopt a restricted-VAR(p) model, defined as a VAR model without the self driven components:

$$ \begin{array}{@{}rcl@{}} X_{t}(u) = \sum\limits_{v \in V\backslash\{u\}}\sum\limits_{\ell=1}^{p} X_{t-\ell}(v)\theta^{(\ell)}(u,v) + \epsilon_{t}(u), \end{array} $$

where 𝜃(ℓ)(u, v) collects the influence of node v on node u at lag ℓ and 𝜖t(u) is independent Gaussian white noise. It is said that X(v) Granger causes X(u) if and only if 𝜃(ℓ)(u, v) ≠ 0 for some ℓ = 1,⋯ ,p. We use the term ‘restricted’ in describing this model because we restrict 𝜃(ℓ)(u, u) to be 0 for all u and ℓ. This requirement is made for notational convenience, and without loss of generality, in that it essentially assumes the self-driven component has been removed and that our network characterizes only relationships between distinct nodes. The notion of ‘network’ in this framework is made precise through graphs defined as a function of the underlying graphical model. That is, through conditional independence relations, coded in one-to-one correspondence with patterns of non-zero elements among the 𝜃(ℓ)(u, v). Specifically, G = (V, E) is a directed graph with an edge from v to u if and only if ∥𝜃(u, v)∥2 ≠ 0, where \(\boldsymbol \theta (u,v) = \left (\theta ^{(1)}(u,v),\cdots ,\theta ^{(p)}(u,v) \right )^{\prime }\).
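As a concrete illustration, data from the restricted VAR(p) model above can be simulated directly. The sketch below (a minimal example, with p = 1 and a 3-node coefficient matrix chosen arbitrarily for illustration) generates a realization and reads the directed edge set off the non-zero pattern of the coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, T = 3, 1, 500

# Lag-1 coefficient matrix: theta[u, v] is the influence of v on u; the
# diagonal is restricted to zero, so each series has no self-driven part.
theta = np.array([[0.0, 0.5, 0.0],
                  [0.0, 0.0, 0.4],
                  [0.3, 0.0, 0.0]])

X = np.zeros((T, N))
for t in range(1, T):
    X[t] = theta @ X[t - 1] + rng.normal(scale=0.1, size=N)

# Edge v -> u exists iff theta(u, v) != 0 (nodes 0-indexed here).
edges = [(v, u) for u in range(N) for v in range(N) if theta[u, v] != 0.0]
print(edges)   # [(1, 0), (2, 1), (0, 2)]
```

Since the spectral radius of the chosen coefficient matrix is below one, the simulated series is stationary, matching the setting of this section.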

Multivariate time series data is often non-stationary. Furthermore, it is not uncommon to expect changes in a system across multiple time scales. For example, it is widely recognized that financial time series of quantities like equity, interest, and credit can exhibit volatility across multiple scales (e.g., Fouque et al. 2011). Similarly, it is believed that neuronal dynamics within the cerebral cortex in the brain interact with anatomical connectivity in such a way as to produce functional connectivity relationships between brain regions at multiple time scales (Honey et al. 2007). These observations suggest the need for a notion of multi-scale analysis when doing network-based modeling of multivariate time series in systems like these. However, while temporal multi-scale analysis is a concept well-established in time series analysis, it does not appear to have yet emerged in network modeling.

Motivated by the elements of the above discussion, we focus in this paper on the problem of detecting dynamic connectivity changes across multiple time scales in a network-centric representation of a system, based on multivariate time series observations. Our approach combines the traditional Granger causal type of modeling with partition-based multi-scale modeling. We adopt a change point perspective, so that our model class consists of concatenations of restricted-VAR(p) models, each with its own 𝜃 constant over a given interval of time. The result is then a time-indexed directed graphical model, from which we define a dynamic network Gt = (V, Et), in analogy to the stationary case. Our goal is then to infer the change points distinguishing the stationary intervals and the corresponding edge sets Et.

A number of works in recent years have focused on modeling multivariate time series using causal network types of models. A common theme among these is to generalize the work of Meinshausen and Bühlmann (2006), who show that the Lasso can consistently recover the neighborhood structure of a Gaussian graphical model in high-dimensional settings under appropriate assumptions. Seminal examples of such extensions include Bolstad et al. (2011), who assume the time series are stationary and carry out variable selection using group-lasso principles, and Basu et al. (2015), who estimate network Granger causality for panel data using the group lasso. Similarly, in the work by Barigozzi and Brownlees (2014), networks are defined and inferred through use of the long-run partial correlation matrix between multiple time series. For non-stationary multivariate time series processes, Long et al. (2005) use time-varying auto-regressive models with adaptively chosen – but fixed – windows. These latter are applied to functional MRI data.

While we make use of ideas similar to those above, our approach is significantly different from those proposed previously in the sense that we incorporate them within a multi-scale framework. Multi-resolution analysis was formally proposed by Mallat (1989) and others in the late 1980s and is known for providing mathematically elegant, computationally efficient and often domain-specific representations of data that are inhomogeneous in their support. While there is by now a vast literature on the topic of multiscale statistical modeling, with literally scores of representations for standard signal and image analysis applications alone, a key representation is that of recursive dyadic partitioning. A fundamental result from Donoho (1997) relates the method of recursive dyadic partitioning and the selection of a best-orthonormal basis, where the basis is selected from a class of unbalanced Haar wavelets. The partition-based multi-scale method has proven to be particularly natural and useful in extending wavelet-like ideas to nontraditional settings, for example, in the context of generalized linear models, irregular spatial domains, etc. – see Kolaczyk and Nowak (2005), Louie and Kolaczyk (2006), and Willett and Nowak (2007), for instance. For a recent survey of statistical methods for network inference from time series, in general, see Betancourt et al. (2017, Sec 4.2).

Our main contribution in this paper is to present a partition-based multi-scale dynamic causal network model, and a corresponding method of network topology inference, that captures the dynamics of a system in a manner sensitive to changes at multiple time scales, while encouraging sparsity of network connectivity. There are three key elements in the framework: (i) we partition the non-stationary time axis into blocks at various scales, with independent, stationary VAR models indexed by blocks; (ii) to prevent overfitting, we impose a counting penalty to penalize the number of blocks used; and (iii) we do neighborhood selection within each block using a group-lasso type of estimator.

This paper is organized as follows. In Section 2, we provide the details of our partition-based dynamic multi-scale network model and methodology. In Section 3, we present several characterizations of theoretical properties of our estimator. The broad potential impact of our method is demonstrated in Section 4, through the use of both simulated data and a magnetoencephalography (MEG) data set. Technical proofs are provided in the Appendix. Code implementing the methodology proposed in this paper is available from

Partition-based multi-scale dynamic network models

In this section we define the class of dynamic network models developed in this paper, we describe our proposed approach to network inference within this class, and we summarize the implementation of this approach in the form of an algorithm.

Piecewise vector autoregressive models

We are interested in non-stationary multivariate time series, as the stationarity assumption required by traditional vector autoregressive modeling is overly restrictive in the types of financial and biological applications motivating our work. Accordingly, we define a class of restricted piece-wise vector autoregressive models. These models are of order p [rP-VAR(p)] and break the non-stationary time series into an unknown number M of stationary blocks, with a stationary restricted VAR(p) model within each block.

More specifically, we equip the parameters in our previously defined restricted VAR(p) model with a time index:

$$ X_{t}(u) = \sum\limits_{v\in V\backslash\{u\}}\sum\limits_{\ell=1}^{p} X_{t-\ell}(v) \theta^{(\ell)}_{t}(u,v) + \epsilon_{t}(u). $$

Next we restrict the coefficient vectors \(\boldsymbol \theta _{t}(u,v) = \left (\theta ^{(1)}_{t}(u,v),\cdots ,\theta ^{(p)}_{t}(u,v) \right )^{\prime }\) to be constant within each of the blocks defined by change points 0 = τ0 < τ1 < ⋯ < τM+1 = T. Finally, we assume independence of the multivariate time series across blocks. We then capture the evolving dependency structure of the data using a time-varying directed graph Gt = (V, Et) with an edge from \(v \rightarrow u\) if and only if ∥𝜃t(u, v)∥ 2≠ 0.

Certain of these choices could be relaxed, at the expense of a nontrivial increase in complexity of both computation and exposition. The assumption of independence between blocks could be relaxed to allow for weak dependence over p time steps just prior to and after each changepoint, following the suggestion in Davis et al. (2008, Remark 1). Additionally, we assume the number of lags p is fixed and known. In contrast, an unknown value of p in principle could be incorporated into our framework, with selection made through an additional penalty term.

To organize the collection of blocks defining our class of rP-VAR(p) models, we use the notion of recursive partitioning. This choice is both consistent with our goal of capturing multi-scale structure (as described above) and facilitates the development of sensible algorithms for computational purposes. We will consider two types of partitioning: recursive dyadic partitioning and (general) recursive partitioning. Without loss of generality, we consider partitioning restricted to the unit interval (0,1] interchangeably with partitioning of the interval (0,T]. A partition \(\mathcal {P}\) of (0,1] is a decomposition of the latter into a collection of disjoint subintervals whose union is the unit interval. In our treatment we restrict attention to partitions of finite cardinality.

Both recursive dyadic partitioning and recursive partitioning produce partitions \(\mathcal {P}\) by recursively partitioning the unit interval. They differ only in the rule defining the choice of partitions that may be produced at each iteration, with that for the former being more restrictive than that for the latter. Under recursive dyadic partitioning, starting with the unit interval, we recursively split some previously resulting interval into two sub-intervals of equal length. Under recursive partitioning more generally, the restriction to dyadic subintervals is removed. Under both approaches, partitioning is done only up to the resolution of the data. Therefore, with T observation times, partitioning is done only at the points \(\{i/T\}_{i=1}^{T-1}\), and only up to a total of T subintervals. Under recursive dyadic partitioning, we require that the number of observations \(T = 2^{J}\) be a power of two.
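The complete recursive dyadic partition is easy to enumerate explicitly; the short sketch below (illustrative only, with the function name our own) lists the dyadic subintervals of (0, 1] at every scale down to resolution \(T = 2^{J}\).

```python
def dyadic_intervals(J):
    """All intervals of the complete recursive dyadic partition of (0, 1]
    down to resolution 2**J, grouped by scale j = 0, ..., J."""
    return {j: [(k / 2**j, (k + 1) / 2**j) for k in range(2**j)]
            for j in range(J + 1)}

ivals = dyadic_intervals(3)          # T = 2**3 = 8 observation times
assert ivals[0] == [(0.0, 1.0)]      # the root: the whole unit interval
assert len(ivals[3]) == 8            # finest scale: one interval per sample
```

Any recursive dyadic partition compatible with the data is then a selection of disjoint intervals from this collection whose union is (0, 1].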

Let \(\mathcal {P}^{*}_{D_{y}}\) denote the complete recursive dyadic partition (with the dependence on T suppressed for notational convenience), and \(\mathcal {P}^{*}\), a complete recursive partition. Additionally, denote by \(\mathcal {P}\preceq \mathcal {P}^{*}_{D_{y}}\) (respectively, \(\mathcal {P}\preceq \mathcal {P}^{*}\)) a subpartition of \(\mathcal {P}^{*}_{D_{y}}\) (respectively, \(\mathcal {P}^{*}\)), i.e., as one of the partitions defined through the process of successive refinement from (0,1] to \(\mathcal {P}^{*}_{D_{y}}\) (respectively, \(\mathcal {P}^{*}\)). This notation helps emphasize one of the key advantages of the partition-based perspective, i.e., that algorithms to search efficiently over model spaces indexed by these partition classes can be designed to do so in \(\mathcal {O}(T)\) and \(\mathcal {O}(T^{3})\) computational complexity, respectively, using dynamic programming principles. See Kolaczyk and Nowak (2005). The advantage of recursive dyadic partitioning over recursive partitioning therefore typically is in computational cost. We will define a class of rP-VAR(p) models indexed by these partition classes and propose algorithms for model selection that exploit the accompanying dynamic programming principles.

Network Inference

The graphs G corresponding to the restricted piece-wise VAR(p) class of models we have introduced can be thought of as a union of the neighborhoods surrounding each node u. And, in fact, we will infer the topology of the network G neighborhood by neighborhood.

Consider, for example, the cartoon illustration in Fig. 1 where, without loss of generality, the focus is on the local neighborhood of a node/series u and T = 160 for illustration. From time [0,60), each of the four other nodes B, D, C, and E Granger causes u. From time [60,80), only node B Granger causes u, and for the rest of the time, B and D Granger cause u. Under our proposed approach, we estimate the times τm at which the changes happened. Given the estimated change points, we then infer the neighborhood structure during the time interval \([0,\hat {\tau _{1}})\), and then \([\hat {\tau }_{1}, \hat {\tau }_{2})\), and so on. Put simply, our approach is to estimate the change-points and the neighborhood structures within each stationary time-interval defined by those change-points, where the changepoints are defined through either a recursive dyadic partition or a recursive partition. We describe each of these two cases in turn below.

Figure 1: Cartoon version of the underlying network structure
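The cartoon's time-varying neighborhood can be written down directly; the snippet below simply encodes the three stationary blocks and the edge sets described above (the helper name is our own, for illustration).

```python
# Dynamic neighborhood of node u from the Fig. 1 cartoon: edge sets over
# the three stationary blocks defined by changepoints at t = 60 and 80.
changepoints = [0, 60, 80, 160]
neighborhoods = [{'B', 'C', 'D', 'E'}, {'B'}, {'B', 'D'}]

def neighbors_at(t):
    """Granger-causal parents of u at time t."""
    for (lo, hi), ne in zip(zip(changepoints, changepoints[1:]), neighborhoods):
        if lo <= t < hi:
            return ne
    raise ValueError("t outside [0, 160)")

assert neighbors_at(10) == {'B', 'C', 'D', 'E'}
assert neighbors_at(70) == {'B'}
assert neighbors_at(100) == {'B', 'D'}
```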

Suppose that our changepoints τi are restricted to correspond to the boundaries of some recursive dyadic partition. For a given node u, we estimate the vector \(\boldsymbol \theta \equiv \left (\theta _{t_{i}}^{(\ell )}(u,v)\right )\), defined for all nodes v ∈ V ∖{u} and at all times ti = i/T where i = 1,…,T, by choosing some optimal member from the classes rP-VAR(p) defined by all possible partitions \(\mathcal {P}\preceq \mathcal {P}^{*}_{D_{y}}\) of the unit interval. Formally, we define the space of all possible values of 𝜃

$$ \begin{array}{@{}rcl@{}} {\Gamma}_{RDP}^{(N-1)p} \!\!\!&\equiv&\!\!\! \left\{\boldsymbol{\theta} \left | \theta_{t}^{(\ell)}(u, v) = \beta_{0}^{(\ell)}(u,v) + \sum\limits_{I \in \ell_{NT}(\mathcal{P})}\beta_{I}^{(\ell)}(u,v) h_{I}(t) \right.\right.\\&& \left. \quad\forall \ell, v, \text{for some } \mathcal{P} \preceq \mathcal{P}_{D_{y}}^{*}\vphantom{\sum\limits_{I \in \ell_{NT}(\mathcal{P})}} \right\}, \end{array} $$

where \(\mathcal {P}\) is a partition common to all coefficient functions \(\theta ^{(\ell )}_{t}(u,v)\) across nodes v and lags ℓ, for each fixed u. In this expression, \(\ell _{NT}(\mathcal {P})\) is the set of all non-terminal (NT) intervals encountered in the construction of \(\mathcal {P}\), while \(\beta _{0}^{(\ell )}(u,v)\) and \( \beta _{I}^{(\ell )}(u, v)\) are the (non-zero) coefficients in a reparameterization of \(\theta _{t}^{(\ell )}(u, v)\) with respect to the unique (dyadic) Haar wavelet basis \(\{h_{I}\}_{I \in \ell _{NT}(\mathcal {P}^{*}_{Dy})}\) associated with the complete recursive dyadic partition \(\mathcal {P}^{*}_{Dy}\). In particular, a wavelet hI has as its support the interval I, and is proportional to the values 1 and − 1 on the two subintervals defined by a split at the midpoint of I. See Donoho (1997) or Kolaczyk and Nowak (2005), for example, for details on this correspondence between recursive dyadic partitions and classical Haar wavelet bases. It is this correspondence that makes explicit the multiscale nature of our approach.

Based on this model class, we define a complexity-penalized estimator \(\boldsymbol {\hat {\theta }}_{RDP}\) of 𝜃 as follows:

$$ \hat{\boldsymbol{\theta}}_{RDP} \!\equiv\! \operatornamewithlimits{arg min}_{\boldsymbol{\tilde\theta} \in {\Gamma}_{RDP}^{(N-1)p}}\left\{ \!-\log p\left( \mathbf{X}(u)|\mathbf{X}(-u),\boldsymbol{\tilde\theta}\right) + 2\sum\limits_{v \in V \backslash \{u\}}\text{Pen}_{RDP}(\boldsymbol{\tilde\theta}(u,v))\right\}. $$

Here X(−u) is the lagged design matrix of dimension T × (N − 1)p based on the observed time series information for all nodes except u. That is, we define X(−u) = (X(1),⋯ ,X(u − 1),X(u + 1),⋯ ,X(N)), with each X(⋅) a T × p matrix defined as X(⋅) = (X−1(⋅),⋯ ,X−p(⋅)), where X−ℓ(⋅) contains the lagged observations \(\mathbf {X}_{-\ell }(\cdot ) = (X_{T-\ell }(\cdot ),\cdots ,X_{-\ell +1}(\cdot ))^{\prime }\). The function \(\text {Pen}_{RDP}(\boldsymbol {\tilde \theta }(u,v))\) is the penalty imposed for incorporating node v into the model.

Now consider the case where the network changepoints τi are restricted to correspond to the boundaries of some arbitrary (i.e., non-dyadic) recursive partition. Define \({\mathscr{L}}\) to be the library of all (T − 1)! possible complete recursive partitions \(\mathcal {P}^{*}\), and let

$$ \begin{array}{@{}rcl@{}} {\Gamma}_{RP}^{(N-1)p}& \!\!\!\equiv\!\!\! & \left\{\boldsymbol{\theta} \left| \theta_{t}^{(\ell)}(u, v) = \beta_{0}^{(\ell)}(u,v) + \sum\limits_{I \in \ell_{NT}(\mathcal{P})}\beta_{I}^{(\ell)}(u,v) h_{I}(t) \forall \ell,v,\right.\right.\\&&\left. \text{ for some } \mathcal{P} \preceq \mathcal{P}^{*}, \mathcal{P}^{*} \in \mathcal{L}\vphantom{\sum\limits_{I \in \ell_{NT}(\mathcal{P})}} \right\}. \end{array} $$

Here \(\{h_{I}\}_{I \in \ell _{NT}(\mathcal {P}^{*})}\) is the unique (unbalanced) Haar wavelet basis corresponding to a given complete recursive partition \(\mathcal {P}^{*}\). As in the case of the classical dyadic Haar basis, there will be T piecewise constant basis functions for T time points, each indexed according to its support interval I and proportional in value to 1 or − 1 on two subintervals (except for one ‘father’ wavelet, defined to capture the average of \(\theta _{t}^{(\ell )}(u,v)\) over (0,T]). But, unlike before, the subintervals defining these wavelets are not necessarily of equal length. This definition allows, for example, for the representation of non-dyadic changepoints in a potentially more efficient manner (i.e., using fewer recursive splits). See Kolaczyk and Nowak (2005) for details.
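For concreteness, one standard discrete normalization of an unbalanced Haar vector is sketched below. The text only requires h_I to be proportional to 1 and − 1 on the two subintervals, so the particular scaling here (zero mean, unit Euclidean norm) is an illustrative choice, not the paper's prescription.

```python
import numpy as np

def unbalanced_haar(n, m):
    """Discrete unbalanced Haar vector on n points, split after point m:
    constant positive on the first m points, constant negative on the rest,
    scaled to have zero mean and unit Euclidean norm."""
    h = np.empty(n)
    h[:m] = np.sqrt((n - m) / (n * m))
    h[m:] = -np.sqrt(m / (n * (n - m)))
    return h

h = unbalanced_haar(8, 3)            # a non-dyadic split of 8 points
assert abs(h.sum()) < 1e-12          # orthogonal to the constant 'father'
assert abs(h @ h - 1.0) < 1e-12      # unit norm
```

Choosing m = n/2 recovers the classical balanced Haar case.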

Analogous to the dyadic case, our estimator defined under recursive partitioning is given by:

$$ \begin{array}{@{}rcl@{}} \hat{\boldsymbol{\theta}}_{\mathit{RP}} \!\equiv\! \operatornamewithlimits{arg min}_{\boldsymbol{\tilde\theta} \in {\Gamma}_{RP}^{(N-1)p}}\left\{ - \log p\left( \mathbf{X}(u)|\mathbf{X}(-u),\boldsymbol{\tilde\theta}\right) + 2\sum\limits_{v \in V\backslash \{u\}} \text{Pen}_{RP}(\boldsymbol{\tilde\theta}(u,v))\!\right\}. \end{array} $$

This is a maximum complexity-penalized likelihood estimator of 𝜃 defined on a much broader space. It includes all possible partitions that divide the unit interval into M ≤ T blocks, where sub-intervals need not necessarily be of equal size. This increase in richness of representation, however, will be seen to come at a computational cost.

The penalty function used to define these two estimators is described as follows. Define the p-length vector 𝜃I(u, v) to be the collection of (fixed) values \(\theta ^{(\ell )}_{t}(u,v)\) over all lags ℓ = 1,…,p for t ∈ I. For recursive partitioning, we then define the penalty of incorporating a given node v into the model to be

$$ \begin{array}{@{}rcl@{}} \text{Pen}_{RP}(\boldsymbol{\theta}(u, v)) = \frac{3}{2}\#\{\mathcal{P}(\boldsymbol{\theta})\} \log T + \lambda\sum\limits_{I \in \mathcal{P}(\boldsymbol{\theta})} \|\boldsymbol{\theta}_{I}(u, v)\|_{2}. \end{array} $$

For recursive dyadic partitioning, we replace the value 3/2 by 1/2, indicating that we penalize less severely in the simpler model class.

Note that this penalty is composed of two parts. In the first part, \(\#\{\mathcal {P}(\boldsymbol {\theta })\}\) is the cardinality of the partition \(\mathcal {P}(\boldsymbol {\theta })\) corresponding to a given value 𝜃 in \({\Gamma }^{(N-1)p}_{RDP}\) or \({\Gamma }^{(N-1)p}_{RP}\). Because this partition is assumed common across lags ℓ and for all v ∈ V ∖{u}, it may be thought of as a union, i.e., \(\mathcal {P}(\boldsymbol {\theta }) = \bigcup \limits _{v} \mathcal {P}(\boldsymbol {\theta }(u,v))\), where \(\mathcal {P}(\boldsymbol {\theta }(u,v))\) is a partition corresponding specifically to the dynamic behavior of the coefficients \(\theta ^{(\ell )}_{t}(u,v)\) collectively over all lags ℓ. Thus the contribution of \(\#\{\mathcal {P}(\boldsymbol {\theta })\}\) to the penalty may be thought of as counting the number of times there is a need to insert a changepoint due to a change in the relation of node u with any other node v at any lag ℓ. That is, it controls the number of partitions for the entire neighborhood.

The second part of the penalty in Eq. 2.6 is a sum, over intervals I in the relevant partition \(\mathcal {P}\), of the ℓ2 norms of the corresponding coefficient lag vectors. It is essentially a group lasso type penalty, in the spirit of that originally proposed by Yuan and Lin (2006), with tuning parameter λ. The purpose of introducing this term is to encourage sparseness in the connectivity of each neighborhood, and hence of the network as a whole. Our use of the group lasso here derives from the definition of our network G, where an edge is present regardless of the lag at which a causal effect of a node v on the node u occurs. The choice of tuning parameter controls the amount of shrinkage of the group of coefficients. Large λ results in sparser coefficient vectors. We describe a method for choosing the tuning parameter in Section 3.
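The two-part penalty can be sketched numerically as follows. This is a simplified illustration treating the partition for a single candidate neighbor v in isolation (the function name and block representation are ours), not the full neighborhood-level estimator.

```python
import numpy as np

def penalty(theta_blocks, T, lam, dyadic=False):
    """Penalty for incorporating one neighbor v: a counting term on the
    number of blocks in the partition plus a group-lasso term summing the
    l2 norms of the p-length lag-coefficient vectors, one per block.
    theta_blocks: list of length-p arrays, one per partition interval."""
    c = 0.5 if dyadic else 1.5        # lighter penalty for the dyadic class
    count_term = c * len(theta_blocks) * np.log(T)
    group_term = lam * sum(np.linalg.norm(b) for b in theta_blocks)
    return count_term + group_term

blocks = [np.array([0.5, -0.2]), np.array([0.0, 0.0])]   # p = 2, two blocks
pen = penalty(blocks, T=128, lam=0.1)
```

Note how the second block, being identically zero, contributes nothing to the group term: the group-lasso part penalizes only neighbors that are actually active on a given interval.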


In this section, we discuss the implementation of our proposed methods of inference. For both the recursive dyadic partitioning estimator in Eq. 2.3 and the recursive partitioning estimator in Eq. 2.5, the general structure of the algorithm is similar. We describe the latter here and, for the sake of completeness, provide the former in the Appendix.

Algorithm 1

Calculation of the estimator Eq. 2.5 can be accomplished as detailed in Algorithm 1. The required inputs are the time series X(u) for node u, the lagged time series X(−u) for all other nodes, and a prespecified number of lags p. Note that p + 1 is the minimum number of observations necessary to fit a model with p lags. Initially we set the penalized likelihood to be the sum of squares of the data in the intervals I that contain fewer than the minimum required number of observations. There are (T − 1)! possible ways of partitioning (i.e., complete recursive partitions \(\mathcal {P}^{*}\)) in the library \({\mathscr{L}}\). Each partition, however, is composed only of subsets of \({T+1}\choose {2}\) unique intervals, given that each interval is defined between two endpoints. The algorithm begins by fitting group lasso penalized models on intervals I that contain at least p + 1 observations. Therefore we have \(\mathcal {O}(T^{2})\) calls for fitting the group lasso type of models. (Because solving the group lasso regression generally requires iterative convex optimization, we do not quantify specifically the corresponding time complexity of this step.) We then consider intervals that contain 2(p + 1) observations and compare the penalized likelihood plI in those intervals to the sum of the penalized likelihoods of the optimal sub-intervals containing p + 1 observations, retaining the one with the smaller value. The procedure is repeated for intervals containing k observations, with k = 2(p + 1) + 1,⋯ ,T. There are (k − 1) ways of partitioning an interval of length k into two. Let \(\{{I_{l}^{i}}, {I_{r}^{i}}\}_{i=1}^{k-1}\) be all possible pairs of subintervals of I such that \({I_{l}^{i}} \cup {I_{r}^{i}} = I\). We compare the penalized likelihood plI, defined in Eq. 2.5 but restricted to I, versus \(\min \limits _{i}\{ pl_{{I_{l}^{i}}} + pl_{{I_{r}^{i}}} + \text {Penalty}\}\), and select the optimal model to be the one with the smallest value. The comparison step is of order \(\mathcal {O}(T^{3})\), and thus the total computational cost is \(\mathcal {O}(T^{2})\) calls to group lasso fitting plus \(\mathcal {O}(T^{3})\) comparisons.
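The bottom-up dynamic-programming structure of Algorithm 1 can be illustrated on a simplified surrogate. The sketch below (our own illustration, not the paper's implementation) replaces the group-lasso block fit with an ordinary least-squares fit to the block mean of a univariate series, but retains the essential step of comparing each interval's unsplit penalized cost against all of its binary splits; with \(\mathcal {O}(T^{2})\) cached intervals and \(\mathcal {O}(T)\) candidate splits per interval, the comparisons are \(\mathcal {O}(T^{3})\), as in the text.

```python
import numpy as np
from functools import lru_cache

def best_partition(x, gamma):
    """Dynamic-programming search over all recursive partitions of x.
    Mirrors Algorithm 1, but with the group-lasso block fit replaced by a
    least-squares fit to the block mean (a 1-D surrogate); gamma plays the
    role of the per-block counting penalty."""
    T = len(x)

    @lru_cache(maxsize=None)
    def cost(i, j):                      # optimal penalized cost on x[i:j]
        block = x[i:j]
        best = np.sum((block - block.mean()) ** 2) + gamma   # no split
        cps = ()
        for s in range(i + 1, j):        # all (j - i - 1) binary splits
            cl, pl = cost(i, s)
            cr, pr = cost(s, j)
            if cl + cr < best:
                best, cps = cl + cr, pl + (s,) + pr
        return best, cps

    return cost(0, T)[1]                 # recovered changepoints

x = np.r_[np.zeros(20), 5 * np.ones(20)]  # one jump at t = 20
print(best_partition(x, gamma=1.0))       # (20,)
```

The memoization over interval endpoints is what keeps the search over the super-exponential library of recursive partitions polynomial in T.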

Theoretical properties

In the previous section, we introduced our partition-based approach to modeling dynamical changes in the dependency relational structure among multiple time series, defined two estimators of the time-varying parameters underlying our models, and described an appropriate algorithm for calculations. In this section, we first show that the proposed approach can estimate a change point consistently. We then present an empirically-based choice of the penalty parameter λ in Eq. 2.6 and show that through this choice we can control the Type I error rate in recovering the true neighborhood structure of a node u within a given stationary time block. Finally, we quantify the overall risk behavior of our estimators.

Consistency of changepoint estimation

Suppose that there is a single change point at time τ, with 1 < τ < T. Then under our approach the time series X(u) can be written as a concatenation of two parts of length τ and T − τ. We use L to denote the set of all observations in the pre-τ period and use R to denote the set of all observations in the post-τ period. Then we have:

$$ \begin{array}{@{}rcl@{}} X_{t}(u) = \left\{ \begin{array}{l} \sum\limits_{v\in V\backslash \{u\}}\sum\limits_{\ell = 1}^{p}X_{t-\ell}(v)\theta_{L}^{(\ell)}(u,v) + \epsilon_{t}(u),\quad t \in [1, \tau]\\ \sum\limits_{v\in V\backslash \{u\}}\sum\limits_{\ell = 1}^{p}X_{t-\ell}(v)\theta_{R}^{(\ell)}(u,v) + \epsilon_{t}(u),\quad t \in (\tau, T]. \end{array} \right. \end{array} $$
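A minimal simulation of this single-changepoint model is sketched below (the two coefficient matrices are arbitrary illustrative choices, with zero diagonals as the restricted model requires).

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, tau = 3, 200, 120

# Off-diagonal lag-1 coefficients before (theta_L) and after (theta_R)
# the single changepoint tau; diagonals are zero (restricted model).
theta_L = np.array([[0.0, 0.6, 0.0],
                    [0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0]])
theta_R = np.array([[0.0, 0.0, 0.5],
                    [0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0]])

X = np.zeros((T, N))
for t in range(1, T):
    theta = theta_L if t <= tau else theta_R
    X[t] = theta @ X[t - 1] + rng.normal(size=N)
```

Before τ, node 1 Granger causes node 0; after τ, node 2 does — exactly the kind of neighborhood change the test below is designed to detect.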

Our change point selection consistency result extends the result of Bach (2008), where the estimation consistency of the group lasso regression is established. The assumptions needed are the same as in that previous work, which we briefly restate here.

Assumption 1.

Xt(u) and Xt(−u) have finite fourth-order moments: \(\mathbb {E}(X_{t}(u))^{4} < \infty \) and \(\mathbb {E}\|\mathbf {X}_{t}(-u)\|^{4} < \infty \).

Assumption 2.

The joint covariance matrix, defined as \({\Sigma }_{\mathbf {X}_{t}(-u) \mathbf {X}_{t}(-u)} := \mathbb {E} (\mathbf {X}_{t}(-u)^{\prime } \mathbf {X}_{t}(-u)) - \left (\mathbb {E} \mathbf {X}_{t}(-u)\right )^{\prime }\left (\mathbb {E} \mathbf {X}_{t}(-u)\right ) \in \mathbb {R}^{(N-1)p \times (N-1)p}\), is invertible.

Assumption 3.

We denote by \(\boldsymbol {\hat {\theta }}_{t}\) any minimizer of \(\mathbb {E}\left (X_{t}(u)-\mathbf {X}_{t}(-u)\boldsymbol \theta _{t} \right )^{2}\). We assume that \(\mathbb {E}\left (\left (X_{t}(u)-\mathbf {X}_{t}(-u)\boldsymbol {\hat {\theta }}_{t} \right )^{2}|\mathbf {X}_{t}(-u)\right )\) is almost surely greater than some \(\sigma _{\min }^{2} > 0\).

Assumption 4.

\(\max \limits _{v\in S^{c}}\frac {1}{p} \left \| {\Sigma }_{\mathbf {X}(v)\mathbf {X}(S)} {\Sigma }_{\mathbf {X}(S)\mathbf {X}(S)}^{-1} \text {Diag}(1/\|\boldsymbol {\theta }_{t}(u,v)\|_{2})\boldsymbol {\theta }_{t}(u,S)\right \|_{2} < 1\), where S is the set of nodes in the neighborhood of node u (i.e., those with ∥𝜃t(u, v)∥2 ≠ 0), and Diag(1/∥𝜃t(u, v)∥2) denotes the block-diagonal matrix of size |S|p in which each diagonal block equals \(\frac {1}{\|\boldsymbol \theta _{t}(u,v)\|_{2}}\mathbf {I}_{p}\), with \(\mathbf {I}_{p}\) the identity matrix of size p. Here 𝜃t(u, S) denotes the concatenation of the coefficient vectors indexed by S.

Note that when p = 1, Assumption 4 is referred to as the strong irrepresentable condition in Zhao and Yu (2006).

Assumption 5.

The size of the network increases no faster than the square root of the length of the time series: ∃γ > 0, such that \(N= \mathcal {O}(T^{\gamma })\) as \(T \rightarrow \infty \) for γ < 1/2.

Consider the local test of

$$ \begin{array}{@{}rcl@{}} H_{0} : \mathcal{P} = [1, T]\quad \text{vs.} \quad H_{1}: \mathcal{P} = [1, \tau] \cup (\tau, T], \end{array} $$

using group lasso penalized least squares. This test corresponds to the basic step of comparing models for two adjacent intervals at the heart of Algorithm 1 (i.e., one model for the union versus a separate model for each interval), where the penalty is simply the second component of PenRP in Eq. 2.6. We have the following theorem:

Theorem 3.1.

Assume that Assumptions 1 to 5 are satisfied, and that λ varies such that \(\lambda \rightarrow 0\), \(\lambda N \rightarrow 0\), and \(\lambda T^{1/2} \rightarrow \infty \) as \(T\rightarrow \infty \). Then we have that

$$ \begin{array}{@{}rcl@{}} &\mathbb{P}_{H_{0}}\left( \text{Decide } \mathcal{P} = [1, T] \right) \longrightarrow 1 \end{array} $$
$$ \begin{array}{@{}rcl@{}} &\mathbb{P}_{H_{1}}\left( |\hat{\tau} - \tau| > \epsilon \right) \longrightarrow 0,\quad \forall \epsilon > 0. \end{array} $$

Theorem 3.1 contains two parts. The first part states that when the null hypothesis is true – that is, when the time series contains no change point – our method favors the model with no change point. The second part states that under the alternative hypothesis, where there is a change point at τ, our method favors the model with one estimated change point \(\hat {\tau }\) and, furthermore, the probability that \(\hat {\tau }\) differs from τ by an arbitrary amount 𝜖 tends to zero. The proof can be found in the Appendix. The proof technique can be generalized for the case of multiple change points, although it would require appropriate conditions on the number of change points M and the number of data points T.

Finite sample control of Type I error rate in neighborhood selection

We see that consistent splitting and change point estimation can be achieved with the group lasso type of estimation. However, our asymptotic result offers little guidance on how to choose a specific penalty parameter for a given problem. We propose a way to adaptively choose the penalty parameter λ, given a stationary time interval. For a specific λ, we guarantee that the probability of committing a certain notion of Type I error in recovering the connected component corresponding to a fixed node u is less than some user-specified level α. The connected component \(C_{u}\) of a node \(u\in V\) is defined as the set of nodes connected to node u by a chain of edges. We denote the neighborhood of node u by \(\text{ne}_{u}\). The neighborhood \(\text{ne}_{u}\) is clearly part of the connected component \(C_{u}\). To guarantee the accuracy of the neighborhood selection, we need the following additional assumption:

Assumption 6.

Denote by Θ = BV (C) the ball of functions of bounded variation for some constant C. We assume that \(\theta _{(\cdot )}^{(\ell )}(u,v)\in {\Theta }\), for all \(\ell = 1,\cdots ,p\) and all \(v\in V\setminus \{u\}\):

$$ \begin{array}{@{}rcl@{}} \sup_{J \geq 2}\sup_{t_{1} \leq {\cdots} \leq t_{J}}\sum\limits_{j = 2}^{J}\left|\theta_{t_{j}}^{(\ell)}(u,v) - \theta_{t_{j-1}}^{(\ell)}(u,v) \right| < C. \end{array} $$

This assumption implies that \(\|\boldsymbol\theta_{t}(u,v)\|_{2}\) is bounded.

In the case where X(u) is stationary on a given interval [1,T], we have the following theorem regarding the estimated connected component \(\hat {C_{u}}\):

Theorem 3.2.

Assume Assumptions 1 to 6 hold, and fix α ∈ (0,1). If X(u) is stationary on [1,T] and the penalty parameter λ(α) is chosen such that

$$ \begin{array}{@{}rcl@{}} \lambda(\alpha) = 2\hat{\sigma}(u)\sqrt{pQ\left( 1 - \frac{\alpha}{N(N-1)}\right)}, \end{array} $$

where \(\hat {\sigma }^{2}(u) = \|\mathbf {X}(u)\|_{2}^{2}/T\) and Q(⋅) is the quantile function of the χ2(p) distribution, then

$$ \begin{array}{@{}rcl@{}} \mathbb{P}\left( \exists u\in V: \hat{C_{u}} \nsubseteq C_{u} \right) \leq \alpha. \end{array} $$

Theorem 3.2 says that by choosing the penalty parameter at λ = λ(α), the probability of falsely joining two distinct connected components with the estimate of the edge set is bounded above by the level α. The proof of the theorem is provided in the Appendix.
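As a concrete illustration, the penalty parameter can be computed directly from data. The sketch below follows the displayed formula literally (with p inside the square root as printed); the function name and the illustrative inputs are our own, not part of the paper.

```python
import numpy as np
from scipy.stats import chi2


def penalty_parameter(x_u, N, p, alpha=0.05):
    """Sketch of the adaptive choice lambda(alpha) in Theorem 3.2.

    x_u   : observed series at node u (length T)
    N     : number of nodes in the network
    p     : lag order of the VAR model
    alpha : user-specified bound on the Type I error probability
    """
    T = len(x_u)
    sigma_hat = np.sqrt(np.sum(np.asarray(x_u) ** 2) / T)  # hat{sigma}(u)
    # Q(.) is the quantile function of the chi-squared(p) distribution;
    # its argument applies a Bonferroni-style correction over the
    # N(N-1) ordered node pairs.
    q = chi2.ppf(1.0 - alpha / (N * (N - 1)), df=p)
    return 2.0 * sigma_hat * np.sqrt(p * q)
```

As expected from the formula, a smaller α (a stricter error bound) yields a larger penalty and hence a sparser estimated neighborhood.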

Risk analysis

We now provide a theorem that gives an upper bound on the risk of the estimators \(\boldsymbol {\hat {\theta }}_{RDP}\) and \(\boldsymbol {\hat {\theta }}_{RP}\). Through this approach we provide a certain measure of quality for the overall dynamic network inference procedure. Following the perspective of Li and Barron (2000), as implemented in Kolaczyk and Nowak (2005), we measure the loss of estimating 𝜃 by \(\hat {\boldsymbol \theta }\) in terms of the squared Hellinger distance between the two corresponding conditional densities:

$$ \begin{array}{@{}rcl@{}} L(\hat{\boldsymbol\theta}, \boldsymbol\theta) &\equiv& H^{2}(p_{\hat{\boldsymbol\theta}},p_{\boldsymbol\theta}) \\ & =& \int \left[\sqrt{p_{\hat{\boldsymbol\theta}}(\mathbf{x}|\mathbf{X}(-u))} - \sqrt{p_{\boldsymbol\theta}(\mathbf{x}|\mathbf{X}(-u))}\right]^{2}d\nu(\mathbf{x}) \end{array} $$

with respect to some dominating measure ν(x). Additionally, define the Kullback-Leibler divergence between two densities of X(u), conditional on the past of all the neighborhood time series:

$$ \begin{array}{@{}rcl@{}} K(p_{\boldsymbol\theta^{1}}, p_{\boldsymbol\theta^{2}}) \equiv \int \log \frac{p(\mathbf{x}|\mathbf{X}(-u),\boldsymbol\theta^{1})}{p(\mathbf{x}|\mathbf{X}(-u),\boldsymbol\theta^{2})}p(\mathbf{x}|\mathbf{X}(-u),\boldsymbol\theta^{1})d\nu(\mathbf{x}). \end{array} $$

Theorem 3.3.

Denote the loss function of estimating 𝜃 by \(\boldsymbol {\hat {\theta }}\) by \(L(\boldsymbol {\hat {\theta }}, \boldsymbol {\theta })\) and the corresponding risk by \(R(\boldsymbol {\hat {\theta }}, \boldsymbol {\theta }) = T^{-1}\mathbb {E}_{\mathbf {X}(u)|\mathbf {X}(-u)} \left [ L(\boldsymbol {\hat {\theta }}, \boldsymbol {\theta }) \right ]\). Let Λ = αmax/T, where αmax is the largest eigenvalue of \(\mathbf {X}(-u)^{\prime } \mathbf {X}(-u)\). Assume each \(\theta ^{(\ell )}_{t} (u,v)\) is of bounded variation on (0,1] for some constant C. Then for any λ of the same order as in Theorem 3.1 and for \(T > \lceil e^{2p/3} \rceil \), the risk is bounded as

$$ R(\boldsymbol{\hat{\theta}}_{RDP}, \boldsymbol{\theta}) \le \mathcal{O}\left( \left( \frac{\Lambda \log^{4}T}{T}\right)^{1/3}\right) $$

for recursive dyadic partitioning and

$$ R(\boldsymbol{\hat{\theta}}_{RP}, \boldsymbol{\theta}) \le \mathcal{O}\left( \left( \frac{\Lambda \log^{2}T}{T}\right)^{1/3}\right) $$

for recursive partitioning.

Theorem 3.3 shows that both estimators have risks that tend to zero at rates slightly worse than T− 1/3. The asymptotic risk for recursive partitioning is smaller than that for recursive dyadic partitioning, albeit at the cost of increased computational complexity. The proof of this result follows the work of Kolaczyk and Nowak (2005) and can be found in the Appendix.

Simulation study

In this section, we illustrate the practical performance of our method through a series of simulation studies. In the first part, we simulate multivariate time series data under different settings, as dictated by Models A–C below. In the second part, we scale up Model B by increasing the size of the vertex set V to include more irrelevant variables. Under each model, we simulate 100 datasets, and the white noise is always set to \(\epsilon _{t}(\cdot ) \sim N(0,1)\). In all models, we set α = 0.05 and p = 2. These choices match those of the computational neuroscience example we present later, in Section 5. We measure performance in three ways: (i) how many change points were detected; (ii) of the detected change points, how many identify the right location; and (iii) whether the correct neighborhood structure was detected. The models we investigate are:

  • Model A: VAR(2) process with no change point.

    This scenario is designed to assess the performance of the methods when there is no change point and the process is stationary. Specifically,

    $$ \begin{array}{@{}rcl@{}} X_{t}(1) = 0.5 X_{t-1}(2) + 0.25X_{t-2}(2) + 0.5 X_{t-1}(3) + 0.25X_{t-2}(3) + \epsilon_{t}(1) \end{array} $$

    with sample size T = 1024.

  • Model B: piecewise stationary VAR(2) process with 2 change points. Specifically,

    $$ \begin{array}{@{}rcl@{}} X_{t}(1) = \left\{ \begin{array}{l} 0.5X_{t-1}(2) + 0.25X_{t-2}(2)+\epsilon_{t}(1) \quad \quad 0 < t \leq 512\\ 0.5X_{t-1}(3) + 0.25X_{t-2}(3) + \epsilon_{t}(1) \quad \quad 512<t \leq 768\\ 0.5X_{t-1}(2) - 0.5X_{t-1}(3)+\epsilon_{t}(1) \quad \quad 768 < t \leq 1024 \end{array}\right. \end{array} $$
  • Model C: change point close to the boundary. Specifically,

    $$ \begin{array}{@{}rcl@{}} X_{t}(1) = \left\{ \begin{array}{l} 0.5X_{t-1}(2) + 0.25X_{t-2}(2) + \epsilon_{t}(1) \quad \quad 0 < t \leq 128\\ 0.5X_{t-1}(3) + 0.25X_{t-2}(3) + \epsilon_{t}(1) \quad \quad 128<t \leq 1024 \end{array}\right. \end{array} $$
  • Model B with VAR(2) process in a larger vertex set V.

    We use the same coefficients as used in Model B, but with the size of the vertex set ranging from 5 to 15.
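For reference, Model B can be simulated as below. The text specifies only the equation for X(1); as an assumption for illustration, the driving series X(2) and X(3) are taken here to be independent N(0,1) white noise, and the function name is our own.

```python
import numpy as np


def simulate_model_b(T=1024, seed=0):
    """Simulate the piecewise-stationary VAR(2) series X(1) of Model B.

    X(2) and X(3) are assumed to be i.i.d. N(0,1) here; the text does
    not specify their dynamics, only the equation for X(1).
    """
    rng = np.random.default_rng(seed)
    x2 = rng.normal(size=T)
    x3 = rng.normal(size=T)
    eps = rng.normal(size=T)   # white noise epsilon_t(1)
    x1 = np.zeros(T)
    for t in range(2, T):
        if t < 512:            # first regime: driven by X(2)
            x1[t] = 0.5 * x2[t - 1] + 0.25 * x2[t - 2] + eps[t]
        elif t < 768:          # second regime: driven by X(3)
            x1[t] = 0.5 * x3[t - 1] + 0.25 * x3[t - 2] + eps[t]
        else:                  # third regime: both drivers, lag 1 only
            x1[t] = 0.5 * x2[t - 1] - 0.5 * x3[t - 1] + eps[t]
    return x1, x2, x3
```

The two regime boundaries at t = 512 and t = 768 are the change points the estimators are asked to recover.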

The results for Models A, B, and C are summarized in Table 1. For some error measures, results under the truth are marked in blue. For example, under Model A, where there is no change point in the true model, the positions corresponding to 0 change points and 0 exact detections are marked in blue; i.e., one should not detect anything where there is no change point. Under Model B, where there are two change points, results corresponding to the case of two change points and two exact detections are marked in blue. Note that in the case of recursive partitioning (i.e., non-dyadic), we treat a detection as 'exact' if an estimated change point is within ±5 time points of the true change point (i.e., less than 0.5% of the length of the full time series).

Table 1 Simulation results under Model A, Model B and Model C, using RDP and RP

A few comments on these results are in order:

  • From the results we see that our proposed estimators did not overestimate the number of change points: in no trial did they detect more change points than the true number.

  • Under Model B, in 72 out of 100 trials using the recursive dyadic partition estimator and in 89 out of 100 trials using the recursive partition estimator, we correctly specified both the number and the positions of the change points. Note that if we are less conservative and allow more tolerance in defining an 'exact detection' under recursive partitioning, all change points identified in Model B using recursive partitioning are located within [− 13,13] points of the true change points (i.e., within 1.5% of the total length of the full time series).

  • Based on the results under Model C, we conclude that our methods lose sensitivity in detecting change points as their locations move closer to the boundary, with recursive partitioning performing better than recursive dyadic partitioning. These results are to be expected. We have not observed the same performance degradation when the change point is away from the boundary (results not shown).

  • The methods exhibited good control over the false detection of causal structure.

The performance of the proposed estimators as the size N of the vertex set V increases, under Model B, is summarized in Table 2. As N increases, performance decreases, because the variables being added in this setting are irrelevant and thus induce additional uncertainty. Note that under our proposed approach there is a tendency to underfit rather than overfit the number of change points. This trait will be relevant to the real data application we describe next.

Table 2 Simulation results under Model B for vertex sets of increasing cardinality

Illustration: Inference of a task-based MEG network

Neuroscientists are interested in understanding the interactions among cortical areas that allow subjects to detect the motion of objects. In Calabro and Vaina (2012), fMRI was used to study subjects asked to perform visual search tasks, and it was found that the monitored regions of interest (ROIs) formed four clusters. However, fMRI does not have the temporal resolution needed for a more detailed investigation of the interaction between these clusters. Rana and Vaina (2014) studied the 10 Hz Alpha-band power extracted from MEG signals under a similar multiple-trial visual motion search experiment. They found evidence that regions of interest within the identified clusters have similar temporal activation profiles. Specifically, they found significant inhibition of 10 Hz alpha power in the visual processing region after 300 ms relative to the stimulus, and longer, sustained alpha power in the frontoparietal region. Other evidence of co-activation among regions of interest has been reported by other studies under different experimental setups; for example, see Braddick et al. (2000), Amano et al. (2012), and Bettencourt and Xu (2016).

To demonstrate the application of our method, we examined the same 10 Hz Alpha-band power data used by Rana and Vaina (2014). MEG data has excellent temporal resolution, but its spatial resolution is poorer than that of fMRI. As a result, functional connectivity analyses with MEG data typically incorporate coarsely defined brain regions and hence networks with only a handful of vertices. We therefore chose three regions of interest from each of the two clusters known to have similar activation profiles: V3a, MT+, and VIP from the visual processing region, and FEF, SPL, and DLPFC from the frontoparietal region. This choice corresponds to a network of six nodes, which is consistent with studies of this type.

Details of the experiment and the data are as follows. In the experiment, a participant was asked to perform a visual search task for a moving object, repeated over 160 trials. Each trial began with a 300 ms blank screen. Then, nine spheres faded in over a 1000 ms period and remained static for another 1000 ms. A 1000 ms motion display period then followed, during which eight of the spheres moved forward (simulating forward motion of the observer) and the target sphere moved independently of the others. The beginning of the motion display period defined the 0 ms marker for each trial. Finally, in the 3000 ms response period, the nine spheres remained static, four (including the target) were grayed out, and the participant was asked to identify the target sphere.

The MEG signal of the participant was recorded throughout the experiment. The data we used are the 10 Hz Alpha-band power, truncated in a uniform manner across trials to focus on the period just prior to the appearance and movement of the spheres. The data start from the second half of the static period and have length T = 1502, corresponding to a time interval of 2500 ms. The time series we used for our analyses contains the last 500 ms of the static period, the entire motion display period, and the first 1000 ms of the response period, where most of the correct responses occurred. The timeline of our data is illustrated in Fig. 2. For a more detailed description of the experiment, please refer to Rana and Vaina (2014).

Figure 2: Visual search experiment timeline

Each time series was pre-processed by taking the first-order difference to remove the self-driven component. We then applied the recursive partition based method with lag p = 7 (chosen in a preliminary analysis using the Akaike information criterion). We set the level α in Theorem 3.2 to 0.05. The recursive dyadic method does not apply here because the length of the data is not a power of 2.
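The first-order differencing used in this pre-processing step is just a lagged difference, shortening each series by one point. A minimal sketch on arbitrary illustrative values (not the MEG data):

```python
import numpy as np

# First-order differencing to remove the self-driven component of a
# series; the values below are arbitrary, for illustration only.
power = np.array([1.0, 1.5, 1.2, 2.0, 1.8])
diffed = np.diff(power)  # X_t - X_{t-1}, length T - 1
```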

Figure 3 shows the distribution of the detected change points within each of the two clusters we examined. The two dashed vertical lines indicate the times of the two phase changes. There are 497 change points detected across the 160 trials in the visual processing region, of which 427 lie between −150 ms and 750 ms relative to the stimulus onset. Compared with the visual processing region, far fewer change points are detected among the frontoparietal regions, where the Alpha-band power is more sustained.

Figure 3: The change point distribution in the visual processing region and the frontoparietal region

The strength of the connections between regions of interest, within each of the two clusters, is shown in Figs. 4 and 5, where we have plotted the pointwise means and one-standard-deviation error bars of the ℓ2 norms of the coefficients across the 160 trials. The inhibitive role of the Alpha-band power in the visual processing region (i.e., the creation of a common co-deactivation pattern), in response to the stimulus, is understood to be the reason for the significant increase in the ℓ2 norms of the coefficients among V3a, MT+ and VIP from −150 ms to 750 ms. Indeed, most of the change points in this time interval among these three regions of interest correspond to an increase in the ℓ2 norm of the pairwise regression coefficients. In contrast, the changes of the ℓ2 norms of the coefficients in the frontoparietal region are much more gradual.

Figure 4: ℓ2 norms of coefficients between pairs of time series in the visual processing region

Figure 5: ℓ2 norms of coefficients between pairs of time series in the frontoparietal region

As an aside, we note that comparatively few interactions were found between the visual processing region and the frontoparietal region using our method (results not shown).


Discussion

Motivated by the types of questions arising in task-based neuroscience – particularly using imaging modalities with fine-scale temporal resolution – we proposed a novel method for simultaneous network inference and change point detection. Various extensions are possible. For example, a penalty in the spirit of the fused lasso would be of interest here, to encourage a certain notion of temporal contiguity. More subtly, one could envision allowing the lag p to vary, perhaps with a larger range of p available across longer temporal scales. In addition, a speed-up of the implementation (particularly for the non-dyadic case) would be desirable – and, indeed, necessary for larger networks than those studied here – adopting, for example, ideas like those underlying the PELT algorithm presented by Killick et al. (2012). Finally, it would be natural to explore the utility of our proposed method in the context of financial economics.


References

  • Amano, K., Takeda, T., Haji, T., Terao, M., Maruya, K., Matsumoto, K., Murakami, I. and Nishida, S. (2012). Human neural responses involved in spatial pooling of locally ambiguous motion signals. Journal of Neurophysiology 107, 3493–3508.

  • Bach, F.R. (2008). Consistency of the group lasso and multiple kernel learning. The Journal of Machine Learning Research 9, 1179–1225.

  • Barigozzi, M. and Brownlees, C.T. (2014). NETS: network estimation for time series. Available at SSRN 2249909.

  • Basu, S., Shojaie, A. and Michailidis, G. (2015). Network Granger causality with inherent grouping structure. Journal of Machine Learning Research 16, 417–453.

  • Betancourt, B., Rodríguez, A. and Boyd, N. (2017). Bayesian fused lasso regression for dynamic binary networks. Journal of Computational and Graphical Statistics.

  • Bettencourt, K.C. and Xu, Y. (2016). Decoding the content of visual short-term memory under distraction in occipital and parietal areas. Nature Neuroscience 19, 150–157.

  • Bolstad, A., Van Veen, B.D. and Nowak, R. (2011). Causal network inference via group sparse regularization. IEEE Transactions on Signal Processing 59, 2628–2641.

  • Braddick, O., O'Brien, J., Wattam-Bell, J., Atkinson, J. and Turner, R. (2000). Form and motion coherence activate independent, but not dorsal/ventral segregated, networks in the human brain. Current Biology 10, 731–734.

  • Bullmore, E. and Sporns, O. (2009). Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience 10, 186–198.

  • Calabro, F. and Vaina, L. (2012). Interaction of cortical networks mediating object motion detection by moving observers. Experimental Brain Research 221, 177–189.

  • Davis, R.A., Lee, T. and Rodriguez-Yam, G.A. (2008). Break detection for a class of nonlinear time series models. Journal of Time Series Analysis 29, 834–867.

  • Donoho, D.L. (1993). Unconditional bases are optimal bases for data compression and for statistical estimation. Applied and Computational Harmonic Analysis 1, 100–115.

  • Donoho, D.L. (1997). CART and best-ortho-basis: a connection. Annals of Statistics 25, 1870–1911.

  • Fouque, J.-P., Papanicolaou, G., Sircar, R. and Sølna, K. (2011). Multiscale Stochastic Volatility for Equity, Interest Rate, and Credit Derivatives. Cambridge University Press, Cambridge.

  • Granger, C.W. (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37, 3, 424–438.

  • Hamilton, J.D. (1983). Oil and the macroeconomy since World War II. The Journal of Political Economy 91, 2, 228–248.

  • Hiemstra, C. and Jones, J.D. (1994). Testing for linear and nonlinear Granger causality in the stock price–volume relation. Journal of Finance 49, 1639–1664.

  • Honey, C.J., Kötter, R., Breakspear, M. and Sporns, O. (2007). Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proceedings of the National Academy of Sciences of the United States of America 104, 10240–10245.

  • Killick, R., Fearnhead, P. and Eckley, I.A. (2012). Optimal detection of changepoints with a linear computational cost. Journal of the American Statistical Association 107, 1590–1598.

  • Kolaczyk, E.D. (2009). Statistical Analysis of Network Data: Methods and Models. Springer, 1st edn.

  • Kolaczyk, E.D. and Nowak, R.D. (2005). Multiscale generalised linear models for nonparametric function estimation. Biometrika 92, 119–133.

  • Li, J.Q. and Barron, A.R. (2000). Mixture density estimation, p. 279–285.

  • Long, C., Brown, E., Triantafyllou, C., Aharon, I., Wald, L. and Solo, V. (2005). Nonstationary noise estimation in functional MRI. NeuroImage 28, 890–903.

  • Louie, M.M. and Kolaczyk, E.D. (2006). A multiscale method for disease mapping in spatial epidemiology. Statistics in Medicine 25, 1287–1306.

  • Mallat, S.G. (1989). A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 11, 674–693.

  • Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. The Annals of Statistics 34, 3, 1436–1462.

  • Mukhopadhyay, N.D. and Chatterjee, S. (2007). Causality and pathway search in microarray time series experiment. Bioinformatics 23, 442–449.

  • Müller, A. (2001). Stochastic ordering of multivariate normal distributions. Annals of the Institute of Statistical Mathematics 53, 567–575.

  • Rana, K.D. and Vaina, L.M. (2014). Functional roles of 10 Hz alpha-band power modulating engagement and disengagement of cortical networks in a complex visual motion task. PLoS One 9, e107715.

  • Sims, C.A. (1972). Money, income, and causality. American Economic Review 62, 540–552.

  • Willett, R.M. and Nowak, R.D. (2007). Multiscale Poisson intensity and density estimation. IEEE Transactions on Information Theory 53, 3171–3187.

  • Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B 68, 49–67.

  • Zhao, P. and Yu, B. (2006). On model selection consistency of lasso. The Journal of Machine Learning Research 7, 2541–2563.


Acknowledgements

We would like to thank Lucia Vaina and Kunjan Rana for providing the MEG data and offering helpful discussion throughout. This work was supported in part by funding under AFOSR award 12RSL042 and NIH award 1R01NS095369-01.

Corresponding author

Correspondence to Eric D. Kolaczyk.



Appendix A

A.1 Algorithm using RDP

Here we provide the algorithm for implementation based on recursive dyadic partitions. Assume the length of the time series equals \(T = 2^{J}\), and let \(j_{\min }\) be the smallest j such that \(2^{j} > p + 1\); note that p + 1 is the minimum number of observations required to fit the restricted VAR(p) model. Assume \(J > j_{\min }\).

Algorithm 2

Algorithm 2 splits only at dyadic positions. The candidate partitions \(\mathcal {P} \preceq \mathcal {P}_{D_{y}}^{*}\) can be represented as subtrees of a binary tree of depth \(\log _{2} T\). Given a dataset of length \(T = 2^{J}\), we have \(2^{0}\) root node, \(2^{1}\) nodes at level 1, \(2^{2}\) nodes at level 2, and so on, until we reach the leaf level, which has \(2^{J-1}\) nodes. The complexity of the algorithm is then of order \(\mathcal {O}(T)\) calls to fit the group lasso regression and \(\mathcal {O}(T)\) comparisons.
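The node count above can be checked directly; a minimal sketch (the function name is ours, for illustration):

```python
def dyadic_interval_count(J):
    """Total number of nodes in the binary tree of candidate dyadic
    intervals for a series of length T = 2**J, summing the levels
    0 (the root) through J - 1 (the leaves)."""
    return sum(2 ** level for level in range(J))


# For T = 2**J the count is 2**0 + 2**1 + ... + 2**(J-1) = 2**J - 1
# = T - 1, i.e. O(T) candidate intervals, and hence O(T) group-lasso
# fits and O(T) comparisons in Algorithm 2.
```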

A.2 Proof of Theorem 3.1



The proof contains two parts. In the first part, we show that Eq. 3.1 holds, under H0. In the second part, we show that Eq. 3.2 holds, under H1.

Part 1

We begin by defining the group lasso penalized likelihood on an interval I:

$$ \begin{array}{@{}rcl@{}} PL_{I} = \frac{1}{|I|}\left\|\mathbf{X}_{I}(u) -\mathbf{X}_{I}(-u)\boldsymbol\theta_{I}(u,v) \right\|_{2}^{2} + \lambda_{I} \sum\limits_{v \in V\backslash \{u\}}\left\|\boldsymbol\theta_{I}(u,v)\right\|_{2}. \end{array} $$

Let \(\boldsymbol {\hat {\theta }}_{1:T}\) be the 𝜃 that minimizes the penalized likelihood Eq. 6.1 on the interval from 1 to T and \(\hat {PL}_{1:T}\) be the quantity upon substituting \(\boldsymbol {\hat {\theta }}_{1:T}\) in Eq. 6.1. Consider any alternative model with a change point detected at point \(\hat {\tau }\in (1, T)\). Denote by \(\boldsymbol {\hat {\theta }}_{1:\hat {\tau }}\) and \(\boldsymbol {\hat {\theta }}_{\hat {\tau }:T}\) the coefficients 𝜃 that minimize Eq. 6.1 over intervals \([1, \hat {\tau }]\) and \((\hat {\tau }, T]\), respectively. Given our model, Eq. 3.1 in Theorem 3.1 is equivalent to

$$ \begin{array}{@{}rcl@{}} \mathbb{P}_{H_{0}}(\hat{PL}_{1:T} \leq \hat{PL}_{1:\hat{\tau}} + \hat{PL}_{\hat{\tau}:T} + C_{3}\log T) \longrightarrow 1. \end{array} $$

The additional term \(C_{3}\log T\) comes from the fact that the alternative model has one more partition than the null model, with C3 = 1/2 using RDP and C3 = 3/2 using RP. Expanding \(\hat {PL}_{1:\hat {\tau }} + \hat {PL}_{\hat {\tau }:T} - \hat {PL}_{1:T} +C_{3}\log T\), we get:

$$ \begin{array}{@{}rcl@{}} && \frac{1}{\hat{\tau}}\left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) \right\|_{2}^{2} + \lambda_{1:\hat{\tau}}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v)\right\|_{2} \\ && + \frac{1}{T- \hat{\tau}}\left\|\mathbf{X}_{\hat{\tau}:T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) \right\|_{2}^{2} + \lambda_{\hat{\tau}:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v)\right\|_{2} \\ && - \frac{1}{T}\left\|\mathbf{X}_{1:T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:T}(v)\boldsymbol{\hat{\theta}}_{1:T}(u,v) \right\|_{2}^{2} - \lambda_{1:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:T}(u,v)\right\|_{2} \\ && + C_{3}\log T. \end{array} $$

By rewriting the last line of Eq. 6.2, we have

$$ \begin{array}{@{}rcl@{}} && \frac{1}{\hat{\tau}}\left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) \right\|_{2}^{2} + \lambda_{1:\hat{\tau}}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v)\right\|_{2} \\ && + \frac{1}{T - \hat{\tau}}\left\|\mathbf{X}_{\hat{\tau}:T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) \right\|_{2}^{2} + \lambda_{\hat{\tau}:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v)\right\|_{2} \\ && - \frac{1}{T}\left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:T}(u,v) \right\|_{2}^{2} \\ && - \frac{1}{T}\left\|\mathbf{X}_{\hat{\tau} :T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau} :T}(v)\boldsymbol{\hat{\theta}}_{1:T}(u,v) \right\|_{2}^{2} \\ && - \lambda_{1:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:T}(u,v)\right\|_{2} + C_{3}\log T. \end{array} $$

We then add and subtract a term in both line 3 and line 4 of Eq. 6.3. In doing so, we have:

$$ \begin{array}{@{}rcl@{}} && \frac{1}{\hat{\tau}}\left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) \right\|_{2}^{2} + \lambda_{1:\hat{\tau}}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v)\right\|_{2} \\ && + \frac{1}{T - \hat{\tau}}\left\|\mathbf{X}_{\hat{\tau}:T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) \right\|_{2}^{2} + \lambda_{\hat{\tau}:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v)\right\|_{2} \\ && - \frac{1}{T}\left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) + \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:T}(u,v) \right\|_{2}^{2} \\ && - \frac{1}{T}\left\|\mathbf{X}_{\hat{\tau} :T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) + \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau} :T}(v)\boldsymbol{\hat{\theta}}_{1:T}(u,v) \right\|_{2}^{2} \\ && - \lambda_{1:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:T}(u,v)\right\|_{2} + C_{3}\log T. \end{array} $$

From this it follows that:

$$ \begin{array}{@{}rcl@{}} && \text{Eq.~(6.4)} \\ &\geq & \frac{1}{\hat{\tau}}\left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) \right\|_{2}^{2} \\ && + \frac{1}{T- \hat{\tau}}\left\|\mathbf{X}_{\hat{\tau}:T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) \right\|_{2}^{2} \\ && - \frac{1}{T}\left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) \right\|_{2}^{2} \\ && - \frac{1}{T}\left\| \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v)- \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:T}(u,v) \right\|_{2}^{2} \\ && - \frac{2}{T}\left( \left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) \right\|_{2} \right. \\ && \times \left. \left\| \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v)- \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:T}(u,v) \right\|_{2} \right) \\ && - \frac{1}{T}\left\|\mathbf{X}_{\hat{\tau} :T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) \right \|_{2}^{2} \\ && - \frac{1}{T}\left\| \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau} :T}(v)\boldsymbol{\hat{\theta}}_{1:T}(u,v) \right\|_{2}^{2} \\ && - \frac{2}{T}\left( \left\|\mathbf{X}_{\hat{\tau} :T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) \right \|_{2} \right. \\ && \times \left. \left\|\sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau} :T}(v)\boldsymbol{\hat{\theta}}_{1:T}(u,v) \right\|_{2} \right) \\ && + \lambda_{\hat{\tau}:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v)\right\|_{2}+ \lambda_{1:\hat{\tau}}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v)\right\|_{2} - \lambda_{1:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:T}(u,v)\right\|_{2} \\ && + C_{3}\log T. \end{array} $$

Under assumptions (1) to (5), Bach (2008) reformulated the group lasso penalized likelihood (6.1) as:

$$ PL_{I} = \hat{\boldsymbol{\Sigma}}_{\mathbf{X}(u)\mathbf{X}(u)} - 2\hat{\boldsymbol{\Sigma}}_{\mathbf{X}(-u)\mathbf{X}(u)}^{\prime}\boldsymbol{\theta} + \boldsymbol{\theta}^{\prime}\boldsymbol{\hat{\Sigma}}_{\mathbf{X}(-u)\mathbf{X}(-u)}\boldsymbol\theta + \lambda_{I} \sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\theta}(u,v)\right\|{~}_{2} $$

where \(\hat {\boldsymbol {\Sigma }}_{\mathbf {X}(u)\mathbf {X}(u)} = \frac {1}{|I|}\mathbf {X}(u)^{\prime } {\Pi }_{|I|}\mathbf {X}(u)\), \(\hat {\boldsymbol {\Sigma }}_{\mathbf {X}(-u) \mathbf {X}(u)} = \frac {1}{|I|}\mathbf {X}(-u)^{\prime }{\Pi }_{|I|}\mathbf {X}(u)\) and \(\hat {\boldsymbol {\Sigma }}_{\mathbf {X}(-u)\mathbf {X}(-u)} = \frac {1}{|I|}\mathbf {X}(-u)^{\prime }{\Pi }_{|I|}\mathbf {X}(-u)\) are the empirical covariance matrices, with \({\Pi }_{|I|}\) defined as \({\Pi }_{|I|} = \mathbf {I}_{|I|}-\frac {1}{|I|}\mathbf {1}_{|I|}\mathbf {1}_{|I|}^{\prime }\), and showed that the group lasso estimator \(\boldsymbol {\hat {\theta }}\) converges in probability to 𝜃. Using the expressions in Eq. 6.6 and collecting similar terms, we can then rewrite (6.5) as:

$$ \begin{array}{@{}rcl@{}} &&{} \frac{T-\hat{\tau}}{T}\left\{\hat{\boldsymbol{\Sigma}}_{\mathbf{X}_{1:\hat{\tau}}(u) \mathbf{X}_{1:\hat{\tau}}(u)} - 2\hat{\boldsymbol{\Sigma}}_{\mathbf{X}_{1:\hat{\tau}}(-u) \mathbf{X}_{1:\hat{\tau}}(u)} \hat{\boldsymbol\theta}_{1:\hat{\tau}} + \hat{\boldsymbol\theta}_{{1:\hat{\tau}}}^{\prime} \hat{\boldsymbol{\Sigma}}_{\mathbf{X}_{1:\hat{\tau}}(-u)\mathbf{X}_{1:\hat{\tau}}(-u)} \hat{\boldsymbol\theta}_{1:\hat{\tau}} \right\} \\ &&{}+ \frac{\hat{\tau}}{T}\left\{\hat{\boldsymbol{\Sigma}}_{\mathbf{X}_{\hat{\tau}:T}(u) \mathbf{X}_{\hat{\tau}:T}(u)} - 2\hat{\boldsymbol{\Sigma}}_{\mathbf{X}_{\hat{\tau}:T}(-u) \mathbf{X}_{\hat{\tau}:T}(u)} \hat{\boldsymbol\theta}_{\hat{\tau}:T} + \hat{\boldsymbol\theta}_{{\hat{\tau}:T}}^{\prime} \hat{\boldsymbol{\Sigma}}_{\mathbf{X}_{\hat{\tau}:T}(-u)\mathbf{X}_{\hat{\tau}:T}(-u)} \hat{\boldsymbol\theta}_{\hat{\tau}:T} \right\} \end{array} $$
$$ \begin{array}{@{}rcl@{}} &&{}- \left\| \hat{\boldsymbol{\Sigma}}^{1/2}_{\mathbf{X}_{1:\hat{\tau}}(-u) \mathbf{X}_{1:\hat{\tau}}(-u)} \left( \hat{\boldsymbol\theta}_{1:\hat{\tau}} - \hat{\boldsymbol\theta}_{1:T} \right)\right\|{~}_{2}^{2} - \left\| \hat{\boldsymbol{\Sigma}}^{1/2}_{\mathbf{X}_{\hat{\tau}:T}(-u) \mathbf{X}_{\hat{\tau}:T}(-u)} \left( \hat{\boldsymbol\theta}_{\hat{\tau}:T} - \hat{\boldsymbol\theta}_{1:T} \right)\right\|{~}_{2}^{2} \end{array} $$
$$ \begin{array}{@{}rcl@{}} &&{}- \frac{2}{T}\left( \left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) \right\|{~}_{2}\right.\\&&\left. \left \| \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\left( \boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v)- \boldsymbol{\hat{\theta}}_{1:T}(u,v)\right) \right\|{~}_{2}\right) \end{array} $$
$$ \begin{array}{@{}rcl@{}} &&{}- \frac{2}{T}\left( \left\|\mathbf{X}_{\hat{\tau}:T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) \right\|{~}_{2}\right.\\&&\left. \left \| \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\left( \boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v)- \boldsymbol{\hat{\theta}}_{1:T}(u,v)\right) \right\|{~}_{2}\right) \end{array} $$
$$ \begin{array}{@{}rcl@{}} &&{}+ \lambda_{1:\hat{\tau}}\sum\limits_{v\in V\backslash\{u\}}\left\|\hat{\boldsymbol\theta}_{1:\hat{\tau}}(u,v)\right\|{~}_{2} +\lambda_{\hat{\tau}:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\hat{\boldsymbol\theta}_{\hat{\tau}:T}(u,v)\right\|{~}_{2}\\ &&{}- \lambda_{1:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\hat{\boldsymbol\theta}_{1:T}(u,v)\right\|{~}_{2} + C_{3}\log T. \end{array} $$

Note that in the previous expression, the first two lines are non-negative by definition. The last line consists of the group lasso penalty terms, all of which converge to zero asymptotically provided λ(⋅)→0 and λ(⋅)N→0.
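To make the ingredients of the reformulation in Eq. 6.6 concrete, the centered empirical covariance blocks can be computed directly from data on an interval I. The following Python sketch (dimensions and variable names are illustrative, not taken from the paper) forms \({\Pi}_{|I|}\) explicitly and checks that it agrees with mean-centering:

```python
import numpy as np

# Hedged sketch: the centered empirical covariance blocks
# Sigma_hat = (1/|I|) X' Pi X with Pi = I - (1/|I|) 1 1'.
rng = np.random.default_rng(0)
T_I = 200                               # |I|, length of the interval
x_u = rng.normal(size=(T_I, 1))         # X(u), the target series on I
x_rest = rng.normal(size=(T_I, 3))      # X(-u), the remaining series on I

Pi = np.eye(T_I) - np.ones((T_I, T_I)) / T_I    # centering matrix Pi_|I|

sigma_uu = (x_u.T @ Pi @ x_u) / T_I             # Sigma_hat_{X(u)X(u)}
sigma_ru = (x_rest.T @ Pi @ x_u) / T_I          # Sigma_hat_{X(-u)X(u)}
sigma_rr = (x_rest.T @ Pi @ x_rest) / T_I       # Sigma_hat_{X(-u)X(-u)}

# Multiplying by Pi is equivalent to subtracting column means first.
x_c = x_rest - x_rest.mean(axis=0)
assert np.allclose(sigma_rr, x_c.T @ x_c / T_I)
```

Since \({\Pi}_{|I|}\) is a symmetric idempotent projection, the quadratic forms above are exactly the covariances of the mean-centered series.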

Since \(\hat {\boldsymbol \theta }_{1:\hat {\tau }} \stackrel {P}{\longrightarrow } \boldsymbol \theta \), \(\hat {\boldsymbol \theta }_{\hat {\tau }:T} \stackrel {P}{\longrightarrow } \boldsymbol \theta \) and \(\hat {\boldsymbol \theta }_{1:T} \stackrel {P}{\longrightarrow } \boldsymbol \theta \), so that \(\hat {\boldsymbol \theta }_{1:\hat {\tau }} - \hat {\boldsymbol \theta }_{1:T} \stackrel {P}{\longrightarrow } 0\), and since the X's have finite moments up to order 4, each term in Eqs. 6.8, 6.9 and 6.10 converges to 0 in probability.

Putting everything together, we then complete the proof of the first part of the theorem:

$$ \begin{array}{@{}rcl@{}} \mathbb{P}_{H_{0}}(\hat{PL}_{1:T} \leq \hat{PL}_{1:\hat{\tau}} + \hat{PL}_{\hat{\tau}:T} + C_{3}\log T) \longrightarrow 1. \end{array} $$

Part 2

Suppose H1 is true. We denote the estimated change point by \(\hat {\tau }\). We show that \(\hat {PL}_{1:\hat {\tau }} + \hat {PL}_{\hat {\tau }:T}\) is minimized at \(\hat {\tau } = \tau \). Assume we have a competing estimator \(\tilde \tau \) with change point detected at time \(\tilde \tau = s\), with \(s \neq \tau\). We show that

$$ \begin{array}{@{}rcl@{}} \hat{PL}_{1:\hat{\tau}} + \hat{PL}_{\hat{\tau}:T} \leq \hat{PL}_{1:s} + \hat{PL}_{s:T} \end{array} $$

holds with high probability under H1. Without loss of generality, we assume that \(\tau - s = \delta\) for some δ > 0, as shown in Fig. 6. For the case s > τ, a similar argument holds.
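The comparison in (6.12) can be illustrated numerically: scanning candidate split points s and comparing segment-wise losses recovers the true change point. The sketch below uses plain least squares in place of the group lasso for simplicity; the data-generating process and all names are illustrative, not from the paper:

```python
import numpy as np

# Hedged sketch of the split-point comparison: fit the model separately
# on [1, s) and [s, T] for each candidate s and compare the summed
# (unpenalized, for simplicity) mean squared losses. The true change
# point tau should roughly minimize the sum.
rng = np.random.default_rng(1)
T, tau = 400, 250
x = rng.normal(size=(T, 2))                       # predictors X(-u)
theta1, theta2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
y = np.concatenate([x[:tau] @ theta1, x[tau:] @ theta2])
y += 0.1 * rng.normal(size=T)                     # X(u) with a change at tau

def interval_loss(a, b):
    """Mean squared residual of a least-squares fit on [a, b)."""
    beta, *_ = np.linalg.lstsq(x[a:b], y[a:b], rcond=None)
    r = y[a:b] - x[a:b] @ beta
    return r @ r / (b - a)

candidates = range(50, T - 50)
s_hat = min(candidates, key=lambda s: interval_loss(0, s) + interval_loss(s, T))
```

In the actual method the segment losses would be the penalized group lasso objectives, but the comparison of candidate splits proceeds in the same way.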

Figure 6: Relative position of two detected change points

Denote by \(\boldsymbol {\hat {\theta }}_{1:\hat {\tau }}\) and \(\boldsymbol {\hat {\theta }}_{\hat {\tau }:T}\) the estimated coefficients that minimize the penalized likelihoods, given that \(I = \{t: t \in [1,\hat {\tau })\}\) and \(I = \{t: t \in [\hat {\tau }, T]\}\), respectively. We also define \(\boldsymbol {\hat {\theta }}_{1:s}\) and \(\boldsymbol {\hat {\theta }}_{s:T}\) to be the estimated coefficients that minimize the penalized likelihoods in Eq. 6.1, given that I = {t : t ∈ [1,s)} and I = {t : t ∈ [s, T]}. The key idea is that \(\hat {\boldsymbol \theta }_{1:\hat {\tau }}\) and \(\hat {\boldsymbol \theta }_{\hat {\tau }:T}\) are consistent estimators of \(\boldsymbol\theta_{1:\tau}\) and \(\boldsymbol\theta_{\tau:T}\), but \(\hat {\boldsymbol \theta }_{s:T}\) is a consistent estimator of neither \(\boldsymbol\theta_{1:\tau}\) nor \(\boldsymbol\theta_{\tau:T}\), due to the mis-specification error. Therefore, at least one of the estimators \(\boldsymbol {\hat {\theta }}_{1:s}\) and \(\boldsymbol {\hat {\theta }}_{s:T}\) with s < τ is not a consistent estimator on its corresponding interval. Formally, we have that

$$ \begin{array}{@{}rcl@{}} &&{}\hat{PL}_{1:s} + \hat{PL}_{s:T}\\ &&{}= \frac{1}{s}\left\|\mathbf{X}_{1:s}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:s}(v)\boldsymbol{\hat{\theta}}_{1:s}(u,v) \right\|{~}_{2}^{2} + \lambda_{1:s}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:s}(u,v)\right\|{~}_{2}\\ &&{\kern2pt}+ \frac{1}{T-s}\left\|\mathbf{X}_{s:T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{s:T}(v)\boldsymbol{\hat{\theta}}_{s:T}(u,v) \right\|{~}_{2}^{2} + \lambda_{s:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{s:T}(u,v)\right\|{~}_{2}\\ &&{}= \frac{1}{s}\left\|\mathbf{X}_{1:s}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:s}(v)\boldsymbol{\hat{\theta}}_{1:s}(u,v) \right\|{~}_{2}^{2} + \lambda_{1:s}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:s}(u,v)\right\|{~}_{2}\\ &&{}+ \frac{1}{T - s}\left\|\mathbf{X}_{s:\tau}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{s:\tau}(v)\boldsymbol{\hat{\theta}}_{s:T}(u,v) \right\|{~}_{2}^{2} \\&&{}+ \frac{\delta \lambda_{s:T}}{T-s}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{s:T}(u,v)\right\|{~}_{2} \end{array} $$
$$ \begin{array}{@{}rcl@{}} &&{}+ \frac{1}{T-s}\left\|\mathbf{X}_{\tau:T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\tau:T}(v)\boldsymbol{\hat{\theta}}_{s:T}(u,v) \right\|{~}_{2}^{2} \\&&{}+ \frac{(T-s-\delta)\lambda_{s:T}}{T-s}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:s}(u,v)\right\|{~}_{2} \end{array} $$


$$ \begin{array}{@{}rcl@{}} &&{}\hat{PL}_{1:\hat{\tau}} + \hat{PL}_{\hat{\tau}:T} \\ &&{}= \frac{1}{\tau}\left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) \right\|{~}_{2}^{2} + \lambda_{1:\hat{\tau}}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v)\right\|{~}_{2} \\ &&{}+ \frac{1}{T - \tau}\left\|\mathbf{X}_{\hat{\tau}:T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) \right\|{~}_{2}^{2} + \lambda_{\hat{\tau}:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v)\right\|{~}_{2}. \end{array} $$

We write expression (6.13) as \(\hat {PL}_{1:s} + \hat {PL}_{s:\hat {\tau }}\) and expression (6.14) as \(\tilde {PL}_{s:T}\). We show (6.12) holds by first showing that \(\hat {PL}_{1:s} + \hat {PL}_{s:\hat {\tau }} \geq \hat {PL}_{1:\hat {\tau }}\), and then showing \(\tilde {PL}_{s:T} \geq \hat {PL}_{\hat {\tau }:T}\). We first compute \(\hat {PL}_{1:s} + \hat {PL}_{s:\hat {\tau }} - \hat {PL}_{1:\hat {\tau }}\):

$$ \begin{array}{@{}rcl@{}} &&{}= \frac{1}{s}\left\|\mathbf{X}_{1:s}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:s}(v)\boldsymbol{\hat{\theta}}_{1:s}(u,v) \right\|{~}_{2}^{2} + \lambda_{1:s}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:s}(u,v)\right\|{~}_{2} \\ &&{}+ \frac{1}{T - s}\left\|\mathbf{X}_{s:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{s:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{s:T}(u,v) \right\|{~}_{2}^{2} + \frac{\delta \lambda_{s:T}}{T - s}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{s:T}(u,v)\right\|{~}_{2} \\ &&{}-\frac{1}{\tau}\left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) \right\|{~}_{2}^{2} - \lambda_{1:\hat{\tau}}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v)\right\|{~}_{2}. \end{array} $$

Assume there is another group lasso estimator defined on the interval between s and \(\hat {\tau }\), given by

$$ \begin{array}{@{}rcl@{}} \boldsymbol{\hat{\theta}}_{s:\hat{\tau}} \!\!\!&=&\!\!\! \operatornamewithlimits{arg min}_{\boldsymbol{\theta}} \frac{1}{\hat{\tau}-s}\left\|\mathbf{X}_{s:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{s:\hat{\tau}}(v)\boldsymbol{\theta}_{s:\hat{\tau}}(u,v) \right\|{~}_{2}^{2} \\&&\!\!\!+ \lambda_{s:\hat{\tau}}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol\theta_{s:\hat{\tau}}(u,v)\right\|{~}_{2}. \end{array} $$

The estimator \(\boldsymbol {\hat {\theta }}_{s:\hat {\tau }} \) is again a consistent estimator of \(\boldsymbol {\theta }_{1:\hat {\tau }}\) and we have that:

$$ \begin{array}{@{}rcl@{}} &&{}\frac{1}{\hat{\tau} - s}\left\|\mathbf{X}_{s:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{s:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{s:\hat{\tau}}(u,v) \right\|{~}_{2}^{2} + \lambda_{s:\hat{\tau}}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{s:\hat{\tau}}(u,v)\right\|{~}_{2} \end{array} $$
$$ \begin{array}{@{}rcl@{}} &&{}\leq \frac{1}{T - s}\left\|\mathbf{X}_{s:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{s:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{s:T}(u,v) \right\|{~}_{2}^{2} \\&&{}+ \frac{\delta \lambda_{s:T}}{T - s}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{s:T}(u,v)\right\|{~}_{2}. \end{array} $$

These are directly implied by Theorem 2 in Bach (2008), given that \(\boldsymbol {\hat {\theta }}_{s:T}\) is not consistent in the \(\ell_{2}\) sense for estimating \(\boldsymbol \theta _{1:\hat {\tau }}\) whenever \(s \neq \hat {\tau }\). Given (6.16), we have that

$$ \begin{array}{@{}rcl@{}} &&{}\hat{PL}_{1:s} + \hat{PL}_{s:\hat{\tau}} - \hat{PL}_{1:\hat{\tau}} \\ &&{}\geq \frac{1}{s}\left\|\mathbf{X}_{1:s}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:s}(v)\boldsymbol{\hat{\theta}}_{1:s}(u,v) \right\|{~}_{2}^{2} + \lambda_{1:s}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:s}(u,v)\right\|{~}_{2} \\ &&{\kern2pt}+ \frac{1}{\hat{\tau}-s}\left\|\mathbf{X}_{s:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{s:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{s:\hat{\tau}}(u,v) \right\|{~}_{2}^{2} + \lambda_{s:\hat{\tau}}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{s:\hat{\tau}}(u,v)\right\|{~}_{2} \\ &&{\kern2pt}-\frac{1}{\hat{\tau}}\left\|\mathbf{X}_{1:\hat{\tau}}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{1:\hat{\tau}}(v)\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v) \right\|{~}_{2}^{2} - \lambda_{1:\hat{\tau}}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:\hat{\tau}}(u,v)\right\|{~}_{2}. \\ \end{array} $$

The same argument as in Part 1 holds here, and we have

$$ \begin{array}{@{}rcl@{}} \mathbb{P}_{H_{1}}\left( \hat{PL}_{1:s} + \hat{PL}_{s:\hat{\tau}} \geq \hat{PL}_{1:\hat{\tau}}\right) \longrightarrow 1. \end{array} $$

Note that \(\boldsymbol {\hat {\theta }}_{s:T}\) is not a consistent estimator of \(\boldsymbol {\theta }_{\hat {\tau }:T}\) given the change point. Therefore, similar to Eq. 6.16, we have

$$ \begin{array}{@{}rcl@{}} & &{}\frac{1}{T - \hat{\tau}}\left\|\mathbf{X}_{\hat{\tau}:T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v) \right\|{~}_{2}^{2} + \lambda_{\hat{\tau}:T}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{\hat{\tau}:T}(u,v)\right\|{~}_{2} \\ &&{}\leq \frac{1}{T - s}\left\|\mathbf{X}_{\hat{\tau}:T}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}_{\hat{\tau}:T}(v)\boldsymbol{\hat{\theta}}_{s:T}(u,v) \right\|{~}_{2}^{2} \\&&~~~~~~~~~~+ \frac{(T-s-\delta)\lambda_{s:T}}{T-s}\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\hat{\theta}}_{1:s}(u,v)\right\|{~}_{2} \\ \end{array} $$

and so

$$ \begin{array}{@{}rcl@{}} \mathbb{P}_{H_{1}}\left( \tilde{PL}_{s:T} \geq \hat{PL}_{\hat{\tau}:T}\right) \longrightarrow 1. \end{array} $$

Putting the two parts together, we have

$$ \begin{array}{@{}rcl@{}} \mathbb{P}_{H_{1}}\left( \hat{PL}_{1:s} + \hat{PL}_{s:T}\geq \hat{PL}_{1:\hat{\tau}} + \hat{PL}_{\hat{\tau}:T} \right) \longrightarrow 1 \end{array} $$

for any \(s < \hat {\tau }\). □

A.3 Proof of Theorem 9

Under the assumption of stationarity, we can omit the time index in this section; that is, \(\boldsymbol\theta = \boldsymbol\theta_{t}\) for all t. To show Theorem 3.3, we begin with the following lemma.

Lemma 8.1.

Given \(\boldsymbol {\theta } \in \mathbb {R}^{(N-1)p}\), let G(𝜃(u, v)) be a p-dimensional vector with elements

$$ \begin{array}{@{}rcl@{}} G(\boldsymbol{\theta}(u,v)) &= -2T^{-1}\left( \mathbf{X}(v)^{\prime}\left(\mathbf{X}(u) - {\sum}_{w\in V\backslash\{u\}}\mathbf{X}(w)\boldsymbol{\theta}(u,w)\right)\right). \end{array} $$

A vector \(\boldsymbol {\hat {\theta }}\) is a solution to the group lasso type of estimator if and only if for all vV ∖{u}, \( G(\boldsymbol {\hat {\theta }}(u,v)) + \lambda \mathbf {D}(\boldsymbol {\hat {\theta }}(u,v)) = \mathbf {0}\), where \(\|\mathbf {D}(\boldsymbol {\hat {\theta }}(u,v))\|{~}_{2} = 1\) in the case of \(\|\boldsymbol {\hat {\theta }}(u,v)\|{~}_{2} > 0\) and \(\|\mathbf {D}(\boldsymbol {\hat {\theta }}(u,v))\|{~}_{2} < 1\) in the case of \(\|\boldsymbol {\hat {\theta }}(u,v)\|{~}_{2} = 0\).


Proof of Lemma 8.1.

By the KKT conditions and standard subdifferential calculus, the subdifferential of

$$ \frac{1}{T}\left\|\mathbf{X}(u) - \sum\limits_{v\in V\backslash\{u\}}\mathbf{X}(v)\boldsymbol{\theta}(u,v)\right\|{~}_{2}^{2} + \lambda\sum\limits_{v\in V\backslash\{u\}}\left\|\boldsymbol{\theta}(u,v)\right\|{~}_{2} $$

is given by \(G(\boldsymbol {\theta }(u,v)) + \lambda \mathbf {D}(\boldsymbol {\theta }(u,v))\), where \(\|\mathbf {D}(\boldsymbol {\theta }(u,v))\|{~}_{2} = 1\) if \(\|\boldsymbol{\theta}(u, v)\|{~}_{2} > 0\) and \(\|\mathbf {D}(\boldsymbol {\theta }(u,v))\|{~}_{2} < 1\) if \(\|\boldsymbol{\theta}(u, v)\|{~}_{2} = 0\). The lemma follows. □
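Lemma 8.1 is what drives the zero/nonzero split in practice: a group \((u,v)\) can be estimated as zero only if the corresponding gradient block has norm at most λ. A minimal numerical sketch of this check (data and names are illustrative, not from the paper):

```python
import numpy as np

# Hedged sketch of the optimality check in Lemma 8.1: the gradient
# block G(theta(u,v)) = -(2/T) X(v)' (X(u) - sum_w X(w) theta(u,w))
# must have l2 norm at most lambda for the group to be set to zero
# (subgradient condition ||D|| <= 1).
rng = np.random.default_rng(2)
T, p = 300, 2
x_v = rng.normal(size=(T, p))       # lagged design block for node v
resid = rng.normal(size=T)          # residual X(u) - sum_w X(w) theta_hat(u,w)

G = -(2.0 / T) * x_v.T @ resid      # gradient block for group (u, v)

lam = 1.0
group_can_be_zero = np.linalg.norm(G) <= lam
```

In the neighborhood selection step this is exactly how an edge (u, v) is excluded from the estimated graph.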

We now prove Theorem 3.3.

Proof 3.

If \(\hat {C}_{u} \nsubseteq C_{u}\), then there must exist at least one estimated edge joining two nodes in different connected components. Given the assumptions, we use similar arguments as in the proof of Theorem 3 in Meinshausen and Bühlmann (2006). Hence we have

$$ \mathbb{P}(\exists u \in V: \hat{C}_{u} \nsubseteq C_{u}) \leq N \max_{u \in V} \mathbb{P}(\exists v \in V \backslash C_{u}: v \in \hat{\text{ne}}_{u}), $$

where \(\hat {\text {ne}}_{u}\) is the estimated neighborhood of node u and \(v \in \hat {\text {ne}}_{u}\) means \(\| \boldsymbol {\hat {\theta }}(u,v)\|{~}_{2} > 0\).

Let \({\mathscr{E}}\) be the event that

$$ \max\limits_{v\in V \backslash C_{u}} \left\|G \left( \boldsymbol{\hat{\theta}}(u,v)\right) \right\|{~}_{2}^{2} < \lambda^{2}. $$

Conditional on the event \({\mathscr{E}}\), the vector obtained by setting \(\boldsymbol {\hat {\theta }}(u,v) = 0\) for all vVCu satisfies the condition of Lemma 8.1 and is therefore a solution to the group lasso problem. It follows that \(\|\boldsymbol {\hat {\theta }}(u,v)\|{~}_{2} = 0\) for all vVCu. Hence

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(\exists v\in V \backslash C_{u}: \|\boldsymbol{\hat{\theta}}(u,v)\|{~}_{2} > 0) &\leq & 1 - \mathbb{P}(\mathscr{E})\\ &=& \mathbb{P}\left( \max\limits_{v \in V \backslash C_{u}} \left\|G\left( \boldsymbol{\hat{\theta}}(u,v)\right) \right\|{~}_{2}^{2} \geq \lambda^{2} \right). \end{array} $$

It is then sufficient to show that

$$ N^{2} \max_{u \in V\text{, } v\in V \backslash C_{u}} \mathbb{P}\left( \left\|G(\boldsymbol{\hat{\theta}}(u,v))\right\|{~}_{2}^{2} \geq \lambda^{2}\right) \leq \alpha. $$

Note that v and Cu now lie in different connected components, which means that X(v) is conditionally independent of X(Cu). Hence, conditional on X(Cu), we have

$$ \begin{array}{@{}rcl@{}} \left\|G(\boldsymbol{\hat{\theta}}(u,v))\right\|{~}_{2}^{2} &=& \left\|-2T^{-1}\left( \mathbf{X}(v)^{\prime}(\mathbf{X}(u) - \sum\limits_{i \in C_{u}}\mathbf{X}(i)\boldsymbol{\hat{\theta}}(u,i))\right)\right\|{~}_{2}^{2}\\ &=& 4T^{-2}\left\|(\mathbf{\hat R}_{1}, \cdots, \mathbf{\hat R}_{p} )^{\prime} \right\|{~}_{2}^{2} \end{array} $$

where \(\mathbf {\hat R}_{\ell } =X_{-\ell }(v)^{\prime } \left (\mathbf {X}(u) - {\sum }_{i \in C_{u}}\mathbf {X}(i)\boldsymbol {\hat {\theta }}(u,i)\right )\), for \(\ell = 1,\cdots ,p\), is the remainder term at lag \(\ell\); conditional on \(\mathbf{X}(C_{u})\), the residual \(\mathbf {X}(u) - {\sum }_{i \in C_{u}}\mathbf {X}(i)\boldsymbol {\hat {\theta }}(u,i)\) is independent of \(\mathbf{X}(v)\) at all lags. It follows that the joint distribution

$$ (\mathbf{\hat R}_{1}, \cdots, \mathbf{\hat R}_{p}| \mathbf{X}(C_{u})) \sim N(\mathbf{0}, \mathbf{\Omega}) $$

for some covariance matrix Ω. Note that this is a conditional distribution given X(Cu). Hence, in the expression for Ω, every term appearing with a subscript u is constant and every term appearing with a subscript v is a normalized random variable, which simplifies the covariance term. Note that

$$ {\boldsymbol{\Omega}}_{p \times p} = \textbf{Cov}\left( \mathbf{\hat R}_{1}, \cdots, \mathbf{\hat R}_{p} \right) $$


$$ \begin{array}{@{}rcl@{}} &&{}\textbf{tr}\left( \boldsymbol{\Omega}\right) = \sum\limits_{\ell=1}^{p} \textbf{Var} (\boldsymbol{\hat R}_{\ell}) = \sum\limits_{\ell=1}^{p} \textbf{Var}\!\left( \!{\sum}_{t=1}^{T}\left( \!\!X_{t}(u) - \sum\limits_{i\in C_{u}}X_{t-\ell}(i)\hat{\theta}^{(\ell)}(u,i)\!\right)X_{t-\ell}(v)\!\!\right)\! \\ &&{\kern7pt}=\sum\limits_{\ell=1}^{p}\sum\limits_{s=1}^{T}\sum\limits_{t=1}^{T}\textbf{Cov}\left[\left( \left( X_{t}(u)-\sum\limits_{i\in C_{u}}X_{t-\ell}(i)\hat{\theta}^{(\ell)}(u,i)\right)X_{t-\ell}(v)\right),\right.\\&&{} \left.\left( \left( X_{s}(u)-\sum\limits_{i\in C_{u}}X_{s-\ell}(i)\hat{\theta}^{(\ell)}(u,i)\right)X_{s-\ell}(v)\right)\right]. \end{array} $$

Conditional on X(Cu), Eq. 6.18 can be further simplified as:

$$ \begin{array}{@{}rcl@{}} \textbf{tr}\left( \boldsymbol{\Omega}\right) \!\!\!&=&\!\!\! \sum\limits_{\ell=1}^{p}\sum\limits_{t=1}^{T}\sum\limits_{s=1}^{T}\left( X_{t}(u)-\sum\limits_{i\in C_{u}}X_{t-\ell}(i)\hat{\theta}^{(\ell)}(u,i)\right)\\&& \left( X_{s}(u)-\sum\limits_{i\in C_{u}}X_{s-\ell}(i)\hat{\theta}^{(\ell)}(u,i)\right) \textbf{Cov} \left[ X_{t-\ell}(v) ,X_{s-\ell}(v) \right] \\ \!\!\!&\leq&\!\!\! \sum\limits_{\ell=1}^{p}\sum\limits_{t=1}^{T}\sum\limits_{s=1}^{T}\left( X_{t}(u)-\sum\limits_{i\in C_{u}}X_{t-\ell}(i)\hat{\theta}^{(\ell)}(u,i)\right)\\&& \left( X_{s}(u)-\sum\limits_{i\in C_{u}}X_{s-\ell}(i)\hat{\theta}^{(\ell)}(u,i)\right)\sqrt{\textbf{Var}(X_{t-\ell}(v)) \textbf{Var}(X_{s-\ell}(v))}. \end{array} $$

We have the above bounded by

$$ \begin{array}{@{}rcl@{}} &\leq& p \sum\limits_{s=1}^{T}\sum\limits_{t=1}^{T} \left( X_{t}(u)-\sum\limits_{i\in C_{u}}X_{t-\ell}(i)\hat{\theta}^{(\ell)}(u,i)\right)\\&& \left( X_{s}(u)-\sum\limits_{i\in C_{u}}X_{s-\ell}(i)\hat{\theta}^{(\ell)}(u,i)\right) \\ &=& p \left[{\sum}_{t=1}^{T} \left( X_{t}(u)-\sum\limits_{i\in C_{u}}X_{t-\ell}(i)\hat{\theta}^{(\ell)}(u,i)\right)\right]^{2} \\ & \leq& Tp \sum\limits_{t=1}^{T}\left[X_{t}(u)-\sum\limits_{i\in C_{u}}X_{t-\ell}(i)\hat{\theta}^{(\ell)}(u,i) \right]^{2}\\ &\leq& Tp \|\mathbf{X}(u)\|^{2}_{2}.\\ \end{array} $$

The middle inequality follows from the Cauchy-Schwarz inequality, and the final bound uses the fact that the residual sum of squares at the group lasso optimum is at most \(\|\mathbf{X}(u)\|{~}_{2}^{2}\). Denote by νmax the largest eigenvalue of the covariance matrix Ω. Since νmax bounds the spectrum of Ω, the matrix \(\nu_{max}\mathbf{I} - \boldsymbol{\Omega}\) is also PSD. Following Müller (2001)’s argument, we can show \((\hat {\mathbf {R}}_{1}, \cdots , \hat {\mathbf {R}}_{p}) \leq _{cx} \mathbf {Y}\) for some random vector \(\mathbf {Y} \sim N(\mathbf {0}, \nu _{max}\mathbf {I}_{p})\), where ≤cx denotes the convex order, which here means \(X \leq_{cx} Y\) if and only if μx = μy and \({\sigma _{x}^{2}} \leq {\sigma _{y}^{2}}\). It follows that

$$ \begin{array}{@{}rcl@{}} \max_{u \in V, v\in V \backslash C_{u} } \mathbb{P}\left( \left\|G(\hat{\boldsymbol\theta}(u,v))\right\|{~}_{2}^{2} \geq \lambda^{2}\right) \!\!\!&\leq&\!\!\! \max_{u \in V, v\in V \backslash C_{u}} \mathbb{P}(4T^{-2}(\mathbf{Y}^{\prime}\mathbf{Y}) \geq \lambda^{2})\\ \!\!\!&=&\!\!\! \max_{u \in V, v\in V \backslash C_{u}} \mathbb{P}\left( \frac{1}{\nu_{max}}\mathbf{Y}^{\prime}\mathbf{Y} \geq \frac{\lambda^{2}T^{2}}{4\nu_{max}}\right) . \end{array} $$

Note that \(\frac {1}{\nu _{max}}\mathbf {Y}^{\prime }\mathbf {Y}\) is a sum of p squared independent standard Gaussian variables and thus follows a χ2(p) distribution, and \(\nu _{max} \leq \textbf {tr}(\boldsymbol {\Omega }) \leq Tp\|\mathbf {X}(u)\|{~}_{2}^{2}\). Putting everything together, we have

$$ \begin{array}{@{}rcl@{}} \max_{u \in V, v\in V \backslash C_{u}} \mathbb{P}\left( \|G(\hat{\boldsymbol\theta}(u,v))\|{~}_{2}^{2} \geq \lambda^{2}\right) \!\!\!&\leq&\!\!\! \max_{u \in V, v\in V \backslash C_{u}} \mathbb{P}\left( \chi^{2}(p) \geq \frac{\lambda^{2}T^{2}}{4\nu_{max}} \right)\\ \!\!\!&\leq&\!\!\! \max_{u \in V, v\in V \backslash C_{u}} \mathbb{P}\left( \chi^{2}(p) \geq \frac{\lambda^{2}T^{2}}{4Tp\|\mathbf{X}(u)\|{~}_{2}^{2}} \right) \\&\leq&\!\!\! \frac{\alpha}{N(N-1)} \end{array} $$

and thus we obtain the desired λ(α):

$$ \lambda(\alpha) = 2\hat{\sigma}_{u}\sqrt{pQ\left( 1-\frac{\alpha}{N(N-1)}\right)}. $$
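Reading Q as the χ2(p) quantile function, consistent with the tail bound above, the threshold can be computed directly. A minimal Python sketch, using the Wilson-Hilferty approximation to the χ2 quantile so that only the standard library is needed (the approximation and variable names are ours, not the paper's):

```python
import math
from statistics import NormalDist

def chi2_quantile(prob, df):
    """Wilson-Hilferty approximation to the chi^2(df) quantile function Q."""
    z = NormalDist().inv_cdf(prob)
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

def lambda_alpha(alpha, N, p, sigma_hat_u):
    """lambda(alpha) = 2 * sigma_hat_u * sqrt(p * Q(1 - alpha / (N(N-1))))."""
    q = chi2_quantile(1 - alpha / (N * (N - 1)), df=p)
    return 2 * sigma_hat_u * math.sqrt(p * q)

lam = lambda_alpha(alpha=0.05, N=10, p=3, sigma_hat_u=1.0)
```

A smaller α (or a larger network) yields a larger threshold, and hence a sparser estimated neighborhood.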

A.4 Proof of Theorem 3.3

The proof of the theorem is in line with the work in Kolaczyk and Nowak (2005). The core idea is to bound the expected Hellinger loss in terms of the Kullback-Leibler distance. This approach, building on the original work of Li and Barron (2000), leverages a union bound after discretizing the underlying parameter space. We assume a similar discretization here, while omitting the straightforward but tedious numerical analysis arguments that accompany it. See, for example, Kolaczyk and Nowak (2005) for details. Our fundamental bound is given by the following theorem.

Theorem 8.2.

Let \({\Gamma }_{T}^{(N-1)p}\) be a finite collection of candidate estimators \(\boldsymbol {\tilde \theta }\) for 𝜃, and pen(⋅) a function on \({{\Gamma }_{T}^{p}}\) satisfying the condition

$$ \begin{array}{@{}rcl@{}} \sum\limits_{\boldsymbol{\tilde\theta}(u, v) \in {{\Gamma}_{T}^{p}}}e^{-pen(\boldsymbol{\tilde\theta}(u, v))} \leq 1. \end{array} $$

Let \(\hat {\boldsymbol \theta }\) be a penalized maximum likelihood estimator of the form

$$ \begin{array}{@{}rcl@{}} \hat{\boldsymbol\theta} \equiv \operatornamewithlimits{arg min}_{\boldsymbol{\tilde\theta} \in {\Gamma}_{T}^{(N-1)p}}\left\{ -\log p(\mathbf{X}(u)|\mathbf{X}(-u),\boldsymbol{\tilde\theta}) + 2\sum\limits_{v \in V\backslash \{u\}} \text{Pen}(\boldsymbol{\tilde\theta}(u,v))\right\}. \end{array} $$
Then


$$ \begin{array}{@{}rcl@{}} \mathbb{E}[H^{2}(p_{\hat{\boldsymbol\theta}},p_{\boldsymbol\theta})] \leq \min_{\boldsymbol{\tilde\theta} \in {\Gamma}_{T}^{(N-1)p}}\left\{K(p_{\boldsymbol\theta},p_{\boldsymbol{\tilde\theta}}) + 2\sum\limits_{v \in V\backslash \{u\}}\text{Pen}(\boldsymbol{\tilde\theta}(u,v))\right\}. \end{array} $$

Note that the result of Theorem 8.2 requires that inequality (6.20) holds; Lemma 8.3 below shows that our proposed penalty satisfies this inequality. We now prove Theorem 8.2.


Note that we have

$$ \begin{array}{@{}rcl@{}} H^{2}(p_{\hat{\boldsymbol\theta}},p_{\boldsymbol\theta}) &=& \int \left[\sqrt{p(\mathbf{x}|\mathbf{X}(-u), \boldsymbol{\hat{\theta}})} - \sqrt{p(\mathbf{x}|\mathbf{X}(-u), \boldsymbol\theta)}\right]^{2} d\nu(\mathbf{x}) \\ &=& 2\left( 1 - \int \sqrt{p(\mathbf{x}|\mathbf{X}(-u), \hat{\boldsymbol\theta})p(\mathbf{x}|\mathbf{X}(-u), \boldsymbol\theta)}d\nu(\mathbf{x}) \right)\\ &\leq &-2 \log \int \sqrt{p(\mathbf{x}|\mathbf{X}(-u), \hat{\boldsymbol\theta})p(\mathbf{x}|\mathbf{X}(-u), \boldsymbol\theta)} d\nu(\mathbf{x}). \end{array} $$
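The last step above is the elementary bound \(1 - x \leq -\log x\) applied to the affinity \(\int \sqrt{p\,q}\,d\nu \in (0,1]\); a quick numerical check (illustrative values only):

```python
import math

# Check that 2(1 - a) <= -2 log(a) for affinity values a in (0, 1],
# which is the step bounding the squared Hellinger distance above.
for a in (0.05, 0.5, 0.9, 1.0):
    assert 2 * (1 - a) <= -2 * math.log(a) + 1e-12
```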

Taking the conditional expectation with respect to X(u)|X(−u), we then have

$$ \begin{array}{@{}rcl@{}} \mathbb{E}[H^{2}(p_{\hat{\boldsymbol\theta}},p_{\boldsymbol\theta})] \!\!\!&\leq&\!\!\! 2\mathbb{E}\log \left( \frac{1}{\int \sqrt{p(\mathbf{x}|\mathbf{X}(-u),\hat{\boldsymbol\theta})p(\mathbf{x}|\mathbf{X}(-u),\boldsymbol\theta)}d\nu(\mathbf{x})} \right) \\ \!\!\!& \leq&\!\!\! 2\mathbb{E}\log \left( \frac{p^{1/2}(\mathbf{X}(u)|\mathbf{X}(-u),\hat{\boldsymbol\theta})e^{- \sum\limits_{v} pen(\hat{\boldsymbol\theta}(u,v))}}{p^{1/2}(\mathbf{X}(u)|\mathbf{X}(-u),\check{\boldsymbol\theta})e^{- \sum\limits_{v} pen(\check{\boldsymbol\theta}(u,v))}}\right.\\&&\left. \frac{1}{\int \sqrt{p(\mathbf{x}|\mathbf{X}(-u),\hat{\boldsymbol\theta})p(\mathbf{x}|\mathbf{X}(-u),\boldsymbol\theta)}d\nu(\mathbf{x})} \right), \end{array} $$

where the collection of \(\check {\boldsymbol \theta }(u,v)\)’s are the arguments that minimize the right-hand side of expression (6.21). The last expression can be written in two pieces, that is

$$ \begin{array}{@{}rcl@{}} &&{} \mathbb{E}\left[ \log \frac{p(\mathbf{X}(u)|\mathbf{X}(-u),\boldsymbol\theta)}{p(\mathbf{X}(u)|\mathbf{X}(-u),\check{\boldsymbol\theta})}\right] + 2 \sum\limits_{v} pen(\check{\boldsymbol\theta}(u,v)) \end{array} $$
$$ \begin{array}{@{}rcl@{}} &&{}+ 2\mathbb{E} \log \left( \frac{p^{1/2}(\mathbf{X}(u)|\mathbf{X}(-u),\hat{\boldsymbol\theta})}{p^{1/2}(\mathbf{X}(u)|\mathbf{X}(-u),\boldsymbol\theta)}\frac{\prod\limits_{v} \prod\limits_{\ell} e^{-pen(\hat{\boldsymbol\theta}^{(\ell)}(u,v))}}{\int \sqrt{p(\mathbf{x}|\mathbf{X}(-u),\hat{\boldsymbol\theta})p(\mathbf{x}|\mathbf{X}(-u),\boldsymbol\theta)}d\nu(\mathbf{x})} \right).\\ \end{array} $$

Note that expression (6.22) is the right-hand side of Eq. 6.21. It remains to show that expression (6.23) is bounded above by zero. By applying Jensen’s inequality, we have Eq. 6.23 bounded by:

$$ \begin{array}{@{}rcl@{}} 2\log \mathbb{E}\left[\prod\limits_{v}e^{-pen(\hat{\boldsymbol\theta}(u,v))}\frac{\sqrt{p(\mathbf{X}(u)|\mathbf{X}(-u),\hat{\boldsymbol\theta})/p(\mathbf{X}(u)|\mathbf{X}(-u),\boldsymbol\theta)}}{\int \sqrt{p(\mathbf{x}|\mathbf{X}(-u),\hat{\boldsymbol\theta})p(\mathbf{x}|\mathbf{X}(-u),\boldsymbol\theta)}d\nu(\mathbf{x})} \right]. \end{array} $$

The integrand in the expectation in Eq. 6.24 can be bounded by

$$ \begin{array}{@{}rcl@{}} \sum\limits_{\boldsymbol{\tilde\theta} \in {\Gamma}_{T}^{(N-1)p}} \prod\limits_{v}e^{-pen(\tilde{\boldsymbol\theta}(u,v))}\frac{\sqrt{p(\mathbf{X}(u)|\mathbf{X}(-u),\tilde{\boldsymbol\theta})/p(\mathbf{X}(u)|\mathbf{X}(-u),\boldsymbol\theta)}}{\int \sqrt{p(\mathbf{x}|\mathbf{X}(-u),\tilde{\boldsymbol\theta})p(\mathbf{x}|\mathbf{X}(-u),\boldsymbol\theta)}d\nu(\mathbf{x})}. \end{array} $$

Given that \(\tilde {\boldsymbol \theta }\) does not depend on X(−u), Eq. 6.24 can be bounded by

$$ \begin{array}{@{}rcl@{}} &&{} 2\log \sum\limits_{\boldsymbol{\tilde\theta} \in {\Gamma}_{T}^{(N-1)p}} \prod\limits_{v}e^{-pen(\tilde{\boldsymbol\theta}(u,v))}\frac{\mathbb{E}\left[\sqrt{p(\mathbf{X}(u)|\mathbf{X}(-u),\tilde{\boldsymbol\theta})/p(\mathbf{X}(u)|\mathbf{X}(-u),\boldsymbol\theta)}\right]}{\int \sqrt{p(\mathbf{x}|\mathbf{X}(-u),\tilde{\boldsymbol\theta})p(\mathbf{x}|\mathbf{X}(-u),\boldsymbol\theta)}d\nu(\mathbf{x})} \\ &&{}= 2\log \sum\limits_{\boldsymbol{\tilde\theta} \in {\Gamma}_{T}^{(N-1)p}} \prod\limits_{v}e^{-pen(\tilde{\boldsymbol\theta}(u,v))}. \end{array} $$

Since \(e^{-pen(\tilde {\boldsymbol \theta }(u,v))} > 0\) for any \(\boldsymbol {\tilde \theta }(u,v)\), and using the inequality \({\sum }_{i} a_{i} b_{i} \leq {\sum }_{i} a_{i} {\sum }_{i}b_{i}\) for any ai > 0,bi > 0, we can bound (6.25) by:

$$ \begin{array}{@{}rcl@{}} 2\log \prod\limits_{v} \sum\limits_{\boldsymbol{\tilde\theta}(u,v) \in {{\Gamma}_{T}^{p}}} e^{- pen(\tilde{\boldsymbol\theta}(u,v))}. \end{array} $$

From the condition in Eq. 6.20, we see that the above expression is bounded above by zero. We now show that our proposed estimator satisfies condition (6.20) via the following lemma.

Lemma 8.3.

Let ΓT be the collection of all \(\boldsymbol {\tilde \theta }^{(\ell )}(u, v)\) with components \(\boldsymbol {\tilde \theta }_{t}^{(\ell )}(u, v) \in D_{T}[-C, C]\) possessing a Haar-like expansion through a common partition, using either RDP (see expression (2.2)) or RP (see expression (2.4)), where DT[−C, C] denotes a discretization of the interval [−C, C] into T1/2 equispaced values. For any penalty such that

$$ \begin{array}{@{}rcl@{}} Pen(\boldsymbol{\tilde\theta}(u, v)) = C_{3}\log T \#\{\mathcal{P}(\boldsymbol{\tilde\theta})\} + \lambda\sum\limits_{\mathcal{I} \in \mathcal{P}(\boldsymbol{\tilde\theta})} \|\boldsymbol{\tilde\theta}_{\mathcal{I}}(u, v)\|{~}_{2}, \end{array} $$

where C3 = 1/2 for recursive dyadic partitioning and C3 = 3/2 for recursive partitioning, we have

$$ \begin{array}{@{}rcl@{}} \sum\limits_{\boldsymbol{\tilde\theta}(u, v) \in {{\Gamma}_{T}^{p}}} e^{-pen(\boldsymbol{\boldsymbol{\tilde\theta}}(u, v))} \leq 1, \end{array} $$

for T > ⌈e2p/3⌉.


We prove Lemma 8.3 for the case of recursive partitioning. We write \({\Gamma }_{T} = \bigcup _{d_{\ell }=1}^{T} {\Gamma }_{T}^{(d_{\ell })}\), where \({\Gamma }_{T}^{(d_{\ell })}\) is the subset of values \(\boldsymbol {\tilde \theta }_{t}^{(\ell )}(u, v)\) composed of \(d_{\ell}\) constant-valued segments; that is, \({\Gamma }_{T}^{(d_{\ell })}\) consists of all length-T sequences with exactly \(d_{\ell}\) maximal constant runs. For example, (0,0,4,0,0) and (2,0,1,1,1) are two such sequences in \({\Gamma }_{5}^{(3)}\). Then we have

$$ \begin{array}{@{}rcl@{}} \sum\limits_{\boldsymbol{\tilde\theta}(u, v) \in {{\Gamma}_{T}^{p}}} e^{-pen(\boldsymbol{\tilde\theta}(u, v))} &=& \sum\limits_{\boldsymbol{\tilde\theta}(u, v) \in {{\Gamma}_{T}^{p}}} e^{-(3/2)\log T\,\#\{\mathcal{P}(\boldsymbol{\tilde\theta})\} - \lambda\sum\limits_{\mathcal{I}\in \mathcal{P}(\boldsymbol{\tilde\theta})}\|\boldsymbol{\tilde\theta}_{\mathcal{I}}(u, v) \|_{2}} \\ & \leq& \sum\limits_{\boldsymbol{\tilde\theta}(u, v) \in {{\Gamma}_{T}^{p}}} e^{-(3/2)\log T\,\#\{\mathcal{P}(\boldsymbol{\tilde\theta})\}}\\ & \leq& \prod\limits_{\ell=1}^{p} \sum\limits_{\boldsymbol{\tilde\theta}^{(\ell)}(u, v) \in {\Gamma}_{T}} e^{-(3/(2p))\log T\,\#\{\mathcal{P}(\boldsymbol{\tilde\theta}^{(\ell)})\}}\\ & =& \prod\limits_{\ell=1}^{p} \sum\limits_{d_{\ell}=1}^{T}\binom{T-1}{d_{\ell}-1} e^{-d_{\ell}(3/(2p))\log T}\\ & =& \prod\limits_{\ell=1}^{p} \sum\limits_{d_{\ell}^{\prime}=0}^{T-1}\binom{T-1}{d_{\ell}^{\prime}}e^{-(d_{\ell}^{\prime}+1)(3/(2p))\log T} \\ & =& \prod\limits_{\ell=1}^{p} \sum\limits_{d_{\ell}^{\prime}=0}^{T-1} \frac{(T-1)!}{d_{\ell}^{\prime} ! \,(T-d_{\ell}^{\prime}-1)!}\, T^{-(d_{\ell}^{\prime}+1)(3/(2p))}\\ & \leq& \prod\limits_{\ell=1}^{p} T^{-3/(2p)} \sum\limits_{d_{\ell}^{\prime}=0}^{T-1} \frac{(T-1)^{d_{\ell}^{\prime}}}{d_{\ell}^{\prime}!}\,\frac{1}{T^{(3/(2p))d_{\ell}^{\prime}}}\\ & \leq& T^{-3/2}\, e^{p}, \end{array} $$

which is bounded by 1 for any \(T > \lceil e^{2p/3} \rceil\). The argument follows analogously for the case of recursive dyadic partitioning. □
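As a numerical sanity check (ours, not part of the proof), the following snippet verifies two ingredients of the argument for small T: the run-counting identity behind the binomial coefficient \(\binom{T-1}{d_{\ell}-1}\), and the final bound \(T^{-3/2}e^{p} \leq 1\) in the case p = 1.

```python
# Sanity check (illustrative, not part of the paper's proof): verifies, for
# small T, (i) the run-counting identity behind the binomial coefficient in
# the display above, and (ii) the bound sum <= T^(-3/2) e^p <= 1 for p = 1.
from itertools import product, groupby
from math import comb, exp

# (i) The number of length-T binary sequences with exactly d maximal constant
# runs is 2*C(T-1, d-1): choose d-1 breakpoints among T-1 gaps, times 2
# choices for the value of the first run.
T = 6
for d in range(1, T + 1):
    n_runs_d = sum(
        1 for s in product([0, 1], repeat=T)
        if len(list(groupby(s))) == d
    )
    assert n_runs_d == 2 * comb(T - 1, d - 1)

# (ii) For p = 1 the lemma's sum over partitions reduces to
# S(T) = sum_d C(T-1, d-1) T^(-(3/2) d), which should satisfy
# S(T) <= T^(-3/2) * e <= 1 for all T >= 2 > ceil(e^(2/3)).
for T in range(2, 50):
    S = sum(comb(T - 1, d - 1) * T ** (-1.5 * d) for d in range(1, T + 1))
    assert S <= T ** (-1.5) * exp(1) <= 1.0
```

Both checks pass, consistent with the lemma's requirement that T exceed \(\lceil e^{2p/3}\rceil\).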

Using the loss function and the corresponding risk function we defined before, recovering the neighborhood of node u is essentially a univariate Gaussian time series problem, and thus the KL divergence of the conditional likelihood function takes the form:

$$ \begin{array}{@{}rcl@{}} K(p_{\boldsymbol\theta},p_{\boldsymbol{\tilde\theta}}) = \mathbb{E}\left\{\log \frac{p_{\boldsymbol\theta}(\mathbf{x})}{p_{\boldsymbol{\tilde\theta}}(\mathbf{x})}\right\} = \mathbb{E}\left\{\sum\limits_{t=1}^{T} \log \frac{p_{\boldsymbol\theta}(X_{t}(u))}{p_{\boldsymbol{\tilde\theta}}(X_{t}(u))}\right\} = \sum\limits_{t=1}^{T} (\tilde\mu_{t}-\mu_{t})^{2} / (2\sigma^{2}) \end{array} $$

where each μt is the mean of Xt(u), and \(\tilde \mu _{t}\) is an approximation/estimate thereof, for a given estimator \(\boldsymbol {\tilde \theta }\). Since these means in turn are based on linear combinations of all neighborhood observations, over p lags, we have:

$$ \begin{array}{@{}rcl@{}} \tilde\mu_{t} - \mu_{t} = \sum\limits_{v\in V \backslash \{u\}}\sum\limits_{\ell=1}^{p} X_{t-\ell}(v)[\tilde\theta_{t}^{(\ell)}(u, v) - \theta_{t}^{(\ell)}(u, v)]. \end{array} $$

So the KL divergence for each neighborhood problem involves values at other nodes.
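The Gaussian KL identity above is elementary but easy to verify numerically. The following sketch (ours; the function names are illustrative, not from the paper) checks the one-dimensional case by numerical integration:

```python
# Numerical check (illustrative): for two Gaussians with common variance
# sigma^2 and means mu, mu~, KL(p || q) = (mu~ - mu)^2 / (2 sigma^2).
# We verify the identity by a midpoint Riemann sum for E_p[log(p/q)].
from math import exp, log, pi, sqrt

def gauss_pdf(x, mu, sigma):
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def kl_numeric(mu, mu_tilde, sigma, lo=-20.0, hi=20.0, n=200_000):
    # Midpoint Riemann-sum approximation of E_p[log(p/q)] under p = N(mu, sigma^2).
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        p = gauss_pdf(x, mu, sigma)
        q = gauss_pdf(x, mu_tilde, sigma)
        total += p * log(p / q) * h
    return total

mu, mu_tilde, sigma = 0.3, 1.1, 1.5
closed_form = (mu_tilde - mu) ** 2 / (2 * sigma ** 2)
assert abs(kl_numeric(mu, mu_tilde, sigma) - closed_form) < 1e-6
```

Summing this identity over t = 1,…,T gives exactly the expression for \(K(p_{\boldsymbol\theta},p_{\boldsymbol{\tilde\theta}})\) above.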

Assume without loss of generality that σ ≡ 1. From Eq. 6.21 and the fact that the KL divergence in the Gaussian case is simply proportional to a squared 2-norm, the risk of estimating 𝜃 by \(\boldsymbol {\hat {\theta }}\) can be bounded as:

$$ \begin{array}{@{}rcl@{}} \mathbb{R}(\hat{\boldsymbol\theta}, \boldsymbol\theta) &\leq& \min_{\boldsymbol{\tilde\theta} \in {\Gamma}_{T}^{(N-1)p}}\left\{\frac{1}{T}K(p_{\boldsymbol\theta},p_{\boldsymbol{\tilde\theta}}) + \frac{2}{T}\sum\limits_{v=1}^{N-1} Pen(\boldsymbol{\tilde\theta}(u,v))\right\} \\ &\leq& \min_{\boldsymbol{\tilde\theta} \in {\Gamma}_{T}^{(N-1)p}} \left\{\frac{1}{2T}\left\|\boldsymbol{\tilde\mu}-\boldsymbol\mu\right\|_{2}^{2} + \frac{\lambda}{T}\sum\limits_{\mathcal{I} \in \mathcal{P}(\boldsymbol{\tilde\theta})}\sum\limits_{v=1}^{N-1}\|\boldsymbol{\tilde\theta}_{\mathcal{I}}(u,v)\|_{2} \right.\\&&\left.+ \frac{2}{T}\sum\limits_{v=1}^{N-1}(3/2)\log T \,\#\{\mathcal{P}(\boldsymbol{\tilde\theta})\}\right\}. \end{array} $$

From Cauchy-Schwarz, we have that

$$ \begin{array}{@{}rcl@{}} \mathbb{R}(\hat{\boldsymbol\mu}, \boldsymbol\mu) &\leq& \min_{\boldsymbol{\tilde\theta} \in {\Gamma}_{T}^{(N-1)p}}\left\{\frac{1}{2T}\|\mathbf{X}(-u)^{\prime}\mathbf{X}(-u)\|_{2} \sum\limits_{t=1}^{T}\sum\limits_{v=1}^{N-1}\sum\limits_{\ell=1}^{p}\left(\tilde\theta_{t}^{(\ell)}(u,v) - \theta_{t}^{(\ell)}(u,v)\right)^{2} \right.\\ && +\left. \frac{\lambda}{T}\sum\limits_{\mathcal{I} \in \mathcal{P}(\boldsymbol{\tilde\theta})}\sum\limits_{v=1}^{N-1}\|\boldsymbol{\tilde\theta}_{\mathcal{I}}(u,v)\|_{2} + 3(N-1)\frac{\log T}{T}\#\{\mathcal{P}(\boldsymbol{\tilde\theta})\}\right\} \\ &\leq& \min_{\boldsymbol{\tilde\theta} \in {\Gamma}_{T}^{(N-1)p}}\left\{\frac{1}{2}{\Lambda} \sum\limits_{v=1}^{N-1}\sum\limits_{\ell=1}^{p}\left\|\boldsymbol{\tilde\theta}^{(\ell)}(u,v) - \boldsymbol{\theta}^{(\ell)}(u,v)\right\|_{2}^{2} \right.\\ &&\left. + \frac{\lambda}{T}\sum\limits_{\mathcal{I} \in \mathcal{P}(\boldsymbol{\tilde\theta})}\sum\limits_{v=1}^{N-1}\|\boldsymbol{\tilde\theta}_{\mathcal{I}}(u,v)\|_{2} + 3(N-1)\frac{\log T}{T}\#\{\mathcal{P}(\boldsymbol{\tilde\theta})\}\right\}. \end{array} $$
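The second inequality above rests on the spectral-norm bound \(\boldsymbol\beta^{\prime}(\mathbf{X}^{\prime}\mathbf{X})\boldsymbol\beta \leq \lambda_{\max}(\mathbf{X}^{\prime}\mathbf{X})\|\boldsymbol\beta\|_{2}^{2}\), with Λ playing the role of an assumed bound on the scaled largest eigenvalue of \(\mathbf{X}(-u)^{\prime}\mathbf{X}(-u)\). A small self-contained check of this step (ours, using a two-column design, for which the top eigenvalue of the Gram matrix has a closed form):

```python
# Check (illustrative) of the spectral-norm step: for any matrix X and
# coefficient vector beta, beta' X'X beta <= lambda_max(X'X) * ||beta||^2.
# We use a 2-column X so lambda_max of the symmetric 2x2 Gram matrix is
# available in closed form.
from math import sqrt
import random

random.seed(0)
T = 50
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(T)]

# Gram matrix G = X'X = [[a, b], [b, c]] (symmetric 2x2).
a = sum(row[0] ** 2 for row in X)
b = sum(row[0] * row[1] for row in X)
c = sum(row[1] ** 2 for row in X)
lam_max = ((a + c) + sqrt((a - c) ** 2 + 4 * b ** 2)) / 2

for _ in range(100):
    beta = [random.gauss(0, 1), random.gauss(0, 1)]
    quad = a * beta[0] ** 2 + 2 * b * beta[0] * beta[1] + c * beta[1] ** 2
    norm_sq = beta[0] ** 2 + beta[1] ** 2
    assert quad <= lam_max * norm_sq + 1e-6
```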

The minimization in expression (6.26) seeks the optimal balance between bias and variance. To bound it, the following L2 approximation result from Donoho (1993) plays a central role.

Lemma 8.4.

Let \(\theta _{(\cdot )}^{(\ell )}(u,v) \in BV(C)\). Define \({\theta _{bd}}_{(\cdot )}^{(\ell )}(u,v)\) to be the best d-term approximant to \(\theta _{(\cdot )}^{(\ell )}(u,v)\) in the dyadic Haar basis for L2([0,1]). Then \(\|{\theta _{bd}}^{(\ell )}(u,v) - \theta ^{(\ell )}(u,v)\|_{L_{2}} = \mathcal {O}(d^{-1})\).

Define \({\boldsymbol {\theta }_{bd}}^{(\ell )}(u,v)\) to be the average sampling of \({\theta _{bd}}^{(\ell )}(u,v)\); that is, its i-th component is \(T{\int \limits }_{I_{i}}{\theta _{bd}}^{(\ell )}(u,v)(t)\, dt\) for the interval Ii. Then let \({\boldsymbol {\tilde \theta }_{bd}}^{(\ell )}(u,v)\) be the result of discretizing the elements of \({\boldsymbol {\theta }_{bd}}^{(\ell )}(u,v)\) to the set DT[−C, C], where C is the radius of the bounded variation ball defined in Assumption 6. By the triangle inequality we have:

$$ \begin{array}{@{}rcl@{}} &&\left\|\boldsymbol{\tilde\theta}^{(\ell)}(u,v) - \boldsymbol{\theta}^{(\ell)}(u,v)\right\|_{\ell_{2}}^{2} \\&\leq& \left\|{\boldsymbol{\theta}_{bd}}^{(\ell)}(u,v) - \boldsymbol{\theta}^{(\ell)}(u,v)\right\|_{\ell_{2}}^{2} + \left\|\boldsymbol{\tilde\theta}^{(\ell)}(u,v) - {\boldsymbol{\theta}_{bd}}^{(\ell)}(u,v)\right\|_{\ell_{2}}^{2} \\ &&+ 2\left\|{\boldsymbol{\theta}_{bd}}^{(\ell)}(u,v) - \boldsymbol{\theta}^{(\ell)}(u,v)\right\|_{\ell_{2}}\left\|\boldsymbol{\tilde\theta}^{(\ell)}(u,v) - {\boldsymbol{\theta}_{bd}}^{(\ell)}(u,v)\right\|_{\ell_{2}}. \end{array} $$

For the sequences \({\boldsymbol {\theta }_{bd}}^{(\ell )}(u,v)\) and \({\boldsymbol {\tilde \theta }_{bd}}^{(\ell )}(u,v)\) obtained from average sampling, a simple argument relating Haar functions on the discrete index set to Haar functions on the interval [0,1] shows that

$$ \begin{array}{@{}rcl@{}} \frac{1}{T}\left\|{\boldsymbol{\tilde\theta}_{bd}}^{(\ell)}(u,v) - \boldsymbol{\theta}^{(\ell)}(u,v)\right\|_{\ell_{2}}^{2} \leq \left\|\theta_{bd}^{(\ell)}(u,v) - \theta^{(\ell)}(u,v) \right\|_{L_{2}}^{2}. \end{array} $$

See equation (27) of Kolaczyk and Nowak (2005). On the right-hand side of Eq. 6.27, the first squared term is of order \(\mathcal {O}(Td^{-2})\) by Lemma 8.4. The second term is a discretization error and is of order \(\mathcal {O}(1)\). The third, cross term is therefore of order \(\mathcal {O}(T^{1/2}d^{-1})\).
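The average-sampling inequality above is an instance of Jensen's inequality: replacing a function difference by its average on each interval Ii can only decrease the (normalized) squared norm. A brief numerical illustration (ours, with arbitrary smooth test functions standing in for the coefficient functions):

```python
# Illustration (not from the paper): average sampling contracts the squared
# L2 norm, by Jensen's inequality. We approximate two bounded functions on
# [0,1] on a fine grid, form their averages over T equal intervals I_i, and
# check (1/T) * sum_i (avg_i(f) - avg_i(g))^2 <= ||f - g||_{L2}^2.
from math import sin, cos

T = 16          # number of intervals I_i
m = 256         # fine-grid points per interval
f = lambda t: sin(6.0 * t)          # arbitrary test functions
g = lambda t: 0.5 * cos(3.0 * t)

fine = [(k + 0.5) / (T * m) for k in range(T * m)]
diff = [f(t) - g(t) for t in fine]

# Squared L2 norm of f - g on [0,1], via a midpoint Riemann sum.
l2_sq = sum(d * d for d in diff) / (T * m)

# Average of f - g over each interval I_i, then the discrete quadratic form.
avg_sq = 0.0
for i in range(T):
    block = diff[i * m : (i + 1) * m]
    a = sum(block) / m
    avg_sq += a * a
avg_sq /= T

assert avg_sq <= l2_sq + 1e-12
```

The inequality is exact at the level of the discrete sums (per-interval Jensen), which is why only floating-point slack is allowed.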

Given these results, we obtain the following bound on Eq. 6.26 by bounding the bias term over each \({\Gamma }_{T}^{(d)}\), where d collects the di, i = 1,⋯ ,(N − 1)p. We then optimize over d:

$$ \begin{array}{@{}rcl@{}} &&{}\min\limits_{\boldsymbol{\tilde\theta} \in {{\Gamma}_{T}^{(N-1)p}}^{(d)}} \left\{\frac{1}{2}{\Lambda} \sum\limits_{v=1}^{N-1}\sum\limits_{\ell=1}^{p}\left\|\boldsymbol{\tilde\theta}^{(\ell)}(u,v) - \boldsymbol\theta^{(\ell)}(u,v)\right\|_{2}^{2} \right.\\ &&{}\quad\quad \left. + \frac{\lambda}{T}\sum\limits_{\mathcal{I} \in \mathcal{P}(\boldsymbol{\tilde\theta})}\sum\limits_{v=1}^{N-1}\|\boldsymbol{\tilde\theta}_{\mathcal{I}}(u,v)\|_{2} + 3(N-1)\frac{\log T}{T}\#\{\mathcal{P}(\boldsymbol{\tilde\theta})\}\right\}. \end{array} $$

The first term is dominated by the first part of expression (6.27) and is of order \(\mathcal {O}({\Lambda } Td^{-2})\). The second term, \(\frac {\lambda }{T}{\sum }_{\mathcal {I} \in \mathcal {P}(\boldsymbol {\tilde \theta })}{\sum }_{v=1}^{N-1}\|\boldsymbol {\tilde \theta }_{\mathcal {I}}(u,v)\|_{2}\), is the group lasso penalty. Since \(\theta _{(\cdot )}^{(\ell )}(u,v)\) is of bounded variation BV(C), we have that \(T^{-1/2}\|\boldsymbol {\tilde \theta }_{\mathcal {I}}(u,v)\|_{2}\) is of order \(\mathcal {O}(C + d^{-1})\). Note that λ is of order T− 1/2 and the number of intervals \(\#\{\mathcal {P}(\boldsymbol {\tilde \theta })\}\) is proportional to d. So the second term is of order \(\mathcal {O}(T^{-1} d (C + d^{-1}))\). The third term is of order \(\mathcal {O}(dT^{-1}\log T)\). Combining these results, we have:

$$ \begin{array}{@{}rcl@{}} && \min\limits_{\boldsymbol{\tilde\theta} \in {{\Gamma}_{T}^{(N-1)p}}^{(d)}} \left\{\frac{1}{2}{\Lambda} \sum\limits_{v=1}^{N-1}\sum\limits_{\ell=1}^{p}\left\|\boldsymbol{\tilde\theta}^{(\ell)}(u,v) - \boldsymbol\theta^{(\ell)}(u,v)\right\|_{2}^{2} \right.\\ &&\quad\quad \left. + \frac{\lambda}{T}\sum\limits_{\mathcal{I} \in \mathcal{P}(\boldsymbol{\tilde\theta})}\sum\limits_{v=1}^{N-1}\|\boldsymbol{\tilde\theta}_{\mathcal{I}}(u,v)\|_{2} + 3(N-1)\frac{\log T}{T}\#\{\mathcal{P}(\boldsymbol{\tilde\theta})\}\right\}\\ &&\leq \mathcal{O}({\Lambda} Td^{-2}) + \mathcal{O}(T^{-1} d (C + d^{-1})) + \mathcal{O}(dT^{-1}\log T), \end{array} $$

which is minimized for \(d \sim ({\Lambda } T^{2}/\log T)^{1/3}\). Substitution then yields that the risk is bounded by a quantity of order \(\mathcal {O}(({\Lambda }\log ^{2}T/T)^{1/3})\). For estimation via recursive dyadic partitioning, where \(\#\{\mathcal {P}(\tilde \theta )\}\) is proportional to \(d\log T\), the expression is minimized at \(d \sim ({\Lambda } T^{2} /\log ^{2} T)^{1/3}\), which bounds the risk by a quantity of order \(\mathcal {O}(({\Lambda } \log ^{4}T/T)^{1/3})\).
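The closing rate calculation can be reproduced directly: setting the derivative of \(f(d) = {\Lambda } T d^{-2} + d\log T/T\) to zero gives \(d^{3} = 2{\Lambda } T^{2}/\log T\), hence \(d \sim ({\Lambda } T^{2}/\log T)^{1/3}\) and \(f(d)\) of order \(({\Lambda }\log^{2}T/T)^{1/3}\). A quick numerical confirmation (ours; the constants are arbitrary):

```python
# Numerical check (illustrative) of the rate calculation: minimizing
# f(d) = Lambda*T/d^2 + d*log(T)/T over d gives d* of order
# (Lambda*T^2/log T)^(1/3), and f(d*) of order (Lambda*log^2 T/T)^(1/3).
from math import log

Lam, T = 1.0, 10 ** 6
f = lambda d: Lam * T / d ** 2 + d * log(T) / T

# Calculus: f'(d) = 0  =>  d^3 = 2*Lam*T^2/log T.
d_star = (2.0 * Lam * T ** 2 / log(T)) ** (1.0 / 3.0)

# A grid search around d_star confirms the stationary point is the minimizer.
grid = [d_star * (0.5 + 0.01 * k) for k in range(101)]  # d in [0.5 d*, 1.5 d*]
d_min = min(grid, key=f)
assert abs(d_min - d_star) / d_star < 0.02

# And f(d_star) matches the claimed rate up to a constant factor.
rate = (Lam * log(T) ** 2 / T) ** (1.0 / 3.0)
assert 0.1 < f(d_star) / rate < 10.0
```

The constant is \(2^{-2/3} + 2^{1/3} \approx 1.89\), consistent with the stated order.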


Cite this article

Kang, X., Ganguly, A. & Kolaczyk, E.D. Dynamic Networks with Multi-scale Temporal Structure. Sankhya A 84, 218–260 (2022).



Keywords

  • Dynamic network
  • Multiscale modeling
  • Vector autoregressive model

AMS (2000) subject classification

  • Primary: 62M10
  • Secondary: 05C82, 62P10