An Encoding Approach for Stable Change Point Detection

Without imposing prior distributional knowledge on the multivariate time series of interest, we propose a nonparametric change-point detection approach to estimate the number of change points and their locations along the temporal axis. We develop a structural subsampling procedure such that the observations are encoded into multiple sequences of Bernoulli variables. A maximum likelihood approach, in conjunction with a newly developed searching algorithm, is implemented to detect change points on each Bernoulli process separately. Aggregation statistics are then proposed to synthesize the change-point results from all individual univariate sequences into consistent and stable location estimates. We also study a weighting strategy to measure the degree of relevance of different subsampled groups. Simulation studies show that the proposed change-point methodology for multivariate time series has favorable performance compared with currently popular nonparametric methods under various settings with different degrees of complexity. Real data analyses are performed on categorical, ordinal, and continuous time series taken from genetics, climate, and finance.


Introduction
In any nonstationary time series of any dimension, abrupt distributional changes are ubiquitous patterns of great interest in many sciences involving evolving systems. Change points, the temporal locations of such occurrences, together with their multiplicity, are key parts of the deterministic structure of the time series under study. Nowadays, change-point analysis is well recognized in the statistics literature and beyond as an essential scientific methodology that aims to detect change points in time-ordered observations and then partition the whole time series into distributionally homogeneous segments. Change-point analysis can be traced back to the 1950s (Page (1954); Chernoff and Zacks (1964); Kander and Zacks (1966)). It has since played a crucial role in diverse fields including bioinformatics (Picard et al. (2005); Muggeo and Adelfio (2011)), behavioural science (Rosenfield et al. (2010); Hoover et al. (2012)), neuroimaging (Bosc et al. (2003)), climate science (Robbins et al. (2011)), finance (Talih and Hengartner (2005)), and speech recognition (Malladi et al. (2013)).
In general, such an analysis can be conducted under either parametric or nonparametric settings. Parametric approaches rely heavily on the assumption that the underlying distributions belong to a known family; likelihood or penalized likelihood functions are generally involved (Yao (1988); Chen and Gupta (1997); Bai and Perron (2003)). In contrast, nonparametric approaches make very few assumptions about the stochasticity underlying the time series. The likelihood principle is not directly applicable, but such approaches fit a wider variety of applications. This advantageous feature has gained popularity and attracted a vast amount of research attention in the past decade.
In fact, the likelihood principle is still applicable under a rather mild independence assumption that at least approximately endorses some distributional character upon observed or computed recurrent events occurring along the time series. For instance, Kawahara and Sugiyama (2011) and Liu et al. (2013) attempted to estimate the likelihood ratio using KL divergence; Chen and Zhang (2015) proposed a graph-based approach and applied it to multivariate non-Euclidean data. Zou et al. (2014) developed an empirical likelihood approach to discover an unknown number of change points via BIC. Matteson and James (2014) presented a U-statistic to quantify the difference between the characteristic functions of two segments. Lung-Yut-Fong et al. (2015) generalized the Mann-Whitney rank-based statistic to multivariate settings. Arlot et al. (2019) improved the kernel-based method of Harchaoui and Cappe (2007) with a generalized model-selection penalty.
However, most existing nonparametric research has focused on the single change-point problem, with the extension to multiple change-point detection achieved via dynamic programming (Harchaoui and Cappe (2007); Lung-Yut-Fong et al. (2015); Arlot et al. (2019)) or a bisection procedure (Vostrikova (1981); Olshen and Venkatraman (2004); Matteson and James (2014)). Methods that efficiently discover multiple change points under multivariate settings remain scarce in the literature, especially when the covariance structure changes over time.
In this paper, a new nonparametric approach is proposed to detect multiple distributional changes. Our developments are anchored on independent time-ordered observations. The basic idea is to systematically select a subset of the data points at each iteration, with which we encode the continuous observations into a sequence of Bernoulli variables. The number of change points and their locations are estimated by aggregating all the dynamic information discovered from the collection of Bernoulli processes. Instead of working on the unknown distribution directly, the proposed approach divides the problem into several easier tasks, so that the maximum likelihood approach can be applied to each Bernoulli process separately. We demonstrate that this divide-and-conquer framework is robust to the underlying distribution and can be implemented in conjunction with other parametric approaches.
Another important extension of the aggregation technique is stability change-point detection. Stability selection, introduced by Meinshausen and Buhlmann (2010), was designed to improve the performance of variable selection and provide control of false discoveries. We demonstrate that the idea of aggregating results obtained by applying a procedure to subsamples of the data can be implemented under our framework: one aggregates the estimates from the Bernoulli sequences and selects the estimated change-point locations whose votes exceed a predetermined threshold. To the best of our knowledge, this may be the first method in the change-point literature that enjoys both asymptotic consistency and finite-sample control of false discoveries.
The paper is organized as follows. We start with an efficient algorithm for searching multiple change points within a change-in-parameter Bernoulli sequence in Section 2. In Section 3, we propose the main divide-and-conquer framework to analyze multivariate observations. In Section 4, the stability detection technique is applied under our change-point framework. In Section 5, a strategy is provided to weight the results from different sample sets for practical usage. Numerical experiments comparing with other well-known nonparametric approaches are shown in Section 6. Real data applications, including categorical and continuous data in univariate and multivariate settings, are reported in Section 7. We note that the proposed approach is easily generalized to multivariate categorical or ordinal time series, though we mainly focus on continuous data under the multivariate setting in this paper.

Background
Consider a sequence of independent 0-1 Bernoulli variables {E_t}_{t=1}^N. Suppose that k change points are embedded within the sequence at locations 0 = τ*_0 < τ*_1 < ... < τ*_k < τ*_{k+1} = N, so the observations are partitioned into k + 1 segments. Observations within a segment are identically distributed, but observations in adjacent segments are not. Specifically, E_t ~ Bernoulli(p_i) for τ*_i < t ≤ τ*_{i+1}, for i = 0, ..., k, with p_i ≠ p_{i+1}. Given the number of change points k, one task of change-point detection is to estimate the k locations. In the most general case, both the number of change points and their locations need to be estimated.
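To make this setup concrete, the following is a minimal simulation of such a change-in-parameter Bernoulli sequence; the helper name `simulate_bernoulli_cp` and the segment probabilities and lengths are illustrative choices, not values from the paper.

```python
import numpy as np

def simulate_bernoulli_cp(seg_params, seg_lengths, seed=0):
    """Simulate a Bernoulli sequence whose success probability changes
    from segment to segment; seg_params[i] governs the i-th segment."""
    rng = np.random.default_rng(seed)
    parts = [rng.binomial(1, p, n) for p, n in zip(seg_params, seg_lengths)]
    return np.concatenate(parts)

# Example: k = 2 change points at t = 100 and t = 250 (N = 350).
E = simulate_bernoulli_cp([0.05, 0.4, 0.1], [100, 150, 100])
```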
Change-point analysis in a Bernoulli-variable sequence was well studied when k = 1. Hinkley and Hinkley (1970) provided asymptotic distributions of likelihood ratio statistics for testing the existence of a change point. Pettitt (1980) introduced the CUSUM statistic and showed its asymptotic equivalence to the maximum likelihood estimator. Miller and Siegmund (1982) investigated maximally selected chi-square statistics for two-sample comparison in the form of a 2×2 table. Later, Halpern (1999) advocated a statistic based on the minimum value of Fisher's exact test. When k > 1, Fu and Curnow (1990) first attempted to search for optimal change points such that the likelihood function is maximized. However, a computationally efficient algorithm is still lacking, especially when k is large.
In this section, we present a new algorithm for multiple change-point detection within a Bernoulli-variable sequence. An exhaustive searching procedure is proposed, but with relatively feasible time complexity. The idea is motivated by the HFS algorithm of Hsieh et al. (2012), which was designed to detect dynamic phase shifts from one episode to another in financial data. By tracking the recurrence of 1's along the time axis, observations are partitioned into disjoint segments with different emergence intensities in a dynamic-programming fashion. Change points between adjacent segments are then detected such that the likelihood or penalized-likelihood function is maximized.

Multiple Change Points Searching Algorithm
For simplicity of computation, we only consider the situation in which a change point is located at an emergence position of 1's. Suppose that the number of 1's in the i-th segment is M_i, so the total number of 1's is M = Σ_{i=1}^{k+1} M_i and the total number of 0's is N − M. Further supposing that a recurrent time can be 0 if two 1's appear consecutively, with R_1 = 0 if E_1 = 1 and R_{M+1} = 0 if E_N = 1, the Bernoulli-variable sequence can be represented by a sequence of recurrent times between consecutive 1's, denoted {R_t}_{t=1}^{M+1}. In particular, there are M_i + 1 recurrent times in the i-th segment, where R_t ~ Geom(p_i). The task then becomes to search for the change points within the recurrent-time sequence.
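The recurrence-time representation above can be sketched as follows; `recurrence_times` is a hypothetical helper name, and the inclusion of the leading and trailing gaps (which equal 0 when the sequence starts or ends with a 1) follows the convention just described.

```python
import numpy as np

def recurrence_times(E):
    """Map a 0-1 sequence to the M + 1 gaps (runs of 0's) around its M
    ones: the gap before the first 1, the M - 1 gaps between consecutive
    1's, and the gap after the last 1."""
    ones = np.flatnonzero(np.asarray(E)) + 1   # 1-based positions of the 1's
    N = len(E)
    lead = ones[0] - 1                         # zeros before the first 1
    trail = N - ones[-1]                       # zeros after the last 1
    between = np.diff(ones) - 1                # zeros between consecutive 1's
    return np.concatenate(([lead], between, [trail]))

R = recurrence_times([0, 1, 1, 0, 0, 1, 0])    # M = 3 ones in N = 7 points
```

Here `R` has length M + 1 = 4, and the consecutive 1's at positions 2 and 3 contribute a recurrent time of 0.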
The searching procedure iteratively takes the smallest number R_min off the remaining R_t's and combines the time points within R_min. For example, if R_min is the recurrent time between j and j′, we combine the locations from (j + 1) to j′ as a time window, denoted w_{(j+1)→j′}. Here, we suppose that E_j = 1, E_{j′} = 1, and E_t = 0 for t ∈ (j, j′). In the next step, if the smallest remaining R_t is the recurrent time between j′ and j″, a new time window is recorded from (j′ + 1) to j″, named w_{(j′+1)→j″}. We can further combine the two consecutive time windows w_{(j+1)→j′} and w_{(j′+1)→j″} into w_{(j+1)→j″}. In effect, we iteratively merge a pair of nearest 1's at each step and update the recorded time windows according to their connectivity. The recorded time windows contain recurrent times with relatively small values, which correspond to periods with a high frequency of 1's. Hence, the boundaries of the time windows can be extracted as potential change-point locations that partition the observations into segments with low and high Bernoulli parameters.
So far, the algorithm works very similarly to hierarchical clustering with single linkage, merging the two closest single 1's or groups from bottom to top. However, this greedy algorithm does not guarantee global optimality. Our remedy is to set a tuning parameter C* controlling the minimal length of the recorded high-intensity segments. Additionally, we count the number of R_t's absorbed within each recorded time window. Continuing the example above, the recurrent-time counts for windows w_{(j+1)→j′} and w_{(j+1)→j″} are denoted C_{(j+1)→j′} = 1 and C_{(j+1)→j″} = 2, respectively. A recorded time window w_{·→·} is regarded as a high-intensity segment only if its count C_{·→·} is greater than the threshold C*. Hierarchical clustering with single linkage is just the special case C* = 0. The other extreme is C* = M, where no period has a count above C*, so no change point exists. Without prior knowledge of the minimal segment length, we run over all choices of C* from 0 to M to generate all possible partitions, and return the optimum that best fits the Bernoulli or geometric observations.
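A much-simplified sketch of this idea is shown below. Instead of the full merge-and-record procedure of Algorithm 1, it closes a window whenever a gap between consecutive 1's exceeds a cutoff and keeps only windows that absorbed more than C* recurrence times; the helper name and the explicit `gap_cut` parameter (standing in for the merge ordering) are assumptions for illustration.

```python
import numpy as np

def single_linkage_windows(E, gap_cut, c_star):
    """Simplified sketch: 1-positions whose consecutive gaps are
    <= gap_cut are merged into one window; a window is kept as
    high-intensity only if it absorbed more than c_star recurrence
    times (i.e., contains more than c_star + 1 ones). Returns
    (start, end) index pairs of the kept windows."""
    ones = np.flatnonzero(np.asarray(E))
    windows, start = [], 0
    for i in range(1, len(ones)):
        if ones[i] - ones[i - 1] - 1 > gap_cut:   # gap too large: close window
            windows.append((start, i - 1))
            start = i
    windows.append((start, len(ones) - 1))
    return [(int(ones[a]), int(ones[b])) for a, b in windows if b - a > c_star]

# Two bursts of 1's separated by a long run of 0's.
W = single_linkage_windows([1,1,1,0,0,0,0,1,1,1,0,0,0], gap_cut=0, c_star=1)
```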
Suppose the observations are partitioned into k + 1 segments via k time-window boundaries, i.e., change points τ̂_1, ..., τ̂_k. The Bernoulli parameter p_i between τ̂_{i−1} and τ̂_i can be estimated by the MLE p̂_i = {# of 1's ∈ (τ̂_{i−1}, τ̂_i]} / (τ̂_i − τ̂_{i−1}). To measure the goodness of fit, model selection is done by maximizing the log-likelihood function within each segment while penalizing the number of change points k and the associated estimated parameters. The penalized function, or loss, can be written as

loss = −2 Σ_{i=1}^{k+1} Σ_{τ̂_{i−1} < t ≤ τ̂_i} log f(E_t; p̂_i) + φ(N) Q_k,   (1)

where Q_k is the total number of parameters, and φ(N) is the penalty coefficient: φ(N) = 2 for AIC and φ(N) = log(N) for BIC.
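A minimal sketch of this segment-wise MLE and penalized loss follows, assuming the common form −2 · log-likelihood + φ(N) · Q_k with Q_k = 2k + 1 (k + 1 segment probabilities plus k locations); the helper name and the exact parameter count are assumptions, not taken from the paper.

```python
import numpy as np

def penalized_loss(E, taus, phi):
    """Segment-wise Bernoulli MLE and a penalized -2 log-likelihood:
    taus are candidate change points, phi the penalty coefficient
    (2 for AIC, log(N) for BIC)."""
    E = np.asarray(E, dtype=float)
    bounds = [0] + list(taus) + [len(E)]
    loglik = 0.0
    for a, b in zip(bounds[:-1], bounds[1:]):
        seg = E[a:b]
        p = seg.mean()                         # segment-wise MLE
        if 0 < p < 1:                          # degenerate segments add 0
            loglik += seg.sum() * np.log(p) + (b - a - seg.sum()) * np.log(1 - p)
    k = len(taus)
    return -2.0 * loglik + phi * (2 * k + 1)   # assumed Q_k = 2k + 1

E = np.array([0] * 50 + [1] * 50)
```

With φ(N) = 2, the true split at t = 50 yields a much smaller loss than no split at all.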
Suppose that W(·) is a mapping that records the corresponding time window of R_t; for example, W(R_t) = w_{(j+1)→j′} where R_t is the recurrent time between j and j′. Note that the segmentation and the loss function can be updated from the results of the previous step. After a loop cycling C* through 0 to M, the total time complexity becomes O(M²). As a result, an optimal window set is returned, and the change-point locations are estimated by its boundaries. The multiple change-point searching algorithm is described in Algorithm 1.

Algorithm 1
Input: unmarked recurrence times {R_t}_t and a threshold C*
Loop: cycle R_t through the order statistics R_(1), R_(2), ..., R_(M+1)
1. Initialize an empty set W recording the high-intensity time windows
2. Consider 4 "if" conditions and obtain a new window w:
   a. If neither R_{t−1} nor R_{t+1} is marked:

MCP for multivariate time series
A large part of the change-point detection literature deals with continuous observations. In this section, we first propose an encoding approach to categorize continuous time series into multiple Bernoulli sequences, and then analyze the change points embedded within the multivariate process. The idea of categorizing real-valued observations is to extract the relevant information and filter out noise. The proposed approach is robust to the underlying distribution and is easily generalized to either categorical or continuous observations.

Encoding continuous time series
In the analysis of single-stock returns, Hsieh et al. (2012) utilized a pair of thresholds to mark stock returns with large absolute value as 1 and 0 otherwise, then revealed the volatility pattern behind the resultant 0-1 sequence. The encoding process is written as E_t = 1{X_t > u or X_t < l}, where u and l are the upper and lower thresholds and {E_t}_t is the excursion sequence marking the stock returns. Later, Wang and Hsieh (2021b) proposed an encoding method to explore the local dependence of observations when X_t ∈ R^p. Following up on this idea, we partition the R^p space into V disjoint subareas, denoted B^(v) for v = 1, 2, ..., V, and transform the continuous observations into Bernoulli sequences E^(j)_t = 1{X_t ∈ B^(j)}. Here, the subarea B^(j) plays an important role in preserving the change-point pattern in a Bernoulli process. Denote the Bernoulli parameter in the i-th segment by p^(j)_i = ∫_{B^(j)} dF_i, where F_i corresponds to the CDF of {X_t}_t in the i-th time segment. Consider two consecutive homogeneous time segments i and i + 1. The change-point detection becomes easier the farther p^(j)_i is from p^(j)_{i+1}, and vice versa. There is actually a tradeoff between the size and the total number of subareas: a larger number of smaller subareas can discover the distributional difference more precisely, but sacrifices statistical power due to the reduced sample size. In the following subsections, we assume that V is fixed and the B^(j) are determined. The implementation of the encoding procedure is discussed in Section 5.
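The encoding step can be sketched as below for the one-dimensional case, where intervals (lo, hi] stand in for the subareas B^(j); in R^p these would be general regions, and the interval endpoints here are illustrative assumptions.

```python
import numpy as np

def encode(X, areas):
    """Encode observations into V Bernoulli sequences: E^(j)_t = 1 iff
    X_t falls in subarea B^(j). Each subarea is an interval (lo, hi]
    here for illustration."""
    X = np.asarray(X)
    return np.stack([((X > lo) & (X <= hi)).astype(int) for lo, hi in areas])

X = [0.1, 2.5, -1.0, 3.0]
Emat = encode(X, [(-np.inf, 0.0), (0.0, 2.0), (2.0, np.inf)])
```

Because the three subareas are disjoint and cover the line, each observation is marked by exactly one of the V = 3 resulting sequences.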

Single Change Point Detection
Starting with the simplest setting, assume that there exists a single change point at τ*. Specifically, X_1, ..., X_{τ*} iid ~ F_1 and X_{τ*+1}, ..., X_N iid ~ F_2, where F_1 and F_2 are two unknown CDFs. The goal is to test the homogeneity of the two sample sets. Following the encoding procedure above, we obtain a multinomial process {(E^(1)_t, ..., E^(V)_t)}_{t=1}^N. Robbins et al. (2011) extended the multivariate CUSUM statistic with uncorrelated components to the multinomial setting and derived its asymptotic distribution under the null hypothesis. The estimators of the Bernoulli parameters at a hypothesized time location τ are defined by p̂^(j)_{1,τ} = (1/τ) Σ_{t≤τ} E^(j)_t and p̂^(j)_{2,τ} = (1/(N−τ)) Σ_{t>τ} E^(j)_t, for j = 1, 2, ..., V. Then, the chi-square statistic χ²_τ proposed by Robbins et al. (2011) compares the category counts before and after τ. Moreover, if there exists no change point under the null hypothesis, the maximally selected chi-square statistic max_τ χ²_τ converges asymptotically to a functional of Brownian motion.
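The scan over candidate cuts can be sketched as follows. The Pearson chi-square form on the 2 × V table of counts before and after τ is an assumption standing in for the exact statistic of Robbins et al. (2011), which may differ in normalization; the helper name is hypothetical.

```python
import numpy as np

def max_chisq_cut(Emat):
    """Scan candidate cuts tau and compute a Pearson chi-square
    statistic on the 2 x V table of category counts before/after tau
    (an assumed stand-in for the exact Robbins et al. statistic)."""
    V, N = Emat.shape
    totals = Emat.sum(axis=1)                  # category counts over all t
    best_tau, best_stat = None, -np.inf
    for tau in range(V, N - V):                # keep both sides non-trivial
        left = Emat[:, :tau].sum(axis=1)
        right = totals - left
        stat = 0.0
        for j in range(V):
            for obs, n_side in ((left[j], tau), (right[j], N - tau)):
                exp = totals[j] * n_side / N   # expected count under H0
                if exp > 0:
                    stat += (obs - exp) ** 2 / exp
        if stat > best_stat:
            best_tau, best_stat = tau, stat
    return best_tau, best_stat

# A clean change at tau* = 50: category 0 before, category 1 after.
Emat = np.zeros((2, 100), dtype=int)
Emat[0, :50] = 1
Emat[1, 50:] = 1
tau_hat, stat = max_chisq_cut(Emat)
```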

Multiple Change Points Detection
We now consider multiple change-point detection when the number of change points k is known. Suppose the change-point locations are 0 = τ*_0 < τ*_1 < ... < τ*_k < τ*_{k+1} = N, with X_t iid ~ F_i for τ*_i < t ≤ τ*_{i+1}, i = 0, 1, ..., k, and consecutive CDFs F_i and F_{i+1} different. A naive search over the O(N^k) possible change-point configurations is computationally intractable. A bisection procedure as in (Vostrikova (1981); Olshen and Venkatraman (2004)), dynamic programming (Harchaoui and Cappe (2007)), or the algorithm we proposed in Section 2 can serve the purpose. Our searching algorithm is favorable in exploring the global optimum, but it is designed only for a one-dimensional Bernoulli-variable sequence.
A divide-and-conquer approach is proposed as a remedy for the multivariate problem. Denote by {E^(j)_t}_{t=1}^N the j-th Bernoulli process after encoding the observations via B^(j), and by p^(j)_i the true parameters of E^(j)_t defined by (4). We first apply Algorithm 1 to estimate the change-point locations within {E^(j)_t} for j = 1, 2, ..., V, respectively. Suppose the estimated change-point locations in the j-th sequence are 0 = τ̂^(j)_0 < τ̂^(j)_1 < ... < τ̂^(j)_{k(j)} < τ̂^(j)_{k(j)+1} = N. Note that the number of change points k(j) does not necessarily equal k; it depends on the way we encode the observations and on the choice of penalty coefficient in (1). The observations are thus partitioned into k(j) + 1 segments, and within-segment points share the same parameter estimate. A vector of length N is then generated to record the estimated Bernoulli parameters, with r^(j)_t = p̂^(j)_i for τ̂^(j)_i < t ≤ τ̂^(j)_{i+1}, i = 0, 1, ..., k(j) and j = 1, 2, ..., V. Repeating the above procedure through the V sequences, we eventually obtain a sequence of V-dimensional estimated parameters, denoted {r_t}_{t=1}^N with r_t = (r^(1)_t, ..., r^(V)_t). Generated by marking samples from subarea B^(j), the Bernoulli-variable sequence E^(j)_t partially preserves the distributional changes of the raw observations. Indeed, the switching pattern of the Bernoulli parameters recorded in the r^(j)_t's is relevant to the distributional changes, while some r^(j)_t's, or at least some subsequences, may act as irrelevant noise, especially when ∫_{B^(j)} dF_i ≈ ∫_{B^(j)} dF_{i+1}. An aggregation statistic is presented to combine all pieces of information from j = 1, 2, ..., V and weight each {E^(j)_t}_{t=1}^N according to its degree of relevance. In this section, we treat every sequence equally for theoretical purposes; the weighting procedure is described in Section 5.
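The construction of the r_t profile can be sketched as follows, given the per-sequence change points already estimated; the helper name `parameter_profile` is hypothetical.

```python
import numpy as np

def parameter_profile(Emat, cps_per_seq):
    """For each encoded sequence j, replace E^(j)_t by the segment-wise
    MLE of its Bernoulli parameter, given that sequence's estimated
    change points; returns the N x V matrix whose rows are the r_t."""
    V, N = Emat.shape
    R = np.empty((N, V))
    for j in range(V):
        bounds = [0] + list(cps_per_seq[j]) + [N]
        for a, b in zip(bounds[:-1], bounds[1:]):
            R[a:b, j] = Emat[j, a:b].mean()    # constant within a segment
    return R

# Sequence 0 has one estimated change point at t = 2; sequence 1 has none.
Emat = np.array([[0, 0, 1, 1],
                 [1, 1, 1, 1]])
R = parameter_profile(Emat, [[2], []])
```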
Different from CUSUM statistics, we consider the within-group variance of {r_t}_t. Given k hypothesized change-point locations τ_1, τ_2, ..., τ_k, the statistic is written as

S(τ_1, ..., τ_k) = Σ_{i=0}^{k} Σ_{τ_i < t ≤ τ_{i+1}} ||r_t − r̄_i||²,   (7)

where r̄_i is the average of r_t over the i-th segment. Change-point locations are then estimated as the minimizers of the within-group variance, (τ̂_1, τ̂_2, ..., τ̂_k) = argmin S(τ_1, ..., τ_k). It is shown in the next section that consistency holds for this statistic. Moreover, it is computationally cheap when k > 1: a hierarchical clustering algorithm returning k + 1 clusters can be implemented to search for the multiple change-point locations.
Stack the estimated parameters {r_t}_{t=1}^N in an N × V design matrix M = [r_1, r_2, ..., r_N]^T. A time-order-kept agglomerative hierarchical clustering algorithm is applied to M to cluster time locations (rows) with comparable V-dimensional covariables. We modify the classical hierarchical clustering algorithm so that only consecutive time points or groups are agglomerated at each iteration, keeping the original time order. A Ward-type linkage is applied to minimize the within-group variance. As a result, k + 1 consecutive time-point clusters are returned, from which the k change-point locations are detected.
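A compact sketch of this time-order-kept clustering is given below: only adjacent segments may merge, and the merge cost is Ward's increase in within-group variance, n_a·n_b/(n_a + n_b)·||mean_a − mean_b||². The function name and the exact stopping rule are illustrative assumptions.

```python
import numpy as np

def ordered_ward(R, n_clusters):
    """Time-order-kept agglomerative clustering: only adjacent segments
    may merge, using Ward's increase in within-group variance as the
    linkage; returns the change-point locations (segment boundaries)."""
    R = np.asarray(R, dtype=float)
    segs = [[t] for t in range(len(R))]          # each time point starts alone
    while len(segs) > n_clusters:
        costs = []
        for i in range(len(segs) - 1):
            a, b = R[segs[i]], R[segs[i + 1]]
            na, nb = len(a), len(b)
            d = a.mean(axis=0) - b.mean(axis=0)
            costs.append(na * nb / (na + nb) * float(d @ d))
        i = int(np.argmin(costs))
        segs[i:i + 2] = [segs[i] + segs[i + 1]]  # merge the cheapest adjacent pair
    return [s[-1] + 1 for s in segs[:-1]]        # boundaries = change points

R = np.array([[0.0]] * 5 + [[1.0]] * 5)          # one clean change at t = 5
cps = ordered_ward(R, n_clusters=2)
```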

Consistency
We present the consistency of the estimated change-point locations obtained from the proposed procedure. We show that if some of the likelihood-based estimators for the individual Bernoulli sequences are consistent, then the estimators derived from the aggregation statistic in (7) also converge to the true change-point locations. We first demonstrate the consistency property in the case of a single change point and then for the multiple change-point setting.
Suppose the true change-point location is τ*, so E^(j)_t iid ~ Bernoulli(p^(j)_{1,τ*}) for t ≤ τ* and E^(j)_t iid ~ Bernoulli(p^(j)_{2,τ*}) for t > τ*, and let τ̂^(j) be the estimated change-point location in {E^(j)_t}_t. To prove consistency, one typically assumes that the sizes of the two halves cut by τ* go to infinity as N → ∞, with the proportion of the first half converging to a constant γ* ∈ (0, 1), i.e., τ*/N → γ* as N → ∞. The within-group variance of (7) at any proportion cut γ can be written, in the finite-sample situation, as

S_N(γ) = Σ_{t ≤ ⌊γN⌋} ||r_t − r̄_1||² + Σ_{t > ⌊γN⌋} ||r_t − r̄_2||².
The theorem below shows that if some of the estimators τ̂^(j) are consistent, then τ̂ consistently converges to τ*. Assume that if p^(j)_{1,τ*} ≠ p^(j)_{2,τ*}, then τ̂^(j)/N converges to γ* asymptotically; otherwise, τ̂^(j)/N converges to 0 or 1, meaning no change point exists in {E^(j)_t}_t. We further assume that there exists at least one encoded Bernoulli sequence such that p^(j)_{1,τ*} ≠ p^(j)_{2,τ*}. Without loss of generality, we suppose that a change point exists in {E^(j)_t}_t for j = 1, 2, ..., u, and no change point exists for j = (u + 1), ..., V, where 1 ≤ u ≤ V.
The assumption ensures that τ̂^(j) is a consistent estimator whenever a change point exists in {E^(j)_t}_t. As long as u ≥ 1, the distributional discrepancy is captured by τ̂. For a Bernoulli-variable sequence, the change-point analysis is relatively easy: one can test for the existence of a single change point and plug in a consistent estimator upon rejection.
In the more general case of multiple change points, suppose that the observations are independent and distributed according to k + 1 distinct distributions. Since {E^(j)_t}_t may preserve only partial information about the distributional discrepancies, the number of change points in {E^(j)_t}_t could be smaller than k and may vary with j. Under the further assumption that consistent estimators exist in each Bernoulli-variable sequence, the theorem below shows the consistency of the aggregation statistic when the number of change points k > 1.
Theorem 2. Define ζ_i = |τ̂_i − τ*_i|/N for i = 1, ..., k. Then, for any ε > 0, P(max_{i=1,...,k} ζ_i > ε) → 0 as N → ∞. Moreover, since each ζ_i converges to 0, uniformly in i, by the assumption, the consistency follows as N goes to infinity.
The theorem requires that each per-sequence estimator is consistent when it exists, and that each true change point has at least one corresponding estimator among the V Bernoulli sequences. Though the assumption is theoretically strong, we have in effect transformed change-point detection for unknown underlying distributions into the analysis of Bernoulli-variable sequences. The task becomes easier since an explicit likelihood function exists without any further assumption on the distribution family, so parametric approaches can be incorporated within the framework. In practice, the searching algorithm advocated in Section 2 can be employed to detect the change points in each Bernoulli process. Another advantage of the encoding-and-aggregation approach is that the error rate of change-point detection can be controlled at the same time, as presented in the next section.

Stability Change Point Analysis
Finally, we come to the most general case in which both the number of change points and their locations are unknown. Current approaches are of two types: model selection and multi-stage testing. A searching algorithm is usually applied in conjunction with a model selection procedure to explore possible numbers of change points, starting from 1 up to some large number. Multi-stage testing tests, at each stage, the null hypothesis that no additional change point is needed when another change point is inserted. However, neither approach provides control of the discovery error of change-point detection; indeed, the result is sensitive to the objective function of the model selection or the significance level of the multi-stage testing.

The Stability Detection Method
In this section, we borrow the idea of stability variable selection and propose a robust change point detection framework, named stability detection.
Stability selection was first advocated by Meinshausen and Buhlmann (2010) to enhance the robustness and control the false discovery rate of variable selection. Half of the samples are randomly selected and fed into a base model at each iteration; the relevant variables are ultimately discovered based on the votes aggregated over all variable selection results. Later, Beinrucker et al. (2016) extended stability selection by sampling disjoint subsets of samples.
Similar to this subsampling strategy, we select, though not randomly, a subset of samples in B^(j) to generate a Bernoulli sequence, and then estimate the number and locations of change points within each Bernoulli sequence, respectively. By treating each time location as a variable, the stability selection framework can be employed to aggregate the estimated change points over B^(j) for j = 1, 2, ..., V. The selected change points are those with votes, or selection probability, above a predetermined threshold. However, it would be unrealistic to break the chronological order and treat each time point as independent of the others; locations near the true change points are considered acceptable results.
Denote by S^(j) the set of change points detected in the Bernoulli sequence {E^(j)_t}_t, and by p^(j)(t) the probability that time point t is selected, i.e., p^(j)(t) = P(t ∈ S^(j)). Aggregating the change-point sets S^(j) over j = 1, 2, ..., V, the selection frequency of time point t is defined by p̂(t) = (1/V) Σ_{j=1}^V 1{t ∈ S^(j)}. Then, the output of stability change-point detection is obtained by thresholding this quantity with a threshold π ∈ (0, 1): Ŝ = {t : p̂(t) ≥ π}.
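The voting-and-thresholding step can be sketched as follows; the helper name `stability_select` and the toy change-point sets are illustrative.

```python
import numpy as np

def stability_select(cp_sets, N, pi):
    """Aggregate per-sequence change-point sets S^(j) into per-time
    selection frequencies and keep locations voted above threshold pi."""
    votes = np.zeros(N)
    for S in cp_sets:
        for t in S:
            votes[t] += 1
    phat = votes / len(cp_sets)                  # selection frequency p-hat(t)
    return phat, [t for t in range(N) if phat[t] >= pi]

# V = 4 sequences: three agree on t = 50; two spurious detections.
phat, kept = stability_select([[50], [50, 51], [50], [120]], N=200, pi=0.5)
```

Only t = 50, voted by 3 of the 4 sequences, survives the threshold π = 0.5.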

Error Control
To evaluate the false discovery rate, we need to define the noisy time points to be excluded from the admissive set. In particular, time points around a true change point τ*_i are admissive, but time points far from it should be excluded. Define A = {t : t ∈ (τ*_i − w_A, τ*_i + w_A), i = 1, 2, ...} as the set of admissive change points, comprising the true change points and their close neighbors; here w_A is an admissive window width, which may change over i. Similarly, define N = {t : t ∉ (τ*_i − w_N, τ*_i + w_N), i = 1, 2, ...} as the set of noisy time points, lying outside the neighborhoods of the true change points, where w_N is a noisy window width. Note that w_A can be narrower than w_N, so that A ⊂ N^C. Suppose that the following assumptions hold for appropriate w_A and w_N.
(i) Σ_{j=1}^V p^(j)(t)/V is identical for any t ∈ N; (ii) Σ_{j=1}^V p^(j)(t)/V is identical for any t ∈ A.
That is, we assume that the noisy time points share the same expected selection probability, and so do the admissive time points. Under these assumptions, the next theorem bounds the expected false positive rate or false negative rate of the change-point detection, depending on the choice of threshold π.
For any π > π_N, the expected false positive rate is bounded as in inequality (14). Proof. The last inequality holds by Markov's inequality and the condition that π > π_N.
Moreover, the indicators 1{t ∈ S^(j)} are independent across j = 1, 2, ..., V. This holds because we select disjoint samples to make up {E^(j)_t}_t, so for a fixed time t its selection does not rely on the iteration index j. The resulting probability can be further bounded via the Chernoff upper bound. By further assuming Σ_{j=1}^V p^(j)(t) identical for t ∈ N, we can cancel |N| in both numerator and denominator, so inequality (14) is obtained. Inequality (15) can be proved similarly via the Chernoff lower bound.
As the bound on the false positive or false negative rate decays with V, one is tempted to choose the number of iterations as large as possible. But this significantly harms the power of change-point detection due to the reduced sample size of the recurrent times. To control the false discovery rate from both sides, one should increase the signal-selection rate p_A^V and decrease the noise-selection rate p_N^V, and it is ideal to set the threshold π in between. Recall the definition of the selection set S^(j) := {τ̂^(j)_i, i = 1, ..., k(j)}, where τ̂^(j)_i is the i-th estimator of the j-th Bernoulli sequence. Then p_A^V can be simplified accordingly. Thus, with a fixed width w_A, a good estimator τ̂^(j)_i is favored, so that it is close to the true change-point location with high probability.
Another way to increase p_A^V is to slightly expand the selection set S^(j), that is, to select the estimators together with their neighbors, S^(j) = {t : t ∈ neig(τ̂^(j)_i), i = 1, ..., k(j)}. A wider neighbor set neig(·) is better at absorbing admissive change points but runs the risk of involving noise. In the change-point analysis of a sequence of Bernoulli variables, it was illustrated in Section 2 that a change point is estimated at the locations of 1's. A conservative way of expanding the selection set is to include the estimator and the locations between the previous and the next 1.

Subsampling and Weighting Strategy
From an application perspective, two practical problems remain. First, how to generate the series of subareas {B^(j)}_{j=1,...,V} in the encoding phase. Second, how to weight the contribution of each encoded Bernoulli sequence {E^(j)_t}_t according to its degree of relevance; a follow-up question is how to measure the goodness of fit for each {E^(j)_t}_t and weight the contributions accordingly. In this section, we resolve both problems via a subsampling-weighting technique.
For the first problem, a natural approach is to apply clustering analysis to obtain V disjoint clusters as the {B^(j)}. But this raises a robustness issue with respect to the number of clusters, and the second question becomes even harder due to the unbalanced cluster sizes. The model selection criterion in (1) can measure the goodness of fit only if the cluster sizes are balanced. To ensure robustness and efficiency, we instead generate a larger number of clusters with fixed cluster size, so overlaps are allowed. Our numerical experiments show that the method is not sensitive to the choice of V. We advocate choosing a larger number, V = 50, with a fixed subsampling proportion M/N = 0.1, so that each sample is expected to be selected 5 times.
Denote by X = [X_1, X_2, ..., X_N] an N × p matrix recording the time series {X_t}_{t=1}^N with X_t ∈ R^p. The subsampling algorithm is as follows. We first apply K-means to X to get V cluster centroids, then cycle through the centroids and search for the M nearest neighbors of each in X. At each iteration we mark those M samples as 1 and the other N − M as 0, so V Bernoulli sequences are returned. Where no confusion arises, we denote the M samples marked at the j-th step by B^(j).
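The neighbor-marking step can be sketched as below, assuming the V centroids have already been computed (e.g., by K-means); the helper name is hypothetical.

```python
import numpy as np

def mark_neighbors(X, centroids, M):
    """Given V cluster centroids, mark for each centroid its M nearest
    samples as 1 and the rest as 0, yielding V (possibly overlapping)
    Bernoulli sequences as the rows of the returned matrix."""
    X = np.asarray(X, dtype=float)
    C = np.asarray(centroids, dtype=float)
    Emat = np.zeros((len(C), len(X)), dtype=int)
    for j, c in enumerate(C):
        d = np.linalg.norm(X - c, axis=1)          # distances to centroid j
        Emat[j, np.argsort(d, kind="stable")[:M]] = 1
    return Emat

# Two well-separated 1-d clusters; each centroid marks its own M = 2 points.
Emat = mark_neighbors([[0.0], [1.0], [10.0], [11.0]], [[0.0], [10.0]], M=2)
```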
Since the weight should be inversely related to the model selection criterion value, or loss, in (1), one can consider a mapping F: R → R to scale the quantity, so that the weight w^(j) measuring the importance of the j-th Bernoulli sequence is defined by w^(j) = F(loss^(j)) / Σ_{v=1}^V F(loss^(v)) with F decreasing (16); the weights then enter the time-order-kept clustering algorithm mentioned in Section 3.3. Another weighting technique is based on the iterative weighting algorithm proposed in Wang and Hsieh (2021b). In the simple case where only one change point exists within a Bernoulli process, it is hard or even impossible to detect the parameter change if the two Bernoulli parameters are too close. One can thus qualify the goodness of fit via the difference between p^(j)_{1,τ*} and p^(j)_{2,τ*}, or the estimated difference |p̂^(j)_{1,τ̂} − p̂^(j)_{2,τ̂}|. If we assume that the two segments have equal size, the estimated difference simplifies to measuring the proportions of the two recovered segments within B^(j): the purer B^(j) is, the better E^(j)_t can be fitted. This motivates measuring the Shannon entropy within B^(j) as an approximation when k > 1.
Denote the weight of the j-th sequence at the current step by w_c^(j) and the entropy of the set B^(j) at the current step by H_c(B^(j)). We can iteratively apply the clustering algorithm to the weighted matrix in (17) and update the entropy H_c(B^(j)) based on the segments recovered in the current step. The weight at the next step is then updated as an equal-weight mixture, w^(j)_{c+1} = 0.5 w^(j)_c + 0.5 times the entropy-based weight of the current step, iterating until convergence. Here the factor 0.5 is set to smooth the learning curve and to keep the sum of the weights equal to 1.
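One iteration of this update can be sketched as below. The exact form of the entropy-based weight is not reproduced here; the score 1/(1 + H), renormalized, is an assumption of ours used purely for illustration, while the 0.5/0.5 smoothing and final renormalization follow the text.

```python
import numpy as np

def segment_entropy(marks, segments):
    """Shannon entropy of the marked set B^(j) across recovered segments:
    0 when all marks fall in one segment (pure), larger when they spread out."""
    m = np.asarray(marks)
    counts = np.array([m[s:e].sum() for s, e in segments], dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def update_weights(w, marks_list, segments):
    """One smoothing step: w_{c+1} = 0.5 * w_c + 0.5 * (entropy-based weight).
    The score 1/(1 + H), renormalized, is an ASSUMED form of the
    entropy-based weight; renormalizing keeps the weights summing to 1."""
    score = np.array([1.0 / (1.0 + segment_entropy(m, segments))
                      for m in marks_list])
    score /= score.sum()
    w_next = 0.5 * np.asarray(w, dtype=float) + 0.5 * score
    return w_next / w_next.sum()

segments = [(0, 5), (5, 10)]
pure   = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # all marks in one segment
spread = [1, 0, 1, 0, 0, 1, 0, 1, 0, 1]  # marks split across segments
w = update_weights([0.5, 0.5], [pure, spread], segments)
```

As expected, the purer sequence receives the larger weight after one step.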

Numerical Experiment
In this section, we simulate various univariate and multivariate distributions with a known and unknown number of change points to illustrate our model performance.
When k is known, the time-order-kept hierarchical clustering algorithm is implemented with the weighting techniques proposed in (16) and (18). To differentiate the two weighting techniques, we refer to (16) as 'simple weighting' and (18) as 'iterative weighting'. We compare the performance of our approaches with other nonparametric methods: E-Divisive (Matteson and James (2014)), Kernel Multiple Change Point (KernelMCP) (Arlot et al. (2019)), and MultiRank (Lung-Yut-Fong et al. (2015)). For fairness of comparison, all procedures are conducted with the number of change points k known. The results are reported in Section 6.1 and Section 6.2 for the univariate and multivariate settings, respectively. When k is unknown, since it is hard to quantify the false discovery rate, only stability detection is applied, in Section 6.3.
Our method was implemented with V = 50, cluster proportion M/N = 0.1, and φ(N) = 2 (AIC). In the iterative weighting, we further set the iteration number R = 150 and stop when the weights do not change for 10 steps. E-Divisive was implemented via the ecp package with the tuning parameters α = 1 and R = 499 advocated in the paper. KernelMCP was implemented via the Python package Chapydette using the default setting (Gaussian kernel with Euclidean distance, bandwidth = 0.1, and α = 2). For MultiRank, we use the R code provided in the supplementary file of Matteson and James (2014).
To quantify the performance of a change-point detection result, we calculate the Adjusted Rand Index (ARI) (Hubert and Arabie (1985)) between the recovered segments and the true segments. The Rand Index (RI) (Rand (1971)) was originally used to measure the similarity between two data clusterings. As a corrected-for-chance version of RI, the ARI adjusts for the chance grouping of elements. An ARI value of 1 corresponds to a perfect result, while values at or below 0 imply that the recovered segments differ from the underlying segments.
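For segmentations, the ARI can be computed by labeling each time point with its segment index and comparing the two labelings through their contingency table; a self-contained sketch:

```python
import numpy as np
from math import comb

def adjusted_rand_index(true_labels, pred_labels):
    """ARI from the contingency table of two labelings; segment memberships
    (one label per time point) can be compared directly this way."""
    t = np.asarray(true_labels)
    p = np.asarray(pred_labels)
    n = len(t)
    _, ti = np.unique(t, return_inverse=True)
    _, pi = np.unique(p, return_inverse=True)
    cont = np.zeros((ti.max() + 1, pi.max() + 1), dtype=int)
    np.add.at(cont, (ti, pi), 1)
    sum_ij = sum(comb(int(x), 2) for x in cont.ravel())
    a = sum(comb(int(x), 2) for x in cont.sum(axis=1))
    b = sum(comb(int(x), 2) for x in cont.sum(axis=0))
    expected = a * b / comb(n, 2)
    max_index = 0.5 * (a + b)
    return (sum_ij - expected) / (max_index - expected)

true = [0] * 5 + [1] * 5                 # two true segments
perfect = adjusted_rand_index(true, true)  # 1.0 for identical segmentations
```

A recovered change point one position off the truth yields an ARI strictly between 0 and 1.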

Univariate Simulation
In this section, we simulate univariate distributions that differ in variance or tailedness. Three data segments are generated sequentially with distributions N(0, 1), G, N(0, 1), respectively. For changes in variance, G ∼ N(0, σ²); for changes in tailedness, G ∼ t_df(0, 1). Unbalanced segments are generated with sample sizes n, 2n, n, respectively. The size n is varied over n = 100, 200, 300 while the proportions of the three segments are kept the same.
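The generating scheme above can be sketched directly (the function name is ours):

```python
import numpy as np

def simulate_univariate(n=100, change="variance", sigma2=4.0, df=3, seed=0):
    """Three segments N(0,1), G, N(0,1) of sizes n, 2n, n (total 4n).
    change='variance': G = N(0, sigma2); change='tail': G = t_df."""
    rng = np.random.default_rng(seed)
    seg1 = rng.normal(0.0, 1.0, n)
    if change == "variance":
        seg2 = rng.normal(0.0, np.sqrt(sigma2), 2 * n)
    else:
        seg2 = rng.standard_t(df, 2 * n)
    seg3 = rng.normal(0.0, 1.0, n)
    return np.concatenate([seg1, seg2, seg3])

x = simulate_univariate(n=100)  # true change points at t = 100 and t = 300
```

The middle segment shares the mean of its neighbors, so any detector keying on location shifts alone would miss both change points.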
Given the number of change points, ARI values measuring recovery accuracy are compared for the proposed approaches, E-Divisive, KernelMCP, and RankMCP in Table 1 and Table 2. Results show that E-Divisive outperforms the others in the setting of changes in variance. The overall performance is worse in the setting of changes in tailedness, where the iterative weighting approach performs slightly better than the others. KernelMCP has an advantage when G is Gaussian but fails otherwise, showing that the kernel-type method is very sensitive to the choice of kernel. As a nonparametric approach designed for changes in mean, RankMCP consistently fails in both settings.
It is worth remarking that our approach is implemented under unfavorable conditions here, since we attempt to cluster univariate observations with a large number of clusters. Indeed, the encoding phase can be modified accordingly by applying quantile thresholds and marking extreme observations below or above those thresholds. More discussion on encoding a one-dimensional process can be found in Wang and Hsieh (2021a).

Multivariate Simulation
Following the generation scheme above, we simulate multivariate observations in this section. The observations are drawn from N_d(0, I), N_d(0, Σ), N_d(0, I), respectively. In the first part, we consider bivariate normal distributions, in which Σ = [[1, ρ], [ρ, 1]] with varying correlation ρ. The ARI values against E-Divisive and KernelMCP are compared in Table 3. Given a moderate ρ value, the weighting procedures have comparable ARI values and outperform E-Divisive and KernelMCP. When ρ is extremely large and the sample size is larger, the bivariate normal distribution effectively degenerates to a univariate Gaussian, which explains why the ARIs of E-Divisive and KernelMCP catch up at ρ = 0.9 and n = 300.
In the second part, we simulate observations of dimension d = 3, 5, 10. Since KernelMCP is not easily adapted to dimensions greater than 2, we only compare the performance of simple weighting, iterative weighting, and E-Divisive. Two types of Σ are imposed in the generation. Σ_1 has diagonal elements 1 and all off-diagonal elements ρ; Σ_2 has diagonal elements 1 and elements ρ on the ±1 off-diagonals. Table 4 shows that simple weighting is more favorable in identifying the change-point locations in the case of Σ_1; in the more complicated case of Σ_2, iterative weighting performs the best.
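The two covariance structures can be constructed as follows. Note the banded form of Σ_2 is our reading of "±1 off-diagonal" (entries with |i − j| = 1); the function names are ours.

```python
import numpy as np

def sigma_1(d, rho):
    # Compound symmetry: 1 on the diagonal, rho in every off-diagonal entry.
    return (1.0 - rho) * np.eye(d) + rho * np.ones((d, d))

def sigma_2(d, rho):
    # Banded form (assumed reading): rho only where |i - j| == 1.
    S = np.eye(d)
    i = np.arange(d - 1)
    S[i, i + 1] = rho
    S[i + 1, i] = rho
    return S

S1, S2 = sigma_1(3, 0.5), sigma_2(4, 0.3)
```

Both matrices are symmetric with unit diagonal, but Σ_2 zeroes out all dependence beyond adjacent coordinates, which makes the distributional change harder to detect.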

Simulation for Stability Detection
Stability detection is applied when the number of change points is unknown. Suppose we encode the continuous observations into V Bernoulli sequences, so that V change-point sets are obtained in total. Instead of weighting each voting set equally, we employ the simple weighting technique and weight the votes according to goodness-of-fit.
Denote by G_2 the bivariate normal distribution N_2(0, Σ) with Σ = [[1, 0.7], [0.7, 1]]. In the first scenario, three bivariate normal segments are generated from N_2(0, I), G_2, N_2(0, I) with sample sizes 300, 600, 300, respectively. In the change-point analysis of each Bernoulli sequence, the model selection criterion is applied with different penalty coefficients φ(N), ranging from 2 (corresponding to AIC) to log(N) (corresponding to BIC). We partition the time axis into disjoint time bins of equal length. The probability of selection is then calculated based on the summed votes falling in each bin. The results are not sensitive to the choice of the penalty term: 6 or 7 curves remain above the others regardless of the penalty coefficient.
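The vote aggregation step can be sketched as below. This is an illustrative reading of the procedure, with assumed inputs: each Bernoulli sequence contributes its change-point locations, weighted by its simple weight, and bin totals are normalized into selection probabilities.

```python
import numpy as np

def selection_probability(vote_sets, weights, N, bin_len=10):
    """Aggregate weighted change-point votes into per-bin selection
    probabilities: each detected location adds its sequence's weight
    to the bin it falls in, and the bin totals are normalized."""
    nbins = -(-N // bin_len)  # ceiling division
    votes = np.zeros(nbins)
    for w, cps in zip(weights, vote_sets):
        for t in cps:
            votes[min(t // bin_len, nbins - 1)] += w
    total = votes.sum()
    return votes / total if total > 0 else votes

# two sequences agree near t = 100; only the first also votes near t = 305
p = selection_probability([[100, 305], [102]], [0.6, 0.4], N=400)
```

Bins where several well-fitting sequences concur receive the highest probability, which is what the local maxima of the stability curves pick out.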

Genome Data
CpG dinucleotide clusters, or 'CpG islands', are genome subsequences with a relatively high number of CG dinucleotides (a cytosine followed by a guanine). They are observed close to transcription start sites (Saxonov et al. (2006)) and play a crucial role in gene expression regulation and cell differentiation (Bird (2002)). Many computational tools have been developed for CpG island identification. A sliding window is typically employed to scan the genome sequence and flag CpG islands based on filtering criteria. However, these criteria involve subjective choices (G+C proportion, observed-versus-expected ratio, etc.) and have evolved over time. It commonly happens that different CpG island finders provide different results.
In this section, we apply our change-point detection approach to a categorical nucleotide sequence. We believe that the proposed algorithm is able to detect abrupt changes in C-G patterns, and the estimated change-point locations may help researchers identify potential CpG islands. A contig (accession number NT_000874.1) on human chromosome 19 is taken as an example for CpG island searching. The dataset is available on the website of the National Center for Biotechnology Information (NCBI).
Denote the genome sequence as {X_t}_{t=1}^N with X_t ∈ {A, G, T, C}. In the encoding phase, a 0-1 sequence {E_t}_t is generated such that E_t = 1 if X_t = C and X_{t+1} = G, and E_t = 0 otherwise, for t = 1, ..., N − 1. Algorithm 1 is implemented to search for multiple change points in the Bernoulli sequence. Results from the CpG island searching software CpGIE (Wang and Leung (2004)) are shown as a benchmark for comparison. The criteria advocated by its authors are employed when running CpGIE (length ≥ 500 bp, G+C content ≥ 50%, and CpG O/E ratio ≥ 0.60). Note that our algorithm does not need any such assumption or tuning parameter. The result in Figure 4 shows a high proportion of overlapping segments between our detections and CpGIE's. Our approach also finds additional genome subsequences with a higher number of C-Gs that are missed by CpGIE.
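The encoding phase here is a one-liner (shown with 0-based indexing; the function name is ours):

```python
def encode_cg(seq):
    """0-1 encoding of a nucleotide string: E_t = 1 iff position t holds
    a C immediately followed by a G, for t = 0, ..., N-2."""
    return [1 if seq[t] == 'C' and seq[t + 1] == 'G' else 0
            for t in range(len(seq) - 1)]

print(encode_cg("ACGTCGA"))  # [0, 1, 0, 0, 1, 0]
```

A CpG island then appears as a stretch where the Bernoulli success probability of this sequence is elevated, which is exactly the parameter change Algorithm 1 searches for.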

Hurricane Data
It is widely recognized that the global temperature has risen due to anthropogenic factors, such as increased carbon dioxide emissions and other human activities. According to NOAA's 2020 global climate report, the annual temperature has increased globally at an average rate of 0.14 degrees Fahrenheit per decade since 1880, and over twice that rate (0.32 degrees Fahrenheit) since 1981. Climatologists have argued that the warmer sea surface leads to an increasing number of stronger tropical cyclones (Emanuel (2005); Saunders and Lea (2008)). However, Landsea et al. (2010) argue that the warmer sea surface increases only weak cyclones, which are short-lived and even hard to detect. In this section, we study the number of cyclones between 1851 and 2019, aiming to detect potential change points embedded within the tropical cyclone history.
The HURDAT2 dataset recording the activities of cyclones in the Atlantic basin is available on the website of the National Hurricane Center (NHC). NHC tracked the intensity of each tropical cyclone every 6 hours (at hours 0, 6, 12, and 18). The intensity level is categorized by wind strength in knots: hurricane (intensity greater than 64 knots), tropical storm (intensity between 34 and 63 knots), and tropical depression (intensity less than 34 knots). Different from Robbins et al. (2011), who categorize cyclones, we count the number of time units during which a category is observed, so the count is at most 4 × 31 in a month. The monthly frequency of cyclones at tropical storm level or higher is reported in Figure 5(A). If we apply the 5 change points detected at the local maxima of the stability detection in Figure 5(B), the time range is partitioned according to the variation of the storm count. Figure 5(A) shows that storms were more active in the 1880s, the 1960s, and after 2000. Though the global temperature has trended upward since 1980, storms were relatively sparse between 1980 and 2000. Thus, we tend to believe that no firm conclusion can yet be drawn that higher temperatures increase the number of hurricanes.
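The counting scheme above can be sketched as follows. The record format is simplified to (year, month, wind) tuples for illustration; the real HURDAT2 rows carry more fields.

```python
from collections import Counter

def monthly_storm_units(records, min_knots=34):
    """Count 6-hourly observation units per (year, month) at tropical-storm
    strength or above (>= 34 knots); the maximum is 4 * 31 = 124 per month.
    records: iterable of (year, month, wind_in_knots) observation tuples."""
    counts = Counter()
    for year, month, wind in records:
        if wind >= min_knots:
            counts[(year, month)] += 1
    return counts

obs = [(2005, 8, 70), (2005, 8, 40), (2005, 8, 20), (2005, 9, 65)]
counts = monthly_storm_units(obs)  # {(2005, 8): 2, (2005, 9): 1}
```

Counting observation units rather than distinct cyclones weights long-lived storms more heavily, which is the point of departure from the categorization in Robbins et al. (2011).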

Financial Data
Lastly, the proposed approach is applied to detect abrupt time-varying dependence within bivariate stock log returns. CTSH and IBM are chosen as representatives of the IT Consulting subcategory of the S&P 500 based on the Global Industry Classification Standard (GICS). The first and last hours of the trading day are filtered out (leaving 10am to 4pm), and hourly price returns are calculated over the business days of 2006. A constant is added to the returns of CTSH for better visualization in Figure 6(A), but the raw return series are analyzed. The lagged correlation statistics are not significant based on the sample autocorrelation function of the stock returns. Conditional heteroskedasticity could be studied with a more sophisticated time series model, such as GARCH, but that is beyond our focus here. We encode the bivariate time series and apply the stability detection technique. Figure 6(B) shows that there exist 3 or 4 change points within the returns. The top 3 change-point locations with the highest probability are marked by vertical lines in Figure 6(A), showing that the returns are partitioned into segments with different volatility levels. Looking further into the scatterplots between CTSH and IBM under the different time partitions (left, middle, and right segments) in Figure 7, both returns in the middle phase are relatively high, and their correlation is even stronger.
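The segment-wise correlation check behind Figure 7 can be sketched as below; the toy data and function name are ours, standing in for the actual CTSH/IBM returns.

```python
import numpy as np

def segment_correlations(x, y, change_points):
    """Pearson correlation of two return series within each segment
    defined by the detected change-point locations."""
    bounds = [0] + sorted(change_points) + [len(x)]
    return [float(np.corrcoef(x[s:e], y[s:e])[0, 1])
            for s, e in zip(bounds[:-1], bounds[1:])]

# toy series: the two returns co-move only in the middle third
rng = np.random.default_rng(0)
z = rng.normal(size=300)                      # shared driver
x = rng.normal(size=300)
y = rng.normal(size=300)
x[100:200] = z[100:200] + 0.1 * rng.normal(size=100)
y[100:200] = z[100:200] + 0.1 * rng.normal(size=100)
cors = segment_correlations(x, y, [100, 200])
```

Comparing the per-segment correlations against the full-sample correlation makes the dependence change visible even when the marginal returns look stationary.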

Conclusion
In this paper, we have established a framework to encode a sequence of continuous observations into several Bernoulli processes and proposed approaches for change-point detection in univariate and multivariate settings with or without a known number of change points. Theoretical work shows that the proposed method enjoys both asymptotic properties and finite-sample error control. Numerical and real-data experiments show that the approach is able to detect various types of distributional changes and can be applied to categorical, ordinal, and continuous data. Furthermore, the computational expense is reasonable, with the time complexity of the most expensive part being O(V M²) or O(V N²), and parallel programming can decrease the complexity to O(N²).

Figure 1 :
Figure 1: probability of selection with different penalty coefficients φ(N); different time bins are plotted as different curves

Figure 3 :
Figure 3: (A) probability of selection with n = 400; (B) probability of selection with n = 500

Figure 4 :
Figure 4: encoded DNA sequence: CG dinucleotides are marked in black; the CpG islands discovered by CpGIE are marked in green; the estimated change-point locations are marked in red

Figure 5 :
Figure 5: (A) monthly hurricane counts in the Atlantic basin from 1851 to 2019; estimated change points are plotted as vertical lines. (B) probability of selection for all time points; local maxima are plotted as vertical lines

Figure 6 :
Figure 6: (A) hourly returns of CTSH and IBM in 2006; the top 3 change points with the highest probability of selection are plotted as vertical lines. (B) probability of selection for all time points

Table 1 :
ARI values in univariate Gaussian setting

Table 2 :
ARI values in univariate student-t setting

Table 3 :
ARI values in 2-dim Gaussian setting