
Modeling Structural Breaks in Disturbances Precision or Autoregressive Parameter in Dynamic Model: A Bayesian Approach

Abstract

The focus of this paper is the examination of dynamic models in the presence of structural changes arising either in the disturbances precision or in the autoregressive parameter under the Bayesian framework. The Bayesian analysis of the dynamic model is carried out under a mixture of prior distributions for the parameters. The posterior distributions of the parameters are derived to obtain Bayes estimators under a quadratic loss function, ignoring the possibility of structural breaks in the regression coefficients. A posterior odds ratio is developed for testing whether the structural change is due to the disturbance precision as against the autoregressive parameter. The theoretical framework is also tested empirically on a data set of Indian companies covering financial variables such as debt, profitability, and investment over the global financial crisis (GFC) period. The empirical exercise highlights 2008–09 as the major structural break point, when many Indian companies suffered losses.

Introduction

Dynamic models are widely used in communications and by control engineers to monitor and control the state of a system as it evolves through time. The state of the system may be random, as physical systems often follow a random distribution. The study of structural changes is important in various economic, social, physical, and biological processes and has led to rapidly developing statistical techniques. The instability of the process is represented either by a change point parameter or by a transition function, and the posterior analysis focuses on the posterior mass function of the change point or on the posterior density of the two parameters of the transition function. For time-series processes and dynamic models, the examination of structural change needs more emphasis, and forecasting methods that incorporate structural change deserve much attention. Bayesian ideas have played an important role in the development of estimation, identification, and control of linear dynamic models. Zellner (1971) was one of the pioneering researchers who contributed significantly to estimating and detecting structural shifts of linear, dynamic, time series, and distributed lag models under the Bayesian framework. Structural change in linear models and time series models has been considered by Poirier (1976), Holbert and Broemeling (1977), Choy and Broemeling (1980), Salazar et al. (1981), and Tsurumi (1980).

The earliest proponents of Bayesian switching regression and shifting normal sequences were Broemeling (1972) and Quandt (1972, 1974). Subsequently, Holbert (1973), Holbert and Broemeling (1977), Chin Choy (1977), Yu-Ming (1979), Salazar (1980, 1981), Smith (1975, 1977, 1980), and Tsurumi (1977, 1978), among others, contributed extensively to the Bayesian structural change literature. Analysis of the Bayesian linear model in the presence of autocorrelation was carried out by Broemeling and Tsurumi (1987). Ng (1990) extended this work by incorporating a mean shift and disturbances precision. An analysis of multiple change points in independent observations was performed by Inclan and Tiao (1994). Wang and Zivot (2000) examined dynamic time series models with multiple structural breaks explicitly. Lately, dynamic factor modelling has been developed to account for structural breaks. A counterfactual analysis of structural change employing a Bayesian structure was developed by Kim et al. (2004), with applications to volatility in US real GDP growth. Recently, Slama and Saggou (2017) considered Bayesian analysis of a possible change in an AR(p) process and determined an unconditional Bayesian test based on the HPD region for detecting the change in parameters, assuming the change point to be unknown. A briefing on vital advances in the modeling of structural change is provided by Hansen (2001), covering varied structural break methodologies, estimation of breaks, and distinguishing unit roots from discontinuous time series, with illustrations on US labor productivity.

The estimation and detection of structural breaks in dynamic models have been researched recently by many authors. Stock and Watson (2002) argued that factor models can capture either breaks in the factor loadings or parameter drift in the series. del Negro and Otrok (2008) suggested a model where the factor loadings are modeled as random walks. Banerjee and Marcellino (2009) investigated the consequences of time variation in the factor loadings for forecasting, based on Monte Carlo simulation, and found worse forecasts for small samples. Breitung and Eickmeier (2011) discussed structural breaks in the dynamic factor model and showed that structural breaks severely inflate the number of factors identified by the usual information criteria. Chaturvedi and Shrivastava (2016) also discussed structural shifts in the linear model involving changes in either the regression parameters or the disturbances precision using the Bayesian approach. Dadashova et al. (2014) proposed a Bayesian model selection methodology based on Zellner’s (1971) explanatory model with autoregressive errors. The present paper contributes to the literature by detecting and modelling structural break points in dynamic models when breaks occur either in the disturbance precision or in the autoregressive parameter. In order to account for the structural change in the model, a combination of prior distributions is considered. The posterior odds ratio is developed under the assumption that the disturbance precision leads to the structural change as against the autoregressive parameter. The theoretical framework is also tested empirically on a data set of Indian companies covering financial variables such as debt, profitability, and investment over the global financial crisis (GFC) period. The empirical exercise highlights 2008–09 as the structural break point, validating the framework.

The paper is organized as follows. The dynamic model with a structural change in either the autoregressive parameter or the disturbances precision, along with the prior modelling, is described in Sect. 2, followed by the derivation of the posterior distributions and Bayes estimators of the parameters in Sect. 3. The posterior odds ratio is elaborated in Sect. 4, followed by a discussion of the methodology for capturing break points in Sect. 5. Thereafter, an empirical application and validation of the framework is illustrated in Sect. 6, followed by a summary of findings in the concluding section.

Dynamic Model and Prior Distribution

Consider the following dynamic model involving structural break in autoregressive parameter at known break point \({n}_{1}+1\):

$${y}_{t}-{\mu }_{t}={\rho (y}_{t-1}-{\mu }_{t-1})+{u}_{t}; t=\mathrm{1,2},\dots ,{n}_{1}$$
$${y}_{t}-{\mu }_{t}=\left({\rho +\vartheta )(y}_{t-1}-{\mu }_{t-1}\right)+{u}_{t}; t={n}_{1}+1,\dots ,{n}_{1}+{n}_{2}\left(=n\right)$$
(1)
$$E\left({y}_{t}\right)={\mu }_{t}={x}_{t}^{^{\prime}}\beta$$
(2)

where \({\mathrm{y}}_{\mathrm{t}}\) is the tth observation of the dependent variable, the disturbance terms \({\mathrm{u}}_{\mathrm{t}}\)'s are iid random variables following \(\mathrm{N}\left(0,{\uptau }^{-1}\right)\), \({x}_{t}\) is a \(k\times 1\) vector of \(k\) explanatory variables for \(\mathrm{t}=\mathrm{1,2},\dots ,\mathrm{n}\), and \(\beta\) is the \(k\times 1\) vector of regression coefficients. Here \({\mathrm{n}}_{1}+1\) is the break point, which is considered to be known. The autoregressive coefficient \(\rho\) changes to \(\rho +\vartheta\) after the break at \({n}_{1}+1\).

The model (1) can be written as

$${y}_{t}={\rho y}_{t-1}+{\left({x}_{t}-\rho {x}_{t-1}\right)}^{^{\prime}}\beta +{u}_{t}; t=\mathrm{1,2},\dots ,{n}_{1}$$
$${y}_{t}=({\rho +\vartheta )y}_{t-1}+{\left({x}_{t}-(\rho +\vartheta ){x}_{t-1}\right)}^{^{\prime}}\beta +{u}_{t}; t={n}_{1}+1,\dots ,{n}_{1}+{n}_{2}\left(=n\right)$$
(3)

or

$${y}_{t}={{x}_{t}}^{^{\prime}}\beta +{\left(1-\rho L\right)}^{-1}{u}_{t}; t=\mathrm{1,2},\dots ,{n}_{1}$$
$${y}_{t}={x}_{t}^{^{\prime}} \beta +{\left(1-\left(\rho +\vartheta \right)L\right)}^{-1} {u}_{t}; t={n}_{1}+1,\dots ,{n}_{1}+{n}_{2}\left(=n\right)$$
(4)

Here “L” denotes the lag operator defined as \(L{y}_{t}={y}_{t-1}.\) In formulation (4), the break occurs in the autoregressive parameter, but it is reflected in the variance (or precision) of the composite error term \({\left(1-\left(\rho +\vartheta \right)L\right)}^{-1}{u}_{t}\). Thus, any structural shift present in the autoregressive parameter can easily be mistaken for a break in the disturbances variance/precision, or vice versa. However, the two structural breaks have different implications and interpretations and, for the empirical econometrician, it is imperative to distinguish between them.

We consider the model which has a structural shift either in the autoregressive parameter (with probability \(1-\epsilon\)) or in the disturbances precision parameter (with probability \(\epsilon\)):

$${y}_{t}={\rho y}_{t-1}+{\left({x}_{t}-\rho {x}_{t-1}\right)}^{^{\prime}}\beta +{u}_{t}; t=\mathrm{1,2},\dots ,{n}_{1}$$
$${y}_{t}=({\rho +\vartheta )y}_{t-1}+{\left({x}_{t}-\left(\rho +\vartheta \right){x}_{t-1}\right)}^{^{\prime}}\beta +{u}_{t}; t={n}_{1}+1,\dots ,{n}_{1}+{n}_{2}\left(=n\right)$$
(5)

or

$${y}_{t}={\rho y}_{t-1}+{\left({x}_{t}-\rho {x}_{t-1}\right)}^{^{\prime}}\beta +{u}_{t}; t=\mathrm{1,2},\dots ,{n}_{1}$$
$${y}_{t}={\rho y}_{t-1}+{\left({x}_{t}-\rho {x}_{t-1}\right)}^{^{\prime}}\beta +{\delta }^{-\frac{1}{2}}{u}_{t}; t={n}_{1}+1,\dots ,{n}_{1}+{n}_{2}\left(=n\right)$$
(6)

The model allows for a structural break at a known time point, but whether the break is actually due to a shift in the autoregressive parameter or in the disturbances precision is not known.
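To make the two competing specifications concrete, the following sketch simulates data from either model (5) or model (6). The sample sizes, parameter values, and the intercept-plus-trend regressors are illustrative choices only, not values used in the paper.

```python
import numpy as np

def simulate_break_model(n1=60, n2=40, rho=0.5, beta=(1.0, 0.3), tau=4.0,
                         vartheta=0.3, delta=0.25, break_in="ar", seed=0):
    """Simulate y_0,...,y_n from model (5) (break_in="ar") or model (6)
    (break_in="precision"), with x_t = (1, t)' as illustrative regressors."""
    rng = np.random.default_rng(seed)
    n = n1 + n2
    beta = np.asarray(beta)
    X = np.column_stack([np.ones(n + 1), np.arange(n + 1)])   # rows x_t'
    mu = X @ beta                                             # E(y_t) = x_t' beta
    y = np.empty(n + 1)
    y[0] = mu[0]                                              # start the recursion at its mean
    for t in range(1, n + 1):
        u = rng.normal(0.0, tau ** -0.5)                      # u_t ~ N(0, 1/tau)
        if t <= n1:
            y[t] = mu[t] + rho * (y[t - 1] - mu[t - 1]) + u
        elif break_in == "ar":        # model (5): AR parameter shifts to rho + vartheta
            y[t] = mu[t] + (rho + vartheta) * (y[t - 1] - mu[t - 1]) + u
        else:                         # model (6): disturbance scaled by delta**(-1/2)
            y[t] = mu[t] + rho * (y[t - 1] - mu[t - 1]) + delta ** -0.5 * u
    return y, X

y, X = simulate_break_model(break_in="precision")
```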

The proper selection of priors for the parameters plays an important role in deriving a precise posterior. The researcher's prior knowledge makes a significant contribution to the calculation of the posterior distribution. Generally, the calculation of an explicit expression for the posterior density becomes quite complex, and often infeasible, due to the presence of high-order integrals. If exact integration is not possible, simulating a sufficiently large number of observations from the posterior distribution of the unknown parameter and approximating the mean or other useful statistics from the simulated observations is useful for inference. In such scenarios, Markov Chain Monte Carlo (MCMC) methods are widely employed to derive marginal densities. When the conditional posterior densities are of known or unknown form, one may use the Gibbs sampling or the Metropolis–Hastings sampling scheme, respectively, to obtain the marginal posterior densities of the parameters.
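For readers unfamiliar with the samplers mentioned above, the following minimal sketch shows a generic random-walk Metropolis–Hastings routine for a scalar parameter. It is not tied to the specific conditional posteriors derived in the next section, and the toy log-posterior in the usage line is purely illustrative.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_draws=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings sampler for a scalar parameter.
    log_post returns the log of the (unnormalised) posterior density."""
    rng = np.random.default_rng(seed)
    draws = np.empty(n_draws)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_draws):
        prop = theta + step * rng.standard_normal()           # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:              # accept/reject step
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

# toy usage: target proportional to a N(1, 0.5^2) density
draws = metropolis_hastings(lambda t: -0.5 * ((t - 1.0) / 0.5) ** 2, theta0=0.0)
print(draws[1000:].mean())   # approximate posterior mean after discarding burn-in
```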

If we set \({x}_{t}=1;\mathrm{t}=\mathrm{1,2},\dots ,\mathrm{n}\) we get the following AR(1) model:

$${y}_{t}=\left(1-\rho \right)\beta +{\rho y}_{t-1}+{u}_{t} ;\mathrm{t}=\mathrm{1,2},\dots ,{\mathrm{n}}_{1}$$
$${y}_{t}=\left\{\begin{array}{c}\begin{array}{c}\left(1-\left(\rho +\vartheta \right)\right)\beta +{\left(\rho +\vartheta \right)y}_{t-1}+{u}_{t}\\ or \end{array}\\ (1-\rho ){\beta +\rho y}_{t-1}+{\delta }^{-\frac{1}{2}}{u}_{t }\end{array}\right.;t={n}_{1}+1,{n}_{1}+2,\dots ,n$$
(7)

Further for \({x}_{t}=\left(\begin{array}{c}1\\ t\end{array}\right)\), and \(\beta =\left(\begin{array}{c}{\beta }_{1}\\ {\beta }_{2}\end{array}\right)\) we get the AR(1) model with linear trend and shift in parameters:

$${y}_{t}=\left(1-\rho \right){\beta }_{1}+\left(1-\rho \right){\beta }_{2}t{+\rho y}_{t-1}+{u}_{t} ;\mathrm{t}=\mathrm{1,2},\dots ,{\mathrm{n}}_{1}$$
$${y}_{t}=\left\{\begin{array}{c}\begin{array}{c}\left(1-\left(\rho +\vartheta \right)\right){\beta }_{1}{{+\left(1-\left(\rho +\vartheta \right)\right)\beta }_{2}t+\left(\rho +\vartheta \right)y}_{t-1}+{u}_{t}\\ or \end{array}\\ {{\left(1-\rho \right)\beta }_{1}+\left(1-\rho \right){\beta }_{2}t+\rho y}_{t-1}+{\delta }^{-\frac{1}{2}}{u}_{t}\end{array}\right.;t={n}_{1}+1,\dots ,{n}_{1}+{n}_{2}=n$$
(8)

When \(\vartheta =0\), i.e., the shift is due to a change in the disturbances precision (variance), the pdf of \(y=({y}_{1},\dots ,{y}_{n})\) is given by

$${p}_{0}\left(y|\vartheta =0,\delta ,\beta ,\tau ,\rho \right)$$
$$={\left(\frac{\tau }{2\pi }\right)}^\frac{n}{2}{\delta }^{\frac{{n}_{2}}{2}}exp\left[-\frac{\tau }{2}\left\{\sum_{t=1}^{{n}_{1}}{\left\{{y}_{t}-{\rho y}_{t-1}-{\left({x}_{t}-\rho {x}_{t-1}\right)}^{^{\prime}}\beta \right\}}^{2}+\delta \sum_{t={n}_{1}+1}^{n}{\left\{{y}_{t}-{\rho y}_{t-1}-{\left({x}_{t}-\rho {x}_{t-1}\right)}^{^{\prime}}\beta \right\}}^{2}\right\}\right]$$
(9)

Further, when the shift is due to a change in the autoregressive parameter, the pdf of \(y\) can be expressed as

$${p}_{1}\left(y|\vartheta ,\delta =1,\beta ,\tau ,\rho \right)$$
$$={\left(\frac{\tau }{2\pi }\right)}^\frac{n}{2}exp\left[-\frac{\tau }{2}\left\{\sum_{t=1}^{{n}_{1}}{\left\{{y}_{t}-{\rho y}_{t-1}-{\left({x}_{t}-\rho {x}_{t-1}\right)}^{^{\prime}}\beta \right\}}^{2}+\sum_{t={n}_{1}+1}^{n}{\left\{{y}_{t}-{\left(\rho +\vartheta \right)y}_{t-1}-{\left({x}_{t}-\left(\rho +\vartheta \right){x}_{t-1}\right)}^{^{\prime}}\beta \right\}}^{2}\right\}\right]$$
(10)

Hence the likelihood function is given by

$$p\left(y|X,\beta ,\tau ,\delta ,\rho ,\vartheta \right)$$
$$=\epsilon {p}_{0}\left(y|\vartheta =0,\delta ,\beta ,\tau ,\rho \right)+\left(1-\epsilon \right){p}_{1}\left(y|\vartheta ,\delta =1,\beta ,\tau ,\rho \right)$$
$$=\epsilon {\left(\frac{\tau }{2\pi }\right)}^\frac{n}{2}{\delta }^{\frac{{n}_{2}}{2}}exp\left[-\frac{\tau }{2}\left\{\sum_{t=1}^{{n}_{1}}{\left\{{y}_{t}-{\rho y}_{t-1}-{\left({x}_{t}-\rho {x}_{t-1}\right)}^{^{\prime}}\beta \right\}}^{2}+\delta \sum_{t={n}_{1}+1}^{n}{\left\{{y}_{t}-{\rho y}_{t-1}-{\left({x}_{t}-\rho {x}_{t-1}\right)}^{^{\prime}}\beta \right\}}^{2}\right\}\right]$$
$$+\left(1-\epsilon \right){\left(\frac{\tau }{2\pi }\right)}^\frac{n}{2}exp\left[-\frac{\tau }{2}\left\{\sum_{t=1}^{{n}_{1}}{\left\{{y}_{t}-{\rho y}_{t-1}-{\left({x}_{t}-\rho {x}_{t-1}\right)}^{^{\prime}}\beta \right\}}^{2}+\sum_{t={n}_{1}+1}^{n}{\left\{{y}_{t}-{\left(\rho +\vartheta \right)y}_{t-1}-{\left({x}_{t}-\left(\rho +\vartheta \right){x}_{t-1}\right)}^{^{\prime}}\beta \right\}}^{2}\right\}\right]$$
(11)
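The mixture likelihood (11) is straightforward to evaluate numerically. The sketch below does so on the log scale for stability; it assumes that the observation vector includes the presample value \(y_0\) and that the matrix X stacks the rows \(x_t^{\prime}\), conventions chosen here for illustration.

```python
import numpy as np
from scipy.special import logsumexp

def mixture_loglik(y, X, beta, tau, rho, vartheta, delta, n1, eps):
    """Log of the mixture likelihood (11).
    y has length n+1 and includes the presample value y_0;
    X is the (n+1) x k regressor matrix with rows x_t'."""
    n = len(y) - 1
    n2 = n - n1
    e = y - X @ beta                                   # y_t - x_t' beta
    r_pre = e[1:n1 + 1] - rho * e[:n1]                 # residuals for t = 1,...,n1
    r_prec = e[n1 + 1:] - rho * e[n1:-1]               # post-break residuals, model (6)
    r_ar = e[n1 + 1:] - (rho + vartheta) * e[n1:-1]    # post-break residuals, model (5)
    base = 0.5 * n * np.log(tau / (2 * np.pi))
    ss_pre = np.sum(r_pre ** 2)
    # log of eps * p0 (precision shift) and (1 - eps) * p1 (AR shift)
    log_p0 = (np.log(eps) + base + 0.5 * n2 * np.log(delta)
              - 0.5 * tau * (ss_pre + delta * np.sum(r_prec ** 2)))
    log_p1 = (np.log(1 - eps) + base
              - 0.5 * tau * (ss_pre + np.sum(r_ar ** 2)))
    return logsumexp([log_p0, log_p1])
```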

Posterior Distributions and Bayes Estimators

For the posterior analysis, we require the following lemma, the proof of which is straightforward:

Lemma 1

Let \(y\) be a random vector having pdf \({p}_{1}(y\left|\theta \right)\) under model \({M}_{1}\) with probability \((1-\epsilon )\) and pdf \({p}_{2}(y\left|\theta \right)\) under model \({M}_{2}\) with probability \(\epsilon , \theta \in \Theta\). Then the distribution of \(y\) is the mixture \(f\left(y|\theta \right)=(1-\epsilon ){p}_{1}(y\left|\theta \right)+\epsilon {p}_{2}(y|\theta )\). Further, let the prior distribution of \(\theta\) be \(\pi (\theta )\). Then the posterior distribution of \(\theta\) is given by

$${\pi }^{*}\left(\theta |y\right)=\lambda \left(y\right){\pi }_{1}\left(\theta |y\right)+\left(1-\lambda \left(y\right)\right){\pi }_{2}\left(\theta |y\right)$$
(12)

where \({\pi }_{i}\left(\theta |y\right), i=\mathrm{1,2}\) is the posterior distribution for \(\theta\) under the model \({M}_{i};i=\mathrm{1,2}\), and

$$\lambda \left(y\right)=\frac{\left(1-\epsilon \right){m}_{1}\left(y\right)}{\left(1-\epsilon \right){m}_{1}\left(y\right)+\epsilon {m}_{2}\left(y\right)}.$$
(13)

Here \({m}_{i}\left(y\right)={\int }_{\theta \in \Theta }{p}_{i}(y\left|\theta \right)\pi \left(\theta \right)d\theta\) is the predictive density under the model \({M}_{i};i=\mathrm{1,2}\).
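As a small numerical illustration of Lemma 1, the weight \(\lambda(y)\) in (13) can be computed directly from the two predictive densities; the numbers below are arbitrary toy values.

```python
def mixture_weight(m1, m2, eps):
    """Posterior weight lambda(y) of model M1 from Eq. (13) of Lemma 1."""
    return (1 - eps) * m1 / ((1 - eps) * m1 + eps * m2)

# toy predictive densities m1(y), m2(y) and prior model probability eps
print(mixture_weight(m1=2.0e-5, m2=5.0e-6, eps=0.5))   # prints 0.8
```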

The prior distribution for \(\beta\) is a multivariate normal distribution with mean \({\beta }_{0}\) and precision matrix \(\tau V\), with probability density function given by

$$p\left(\beta |\tau \right)={\left(\frac{\tau }{2\pi }\right)}^\frac{k}{2}{\left|V\right|}^\frac{1}{2}exp\left[-\frac{\tau }{2}{\left(\beta -{\beta }_{0}\right)}^{^{\prime}}V\left(\beta -{\beta }_{0}\right)\right]$$
(14)

Here the hyperparameter \({\upbeta }_{0}\) is a \(\mathrm{k}\times 1\) vector and \(\tau \mathrm{V}\) represents the prior precision matrix, with V known.

The prior distribution for \(\uptau\) is

$$p\left(\tau \right)\propto \frac{1}{\tau }; 0<\tau <\infty$$
(15)

For \(\delta =1,\) the joint prior distribution for \((\rho ,\vartheta )\) is taken as

$$p\left(\rho ,\vartheta \right)=2; 0<\vartheta +\rho <1, \rho ,\vartheta >0$$
(16)

The rationale behind taking \(0<\vartheta +\rho <1\) is that we assume the model to be stationary before and after the shift. Then, the conditional prior density of \(\rho\) given \(\vartheta\) is

$$p\left(\rho |\vartheta \right)=\frac{1}{1-\vartheta }; 0<\rho <1-\vartheta$$
(17)

Similarly, the conditional prior for \(\vartheta\) given \(\rho\) is

$$p\left(\vartheta |\rho \right)=\frac{1}{1-\rho }; 0<\vartheta <1-\rho$$
(18)

Further, for \(\vartheta =0\), the prior distributions for \(\rho\) and \(\delta\) are

$$p\left(\rho \right)\propto 1; 0<\rho <1,$$
(19)
$$p\left(\delta \right)\propto 1 ; 0<\delta <1$$
(20)

Since \(\epsilon\) is the probability that the shift is due to a change in the disturbances precision and \(\left(1-\epsilon \right)\) is the probability that the shift is due to a change in the autoregressive parameter, we have

$$\left\{\begin{array}{c}\epsilon =P\left(\vartheta =0, 0<\delta <1\right)\\ \\ 1-\epsilon =P\left(0<\vartheta <1-\rho , \delta =1\right)\end{array}\right.$$
(21)

One may refer to Appendix 1 for detailed derivations and mathematical notations of the posterior distributions given in the following Theorems (1), (2), (3), (4), and (5).

Let us write

$${\pi }_{1}\left(\rho |\beta ,\delta ,\vartheta \right)={C}_{1\rho }^{-1}\frac{1}{1-\vartheta }\frac{1}{{\left\{{\left(\rho -{\widehat{\rho }}^{*}\right)}^{2}{A}_{1}+{\phi }_{1}\left(\beta ,\vartheta \right)\right\}}^\frac{n}{2}}$$
$${C}_{1\rho }=\frac{\mathrm{\rm B}\left(\frac{1}{2},\frac{n-1}{2}\right)}{\left(1-\vartheta \right){\phi }_{1}{\left(\beta ,\vartheta \right)}^{\frac{n-1}{2}}{A}_{1}^\frac{1}{2}}\left[{F}_{n-1}\left(\left(1-\vartheta -{\widehat{\rho }}^{*}\right)\sqrt{\frac{\left(n-1\right){A}_{1}}{{\phi }_{1}\left(\beta ,\vartheta \right)}}\right)+{F}_{n-1}\left({\widehat{\rho }}^{*}\sqrt{\frac{\left(n-1\right){A}_{1}}{{\phi }_{1}\left(\beta ,\vartheta \right)}}\right)-1\right]$$
$${\pi }_{2}\left(\rho |\beta ,\delta \right)={\mathrm{C}}_{2\uprho }^{-1}\frac{1}{{\left\{{\left(\rho -\widehat{\rho }\right)}^{2}{A}_{2}+{\phi }_{2}\left(\beta ,\delta \right)\right\}}^\frac{n}{2}}$$
$${C}_{2\rho }=\frac{\mathrm{\rm B}\left(\frac{1}{2},\frac{n-1}{2}\right)}{{\phi }_{2}{\left(\beta ,\delta \right)}^{\frac{n-1}{2}}{A}_{2}^\frac{1}{2}}\left[{F}_{n-1}\left(\left(1-\widehat{\rho }\right)\sqrt{\frac{\left(n-1\right){A}_{2}}{{\phi }_{2}\left(\beta ,\delta \right)}}\right)+{F}_{n-1}\left(\widehat{\rho }\sqrt{\frac{\left(n-1\right){A}_{2}}{{\phi }_{2}\left(\beta ,\delta \right)}}\right)-1\right].$$
$${\lambda }_{\rho }\left(y\right)=\frac{\left(1-\epsilon \right){C}_{1\rho }}{\left(1-\epsilon \right){C}_{1\rho }+\epsilon {C}_{2\rho }},$$

\({F}_{n-1}(t)\) denotes the cdf of t-distribution with (n − 1) degrees of freedom.

Theorem 1

The posterior density of \(\uprho\) given \(\left(\upbeta ,\mathrm{\vartheta },\updelta \right)\) is given by

$${\pi }^{*}\left(\rho |\beta ,\vartheta ,\delta \right)={\lambda }_{\rho }\left(y\right){\pi }_{1}\left(\rho |\beta ,\vartheta \right)+\left(1-{\lambda }_{\rho }\left(y\right)\right){\pi }_{2}\left(\rho |\beta ,\delta \right)$$
(22)
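Given the quantities \(A_1, A_2, \phi_1, \phi_2, {\widehat{\rho }}^{*}\) and \(\widehat{\rho }\) computed from the data (their definitions appear in Appendix 1), the normalising constants and the mixing weight of Theorem 1 only involve the cdf of the t-distribution. A possible implementation is sketched below; the inputs are assumed to have been computed beforehand.

```python
import numpy as np
from scipy.special import beta as beta_fn
from scipy.stats import t as t_dist

def c1_rho(n, vartheta, A1, phi1, rho_hat_star):
    """Normalising constant C_{1,rho} of pi_1(rho | beta, vartheta)."""
    scale = np.sqrt((n - 1) * A1 / phi1)
    bracket = (t_dist.cdf((1 - vartheta - rho_hat_star) * scale, df=n - 1)
               + t_dist.cdf(rho_hat_star * scale, df=n - 1) - 1.0)
    return (beta_fn(0.5, (n - 1) / 2)
            / ((1 - vartheta) * phi1 ** ((n - 1) / 2) * np.sqrt(A1))) * bracket

def c2_rho(n, A2, phi2, rho_hat):
    """Normalising constant C_{2,rho} of pi_2(rho | beta, delta)."""
    scale = np.sqrt((n - 1) * A2 / phi2)
    bracket = (t_dist.cdf((1 - rho_hat) * scale, df=n - 1)
               + t_dist.cdf(rho_hat * scale, df=n - 1) - 1.0)
    return beta_fn(0.5, (n - 1) / 2) / (phi2 ** ((n - 1) / 2) * np.sqrt(A2)) * bracket

def lambda_rho(eps, c1, c2):
    """Mixing weight lambda_rho(y) appearing in Theorem 1."""
    return (1 - eps) * c1 / ((1 - eps) * c1 + eps * c2)
```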

We define

$${\pi }_{1}\left(\beta |\rho ,\vartheta \right)={C}_{1\beta }^{-1}\frac{{2}^{\frac{\mathrm{n}}{2}}\Gamma \left(\frac{n+k}{2}\right)}{{{\pi }^\frac{k}{2}\left\{{\phi }_{3}\left(\rho ,\vartheta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)}^{^{\prime}}\left({A}_{3}(\rho ,\vartheta )+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)\right\}}^{\frac{n+k}{2}}}$$
$${C}_{1\beta }\equiv {C}_{1\beta } (\rho ,\vartheta )=\frac{{2}^{\frac{\mathrm{n}}{2}}\Gamma \left(\frac{n}{2}\right)}{{\left|{A}_{3}(\rho ,\vartheta )+V\right|}^\frac{1}{2}{{\phi }_{3}\left(\rho ,\vartheta \right)}^\frac{n}{2}}$$
$${\pi }_{2}\left(\beta |\rho ,\delta \right)={C}_{2\beta }^{-1}\frac{{2}^{\frac{\mathrm{n}}{2}}\Gamma \left(\frac{n+k}{2}\right)}{{{\pi }^\frac{k}{2}\left\{{\phi }_{4}\left(\rho ,\delta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)}^{^{\prime}}\left({A}_{4}+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)\right\}}^{\frac{n+k}{2}}}$$
$${C}_{2\beta }\equiv {C}_{2\beta } (\rho ,\delta )=\frac{{2}^{\frac{\mathrm{n}}{2}}\Gamma \left(\frac{n}{2}\right)}{{\left|{A}_{4}\left(\rho ,\delta \right)+V\right|}^\frac{1}{2}{{\phi }_{4}\left(\rho ,\delta \right)}^\frac{n}{2}}$$
$${\lambda }_{\beta }\left(y\right)\equiv {\lambda }_{\beta }\left(y|\rho ,\vartheta ,\delta \right)=\frac{\left(1-\epsilon \right){C}_{1\beta }}{\left(1-\epsilon \right){C}_{1\beta }+\epsilon {C}_{2\beta }}$$

Theorem 2

The posterior distribution of \(\beta\) given \((\rho ,\delta ,\vartheta )\) is obtained as

$${\pi }^{*}\left(\beta |\rho ,\vartheta ,\delta \right)={\lambda }_{\beta }\left(y\right){\pi }_{1}\left(\beta |\rho ,\vartheta \right)+\left(1-{\lambda }_{\beta }\left(y\right)\right){\pi }_{2}\left(\beta |\rho ,\delta \right)$$
(23)

We write

$${\pi }_{1}\left(\tau |\rho ,\vartheta \right)=\frac{{\phi }_{3}{\left(\rho ,\vartheta \right)}^\frac{n}{2}{\tau }^{\frac{n}{2}-1}}{{2}^\frac{n}{2}\Gamma \left(\frac{n}{2}\right)}exp\left[-\frac{\tau }{2}{\phi }_{3}\left(\rho ,\vartheta \right)\right]$$
$${\pi }_{2}\left(\tau |\rho ,\delta \right)=\frac{{\phi }_{4}{\left(\rho ,\delta \right)}^\frac{n}{2}{\tau }^{\frac{n}{2}-1}}{{2}^\frac{n}{2}\Gamma \left(\frac{n}{2}\right)}exp\left[-\frac{\tau }{2}{\phi }_{4}\left(\rho ,\delta \right)\right]$$
$${\lambda }_{\tau }\left(y\right)= \frac{\left(1-\epsilon \right){\phi }_{4}{\left(\rho ,\delta \right)}^\frac{n}{2}}{\left(1-\epsilon \right){\phi }_{4}{\left(\rho ,\delta \right)}^\frac{n}{2}+\epsilon {\phi }_{3}{\left(\rho ,\vartheta \right)}^\frac{n}{2}}$$

Theorem 3

The conditional posterior distribution of \(\tau\) given \(\left(\rho ,\delta ,\vartheta \right)\) is given by

$${\pi }^{*}\left(\tau |\rho ,\vartheta ,\delta \right)={\lambda }_{\tau }\left(y\right){\pi }_{1}\left(\tau |\rho ,\vartheta \right)+\left(1-{\lambda }_{\tau }\left(y\right)\right){\pi }_{2}\left(\tau |\rho ,\delta \right)$$
(24)
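Since \(\pi_1(\tau|\rho,\vartheta)\) and \(\pi_2(\tau|\rho,\delta)\) are gamma densities with shape \(n/2\) and rates \(\phi_3/2\) and \(\phi_4/2\), a draw of \(\tau\) from the mixture (24) can be generated in two steps, as in the sketch below; \(\phi_3\) and \(\phi_4\) are assumed to be already evaluated at the current \((\rho,\vartheta,\delta)\).

```python
import numpy as np

def draw_tau(n, phi3, phi4, eps, rng=None):
    """Draw tau from the two-component mixture posterior of Theorem 3:
    Gamma(n/2, rate phi3/2) with probability lambda_tau(y), otherwise
    Gamma(n/2, rate phi4/2)."""
    rng = rng or np.random.default_rng()
    # lambda_tau(y) written in a numerically safer ratio form
    lam = 1.0 / (1.0 + (eps / (1.0 - eps)) * (phi3 / phi4) ** (n / 2))
    phi = phi3 if rng.uniform() < lam else phi4
    # numpy's gamma generator uses shape and scale = 1/rate
    return rng.gamma(shape=n / 2, scale=2.0 / phi)
```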

We define

$${\Upsilon }_{1}={\int }_{0}^{1}{\int }_{0}^{1}\frac{{\delta }^{\frac{{n}_{2}}{2}}}{{\phi }_{4}{\left(\rho ,\delta \right)}^\frac{n}{2}{\left|{A}_{4}\left(\rho ,\delta \right)+V\right|}^\frac{1}{2}}d\rho d\delta$$
$${\Upsilon }_{2}={\int }_{0}^{1}{\int }_{0}^{1-\vartheta }\frac{2}{{{\phi }_{3}\left(\rho ,\vartheta \right)}^\frac{n}{2}{\left|{A}_{3}\left(\rho ,\vartheta \right)+V\right|}^\frac{1}{2}}d\rho d\vartheta$$
$${p}^{*}\left(\vartheta |y\right)=\frac{\frac{1}{1-\vartheta }{\int }_{0}^{1-\vartheta }{{\phi }_{3}\left(\rho ,\vartheta \right)}^{-\frac{n}{2}}{\left|{A}_{3}\left(\rho ,\vartheta \right)+V\right|}^{-\frac{1}{2}}d\rho }{{\int }_{0}^{1}\frac{1}{1-\vartheta }{\int }_{0}^{1-\vartheta }{{\phi }_{3}\left(\rho ,\vartheta \right)}^{-\frac{n}{2}}{\left|{A}_{3}\left(\rho ,\vartheta \right)+V\right|}^{-\frac{1}{2}}d\rho d\vartheta };\left(0<\vartheta <1\right)$$
$${p}^{*}\left(\delta |y\right)=\frac{{\int }_{0}^{1}{\delta }^{\frac{{n}_{2}}{2}}{{\phi }_{4}\left(\rho ,\delta \right)}^{-\frac{n}{2}}{\left|{A}_{4}\left(\delta ,\rho \right)+V\right|}^{-\frac{1}{2}}d\rho }{{\int }_{0}^{1}{\int }_{0}^{1}{\delta }^{\frac{{n}_{2}}{2}}{{\phi }_{4}\left(\rho ,\delta \right)}^{-\frac{n}{2}}{\left|{A}_{4}\left(\delta ,\rho \right)+V\right|}^{-\frac{1}{2}}d\rho d\delta };(0<\delta <1)$$
$${\lambda }_{{M}_{1}\left(y\right)}=\frac{{\Upsilon }_{1}\left(1-\epsilon \right)}{{\Upsilon }_{1}\left(1-\epsilon \right)+{\Upsilon }_{2}\epsilon }=1-{\lambda }_{{M}_{2}\left(y\right)}$$

Theorem 4

The posterior distribution of \(\vartheta\) is a mixture of a discrete and a continuous distribution and is given by

$${\pi }^{*}\left(\vartheta |y\right)=\left\{\begin{array}{c}{\lambda }_{{M}_{1}\left(y\right)}; if \vartheta =0\\ \left(1-{\lambda }_{{M}_{1}\left(y\right)}\right){p}^{*}\left(\vartheta |y\right); if 0<\vartheta <1\end{array}\right.$$
(25)

Theorem 5

The posterior distribution of \(\delta\) is a mixture of a discrete and a continuous distribution and is given by

$${\pi }^{*}\left(\delta |y\right)=\left\{\begin{array}{c}1-{\lambda }_{{M}_{1}\left(y\right)}; if \delta =1\\ {\lambda }_{{M}_{1}\left(y\right)}{p}^{*}\left(\delta |y\right); if 0<\delta <1\end{array}\right.$$
(26)
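Theorems 4 and 5 describe mixed discrete–continuous posteriors. A simple way to draw from such a distribution, sketched below for \(\vartheta\), is to flip a coin with probability \(\lambda_{M_1}(y)\) for the point mass and otherwise sample the continuous part on a grid; the grid-based draw is an illustrative device, not the authors' procedure.

```python
import numpy as np

def draw_vartheta(lam_M1, p_star, grid, rng=None):
    """Draw vartheta from the mixed posterior of Theorem 4:
    vartheta = 0 with probability lam_M1, otherwise a draw from the
    continuous part p*(vartheta | y), approximated on a grid in (0, 1)."""
    rng = rng or np.random.default_rng()
    if rng.uniform() < lam_M1:
        return 0.0                          # point mass at vartheta = 0
    w = p_star(grid)                        # (unnormalised) density on the grid
    return rng.choice(grid, p=w / w.sum())  # simple grid-based draw
```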

Posterior Odds Ratio

The Bayesian framework also handles the comparison of two competing models using the posterior odds ratio, which has been illustrated by many researchers. Typically, the posterior odds ratio is defined as the product of the prior odds and the Bayes factor. Further, the Bayes factor between two competing models is the ratio of the likelihoods integrated with respect to the corresponding priors and measures how strongly one model is preferred over the other. A value of the posterior odds ratio above one indicates that the data prefer the model assumptions in the numerator over those of the model in the denominator.

In this section we examine the problem of testing the hypothesis that the structural change is due to a shift in the disturbances precision, i.e. \({H}_{0}:\vartheta =0, \delta <1\), against the alternative that the change is due to a shift in the autoregressive parameter, i.e. \({H}_{1}:\vartheta \ne 0, \delta =1\).

The posterior odds ratio in favor of \({H}_{0}\) is defined as

$${\uplambda }_{\mathrm{0,1}}=\left(\frac{\upepsilon }{1-\upepsilon }\right)\frac{\uppi \left({\mathrm{H}}_{0}\right)}{\uppi \left({\mathrm{H}}_{1}\right)}=\left(\frac{\upepsilon }{1-\upepsilon }\right)\frac{{\int }_{0}^{1}{\int }_{0}^{1}{\int }_{0}^{\infty }{\int }_{{R}^{k}}p\left({H}_{0}\right)p\left(\beta \right)p\left(\tau \right)p\left(\rho \right)p\left(\delta \right) d\beta d\tau d\rho d\delta }{{\int }_{0}^{1}{\int }_{0}^{1-\vartheta }{\int }_{0}^{\infty }{\int }_{{R}^{k}}p\left({H}_{1}\right)p\left(\beta \right)p\left(\tau \right)p\left(\rho ,\vartheta \right) d\beta d\tau d\rho d\vartheta }$$
(27)

where \(p\left({H}_{0}\right)\) is the likelihood function under \({H}_{0}:\vartheta =0, 0<\delta <1\), and \(p\left({H}_{1}\right)\) is the likelihood function under \({H}_{1}:\vartheta \ne 0,\delta =1\).

After simplification (see Appendix 1), the posterior odds ratio reduces to

$${\lambda }_{\mathrm{0,1}}=\left(\frac{\upepsilon }{1-\upepsilon }\right)\frac{{\Upsilon }_{1}}{{\Upsilon }_{2}}$$
(28)

Hence, we reject \({H}_{0}\) if \({\lambda }_{\mathrm{0,1}}<1 ,\) otherwise we accept it.
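The posterior odds ratio (28) only requires the two double integrals \({\Upsilon }_{1}\) and \({\Upsilon }_{2}\). The sketch below evaluates them by adaptive quadrature, assuming the user supplies callables phi3, A3, phi4, A4 implementing the corresponding Appendix 1 quantities for the data at hand; the quadrature routine is one convenient choice among many.

```python
import numpy as np
from scipy.integrate import dblquad

def posterior_odds(eps, n, n2, V, phi3, A3, phi4, A4):
    """Evaluate lambda_{0,1} of Eq. (28) by numerical quadrature.
    phi3(rho, vartheta), A3(rho, vartheta), phi4(rho, delta), A4(rho, delta)
    are user-supplied callables implementing the Appendix 1 quantities."""
    def f1(rho, delta):                    # integrand of Upsilon_1
        return (delta ** (n2 / 2)
                / (phi4(rho, delta) ** (n / 2)
                   * np.sqrt(np.linalg.det(A4(rho, delta) + V))))

    def f2(rho, vartheta):                 # integrand of Upsilon_2
        return 2.0 / (phi3(rho, vartheta) ** (n / 2)
                      * np.sqrt(np.linalg.det(A3(rho, vartheta) + V)))

    # dblquad integrates the inner variable (rho) first
    ups1, _ = dblquad(f1, 0.0, 1.0, lambda d: 0.0, lambda d: 1.0)
    ups2, _ = dblquad(f2, 0.0, 1.0, lambda v: 0.0, lambda v: 1.0 - v)
    odds = eps / (1.0 - eps) * ups1 / ups2
    return odds, odds < 1.0                # second element True -> reject H0
```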

Here we have assumed the break point to be known, as our main focus is to develop the model and the posterior odds ratio for examining whether the shift in the model is due to a shift in the autoregressive coefficient or a shift in the variance of the error terms. However, the proposed framework can easily be extended to the case where the break point is unknown and lies in some known time interval, by treating it as an unknown parameter and working out its posterior distribution under an appropriate prior for the break point. Another possibility one may encounter is that, instead of a sudden shift at a time point, there is a gradual shift in the parameters spread over an interval. We also assume the prior probabilities \(\epsilon\) and \(1-\epsilon\) for the competing models to be known; they are used as mixing probabilities while defining the prior distributions for the parameters. One may also consider the case where \(\epsilon\) is an unknown parameter with an appropriate hyper-prior density having support on the interval (0,1), derive its conditional posterior, and then replace it by its Bayes estimator while evaluating the posterior odds ratio. We leave these extensions for future work.

Approximation of Bayes Factor

Friel and Pettitt (2008) and Han and Carlin (2001) discussed approximation methods for calculating the posterior odds ratio or Bayes factor by constructing a path linking two competing models, with the Bayes factor estimated as a ratio of normalizing constants. We consider the path sampling approach, in which one chooses values of \(s\in [\mathrm{0,1}]\) so that a path of models links models \({M}_{0}\) and \({M}_{1}\), as follows

$$\mathrm{Model }m\left(0\right): {y}_{t}={{\rho y}_{t-1}+x}_{t}^{\mathrm{^{\prime}}}\beta +{u}_{t}$$
(29)
$$\mathrm{Model }m\left(1\right): {y}_{t}={{\rho y}_{t-1}+x}_{t}^{\mathrm{^{\prime}}}\beta +{\delta }^{-\frac{1}{2}}{u}_{t}\mathrm{ or}$$
$$\mathrm{Model }m\left(1\right): {y}_{t}={{\left(\rho +\vartheta \right)y}_{t-1}+x}_{t}^{^{\prime}}\beta +{u}_{t}$$
(30)

The intermediate model \(m(s)\) joining the above two models is defined as

$$\mathrm{Model }m\left(s\right): {y}_{t}=(1+{{s\left(\rho +k\Delta \right))y}_{t-1}+x}_{t}^{\mathrm{^{\prime}}}\beta +\{1+s{(k+\left(1-k\right){\delta }^{-\frac{1}{2}})\}u}_{t}$$
(31)

Let \(\mathrm{Z}(\mathrm{s})\) = \(\mathrm{p}(\mathrm{y}|\mathrm{m}(\mathrm{s}))\) be the marginal density of model \(m(s)\), so that \(\mathrm{Z}\left(1\right)=\mathrm{p}\left(\mathrm{y}|\mathrm{m}\left(1\right)\right)\), \(\mathrm{Z}\left(0\right)=\mathrm{p}(\mathrm{y}|\mathrm{m}(0))\), and let \(\Gamma =(\beta ,\rho ,\Delta ,\delta )\) denote the parameter vector. Using Bayes' formula, the posterior distribution of \(\Gamma\) given (\(y,m(s))\) is

$$\mathrm{p}\left[\Gamma |y,m\left(s\right)\right]=\frac{\mathrm{p}\left[\mathrm{y},\Gamma |m\left(s\right)\right]}{\mathrm{p}\left(\mathrm{y}|\mathrm{m}\left(\mathrm{s}\right)\right)}.$$

Taking the logarithm of both sides of the above equation, we have

$$\mathrm{log}\,\mathrm{p}[\Gamma |y,m(s)]=\mathrm{log}\,\mathrm{p}[\mathrm{y},\Gamma |m(s)]-\mathrm{log}[\mathrm{Z}(\mathrm{s})]$$

Following Gelman and Meng (1998), the logarithm of the Bayes factor (BF) is obtained as

$$\mathrm{log}\left[\mathrm{BF}\right]=\mathrm{log}\left[\frac{\mathrm{Z}\left(1\right)}{\mathrm{Z}\left(0\right)}\right]={\int }_{0}^{1}{E}_{\Gamma |y,m\left(s\right)}\left[R\left(\Gamma ,s\right)\right] ds$$

where \(R\left(\Gamma ,s\right)=\frac{d}{ds}\mathrm{log}[\mathrm{p}\left(\mathrm{y}\right|\Gamma ,\mathrm{m}\left(\mathrm{s}\right))].\)

The above integral can be estimated via the trapezoid rule by defining the grid \({s}_{0}=0<{s}_{1}<{s}_{2}<\dots <{s}_{G}<{s}_{G+1}=1\) as follows

$$\mathrm{log}\left[\mathrm{BF}\right]=0.5\sum_{j=0}^{G}[{\overline{R} }_{j+1}+{\overline{R} }_{j}][{s}_{j+1}-{s}_{j}]$$

where \({\overline{R} }_{j}=\sum_{t=1}^{T}R({\Gamma }^{\left(t\right)},{s}_{j})/T\) is an average over \(T\) iterations of an MCMC chain of parameters \({\Gamma }^{\left(t\right)}\) sampled from \(\mathrm{p}\left(\Gamma |\mathrm{y},\mathrm{m}\left({s}_{j}\right)\right)\).

In our model

$$R\left(\Gamma ,s\right)=\frac{d}{ds}\mathrm{log}\left[\mathrm{p}\left(\mathrm{y}\right|\Gamma ,\mathrm{m}\left(\mathrm{s}\right))\right]=\sum_{t=1}^{n}\tau \left(s\right)\left[{y}_{t}-{s{\left(\rho +k\Delta \right)y}_{t-1}+x}_{t}^{^{\prime}}\beta \right][-{\Delta y}_{t-1}]$$
(32)

where the variance of the error term in model \(m(s)\) is \({\tau \left(s\right)}^{-1}={\{1+s(k+\left(1-k\right){\delta }^{-\frac{1}{2}})\}}^{2}{\uptau }^{-1}\).
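Once the MCMC averages \({\overline{R} }_{j}\) have been obtained by running the sampler at each grid point \(s_j\) (not shown here), the trapezoid estimate of the log Bayes factor is a one-line computation, as sketched below.

```python
import numpy as np

def path_sampling_log_bf(R_bar, s_grid):
    """Trapezoid-rule estimate of log[Z(1)/Z(0)] from the MCMC averages
    R_bar_j computed at the grid points 0 = s_0 < ... < s_{G+1} = 1."""
    R_bar, s_grid = np.asarray(R_bar), np.asarray(s_grid)
    return 0.5 * np.sum((R_bar[1:] + R_bar[:-1]) * np.diff(s_grid))

# an equivalent one-liner: np.trapz(R_bar, s_grid)
```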

Data and Results

To assess the validity of the proposed model, it is tested on a real data set for the Indian corporate sector. The data used here have been sourced from the publicly available aggregate-level balance sheet data (annual financial variable information) of Non-Government Non-Financial Public Limited Companies published by the Reserve Bank of India on its website. The annual financial variables comprise the profit margin, debt to equity ratio (Leverage), sales growth, gross fixed asset (GFA) growth, and the ratio of internal sources to total sources of funds (Internal Fund Sources). In the Indian scenario, sales growth, GFA growth, and internal sources of funds generally affect corporate profits positively, whereas debt is negatively related to the profit margin of a company. Here, the gross profit to sales ratio is defined as the profit margin. The debt to equity ratio is defined as the ratio of long-term debt to total equity. Gross fixed assets are the investments made by companies in the corresponding years in land, building, plant and machinery, etc. The profit margin is the endogenous variable, whereas all other variables are exogenous. Fig. 1 shows the movement of the financial variables, viz. profit margin, leverage, sales growth, GFA growth, and internal sources of funds, during 1992–2014. Fig. 2 and Table 1(A) and (B) portray the posterior odds ratios at different time points. It is evident from Table 1(A) and (B) that the global financial crisis (GFC) of 2007–2008 led to a strong structural break in the profits of Indian companies during the year 2008–09, as the posterior odds ratio is greater than one for all probabilities (epsilon). Indian companies suffered losses, evidenced by low profit margins during this period. The GFC period is a significant period in which the model experienced a structural shift in the disturbances precision as against the autoregressive parameter. Moreover, the year 1997–98 also witnessed a strong parametric shift in the model due to the East Asian crisis, leading to lower sales growth and subdued profit margins.

Fig. 1 Movement of financial variables

Fig. 2 Posterior odds ratio

Table 1 Posterior odds ratio

Conclusions and Findings

In this paper, a Bayesian approach to capturing structural shift points in the dynamic model is developed under the assumption that a shift occurs either in the disturbances precision or in the autoregressive parameter, each with a known probability. The marginal posterior densities and Bayes estimators of the various model parameters have been worked out under an appropriate mixture of prior distributions. A posterior odds ratio based test procedure, which compares the two competing models on the basis of their model-fitting performance, has been developed, and an approximation to it has been presented using the path sampling approach. A real data set on Indian companies spanning 1992 to 2014, comprising financial variables based on the companies' balance sheets and profit and loss accounts, has been utilized to validate the proposed model and its performance in capturing structural break points empirically. The posterior odds ratio test clearly indicates that the model detects two major significant structural shift points, namely 1997–98 and 2008–09. The structural shifts in 1997–98 and 2008–09 are attributable to the East Asian crisis and the global financial crisis (GFC), respectively. Both crises adversely affected the profit margins of Indian firms in the corresponding periods.

The paper considers the Bayesian approach for analyzing a model involving a single known break point. In several applications the break point is unknown. The analysis of threshold effects and tipping points in a model with an unknown break point using the Bayesian approach is in progress. The work done in this paper can also be extended to the cases where the model has multiple break points or a gradual structural shift over a known period.

References

  • Banerjee A, Marcellino M (2009) Factor augmented error correction models. In: Castle JL, Shephard N (eds) The methodology and practice of econometrics-a Festschrift for David Hendry. Oxford University Press, pp 227–254

  • Breitung J, Eickmeier S (2011) Testing for structural breaks in dynamic factor models. J Econom 163:71–84

  • Broemeling LD (1972) Bayesian procedures for detecting a change in a sequence of random variables. Metron 30:1–14

  • Broemeling LD, Tsurumi H (1987) Econometrics and structural change. Marcel Dekker, New York

  • Chaturvedi A, Shrivastava A (2016) Bayesian analysis of a linear model involving structural changes in either regression parameters or disturbances precision. Commun Stat Theory Methods 45:307–320

  • Choy JHC, Broemeling LD (1980) Some Bayesian inferences for a changing linear model. Technometrics 22:71–78

  • Choy JHC (1977) A Bayesian analysis of a changing linear model. Doctoral dissertation, Oklahoma State University, Stillwater, Oklahoma

  • Dadashova B, Arena B, José J, Aparicio F (2014) Bayesian model selection of structural explanatory models: application to road accident data. Proc Soc Behav Sci 160:55–63

  • del Negro M, Otrok C (2008) Dynamic factor models with time-varying parameters: measuring changes in international business cycles. Federal Reserve Bank of New York Staff Report No. 326, May 2008

  • Friel N, Pettitt AQN (2008) Marginal likelihood estimation via power posteriors. J R Stat Soc B 70:589–607

  • Gelman A, Meng X (1998) Simulating normalizing constants: from importance sampling to bridge sampling to path sampling. Stat Sci 13:163–185

  • Han C, Carlin BP (2001) Markov Chain Monte Carlo methods for computing Bayes factors: a comparative review. J Am Stat Assoc 96:1122–1132

  • Hansen BE (2001) The new econometrics of structural change: dating breaks in U.S. labor productivity. J Econ Perspect 15:117–128

  • Holbert D, Broemeling LD (1977) Bayesian inference related to shifting sequences and two-phase regression. Commun Stat Theory Methods 6:265–275

  • Holbert D (1973) A Bayesian analysis of shifting sequences with applications to two phase regression. Doctoral dissertation, Oklahoma State University, Stillwater, Oklahoma

  • Inclan C, Tiao GC (1994) Use of cumulative sums of squares for retrospective detection of changes in variance. J Am Stat Assoc 89:913–923

  • Kim J, Morley J, Piger J (2004) A Bayesian approach to counterfactual analysis of structural change. Working Paper 2004-014D, July 2004, revised June 2006, Federal Reserve Bank of St. Louis. http://research.stlouisfed.org/wp/2004/2004-014.pdf

  • Ng VM (1990) Bayesian analysis of linear models exhibiting changes in mean and precision at an unknown time point. Commun Stat Theory Methods 19:111–120

  • Poirier DH (1976) The econometrics of structural change. North-Holland Publishing Co., Amsterdam

  • Quandt RE (1972) A new approach to estimating switching regressions. J Am Stat Assoc 67:306–310

  • Quandt RE (1974) A comparison of methods for testing non-nested hypotheses. Rev Econ Stat 56:92–99

  • Salazar D, Broemeling L, Chi A (1981) Parameter changes in a regression model with auto correlated errors. Commun Stat Theory Methods 10:1751–1758

  • Salazar D (1980) The analysis of structural changes in time series and multivariate linear models. Doctoral dissertation. Oklahoma State University, Stillwater, Oklahoma

  • Slama A, Saggou H (2017) A Bayesian analysis of a change in the parameters of autoregressive time series. Commun Stat Comput Simul 46:7008–7021

  • Smith AFM (1975) A Bayesian approach to inference about a change point in a sequence of random variables. Biometrika 62:407–416

  • Smith AFM (1980) Change-point problems: approaches and applications. In: Bernardo JM (ed) Bayesian statistics. University Press, Valencia, pp 83–98

  • Smith AFM (1977) A Bayesian analysis of sometime-varying models. In: Barra JR et al (eds) Recent developments in statistics. North-Holland Publishing Company, Amsterdam, pp 257–267

  • Stock JH, Watson MW (2002) Macroeconomic forecasting using diffusion indexes. J Bus Econ Stat 20:147–162

  • Tsurumi H (1977) A Bayesian test of a parameter shift with an application. J Econom 6:371–380

  • Tsurumi H (1978) A Bayesian test of a parameter shift in a simultaneous equation with an application to a macro savings function. Econ Stud Q 24:216–230

  • Tsurumi H (1980) A Bayesian estimation of structural shifts by gradual switching regression with an application to the US gasoline market. In: Zellner A (ed) Bayesian analysis in econometrics and statistics: essays in honor of Harold Jeffreys. North-Holland Publishing Company, Amsterdam, pp 213–240

  • Wang J, Zivot E (2000) A Bayesian time series model of multiple structural changes in level, trend and variance. J Bus Econ Stat 18:374–386

  • Yu-Ming CA (1979) The Bayesian analysis of structural change in linear models. Doctoral dissertation, Oklahoma State University, Stillwater, Oklahoma

  • Zellner A (1971) An introduction to Bayesian inference in econometrics. Wiley, New York

Funding

The research is not supported by any research grant from funding agencies.

Author information

Corresponding author

Correspondence to Anoop Chaturvedi.

Ethics declarations

Conflict of interest

The authors declare that there is no potential conflict of interest.


Appendices

Appendix 1

Derivation of conditional posterior distribution of \({\varvec{\uprho}}\) given \(\left({\varvec{\upbeta}},{\varvec{\upvartheta}},{\varvec{\updelta}}\right)\):

Let us write

$${\mathfrak{z}}_{t}\left(\beta \right)={y}_{t}-{x}_{t}^{^{\prime}}\beta .$$

Further, we define

$${\widehat{\rho }}^{*}\equiv {\widehat{\rho }}^{*}\left(\beta ,\vartheta \right)=\frac{{B}_{1}}{{A}_{1}},$$
$${A}_{1}\equiv {A}_{1}(\beta )=\sum_{t=1}^{n}{\mathfrak{z}}_{t-1}^{2}\left(\beta \right)$$
$${B}_{1}\equiv {B}_{1}(\beta ,\vartheta )=\sum_{t=1}^{{n}_{1}}{\mathfrak{z}}_{t}\left(\beta \right){\mathfrak{z}}_{t-1}\left(\beta \right)+\sum_{t={n}_{1}+1}^{n}\left({\mathfrak{z}}_{t}\left(\beta \right)-\vartheta {\mathfrak{z}}_{t-1}\left(\beta \right)\right){\mathfrak{z}}_{t-1}\left(\beta \right)$$
$${\phi }_{1}\left(\beta ,\vartheta \right)=\sum_{t=1}^{{n}_{1}}{{\mathfrak{z}}_{t}}^{2}\left(\beta \right)+\sum_{t={n}_{1}+1}^{n}{\left({\mathfrak{z}}_{t}\left(\beta \right)-\vartheta {\mathfrak{z}}_{t-1}\left(\beta \right)\right)}^{2}-\frac{{B}_{1}^{2}}{{A}_{1}}$$
$$\widehat{\rho }\equiv \widehat{\rho }\left(\beta ,\delta \right)=\frac{{B}_{2}}{{A}_{2}},$$
$${A}_{2}\equiv {A}_{2}\left(\beta ,\delta \right)=\sum_{t=1}^{{n}_{1}}{\mathfrak{z}}_{t-1}^{2} \left(\beta \right)+\delta \sum_{t={n}_{1}+1}^{n}{\mathfrak{z}}_{t-1}^{2} \left(\beta \right),$$
$${B}_{2}\equiv {B}_{2}\left(\beta ,\delta \right)=\sum_{t=1}^{{n}_{1}}{\mathfrak{z}}_{t}\left(\beta \right){\mathfrak{z}}_{t-1}\left(\beta \right)+\delta \sum_{t={n}_{1}+1}^{n}{\mathfrak{z}}_{t}\left(\beta \right){\mathfrak{z}}_{t-1}\left(\beta \right)$$
$${\phi }_{2}\left(\beta ,\delta \right)=\sum_{t=1}^{{n}_{1}}{{\mathfrak{z}}_{t}}^{2}\left(\beta \right)+\delta \sum_{t={n}_{1}+1}^{n}{{\mathfrak{z}}_{t}}^{2}\left(\beta \right)-\frac{{B}_{2}^{2}}{{A}_{2}},$$

The likelihood function (11) can be written as

$$p\left(y|X,\beta ,\tau ,\delta ,\rho ,\vartheta \right)$$
$$=\left(1-\epsilon \right){\left(\frac{\tau }{2\pi }\right)}^\frac{n}{2}exp\left[-\frac{\tau }{2}\left\{\sum_{t=1}^{{n}_{1}}{\left\{{\mathfrak{z}}_{t}\left(\beta \right)-\rho {\mathfrak{z}}_{t-1}\left(\beta \right)\right\}}^{2}+\sum_{t={n}_{1}+1}^{n}{\left\{{\mathfrak{z}}_{t}\left(\beta \right)-\left(\rho +\vartheta \right){\mathfrak{z}}_{t-1}\left(\beta \right)\right\}}^{2}\right\}\right]$$
$$+\epsilon {\delta }^{\frac{{n}_{2}}{2}}{\left(\frac{\tau }{2\pi }\right)}^\frac{n}{2}exp\left[-\frac{\tau }{2}\left\{\sum_{t=1}^{{n}_{1}}{\left\{{\mathfrak{z}}_{t}\left(\beta \right)-\rho {\mathfrak{z}}_{t-1}\left(\beta \right)\right\}}^{2}+\delta \sum_{t={n}_{1}+1}^{n}{\left\{{\mathfrak{z}}_{t}\left(\beta \right)-\rho {\mathfrak{z}}_{t-1}\left(\beta \right)\right\}}^{2}\right\}\right]$$
(33)

Notice that the first part of the likelihood function, with weight \(\left(1-\epsilon \right)\), gives the likelihood under model \({M}_{1}\), whereas the second part, with weight \(\epsilon\), gives the likelihood function under model \({M}_{2}\). For obtaining the conditional posterior of \(\uprho\) given \(\left(\upbeta ,\updelta ,\mathrm{\vartheta }\right)\), combining the likelihood under model \({M}_{1}\) with the prior distributions \(p\left(\rho |\vartheta \right), p\left(\tau \right)\), and integrating with respect to \(\tau\), we obtain

$${\pi }_{1}\left(\rho |\beta ,\delta ,\vartheta \right)$$
$$\propto \frac{1}{1-\vartheta }{\int }_{0}^{\infty }{\tau }^{\frac{n}{2}-1}exp\left[-\frac{\tau }{2}\left\{\sum_{t=1}^{{n}_{1}}{\left\{{\mathfrak{z}}_{t}\left(\beta \right)-\rho {\mathfrak{z}}_{t-1}\left(\beta \right)\right\}}^{2}+\sum_{t={n}_{1}+1}^{n}{\left\{{\mathfrak{z}}_{t}\left(\beta \right)-\left(\rho +\vartheta \right){\mathfrak{z}}_{t-1}\left(\beta \right)\right\}}^{2}\right\}\right]d\tau$$

We observe that

$$\sum_{t=1}^{{n}_{1}}{\left\{{\mathfrak{z}}_{t}\left(\beta \right)-\rho {\mathfrak{z}}_{t-1}\left(\beta \right)\right\}}^{2}+\sum_{t={n}_{1}+1}^{n}{\left\{{\mathfrak{z}}_{t}\left(\beta \right)-\left(\rho +\vartheta \right){\mathfrak{z}}_{t-1}\left(\beta \right)\right\}}^{2}$$
$$={\rho }^{2}\sum_{t=1}^{n}{\mathfrak{z}}_{t-1}{\left(\beta \right)}^{2}-2\rho \left\{\sum_{t=1}^{{n}_{1}}{\mathfrak{z}}_{t}\left(\beta \right){\mathfrak{z}}_{t-1}\left(\beta \right)+\sum_{t={n}_{1}+1}^{n}\left({\mathfrak{z}}_{t}\left(\beta \right)-\vartheta {\mathfrak{z}}_{t-1}\left(\beta \right)\right){\mathfrak{z}}_{t-1}\left(\beta \right)\right\}+\sum_{t=1}^{{n}_{1}}{\mathfrak{z}}_{t}{\left(\beta \right)}^{2}+\sum_{t={n}_{1}+1}^{n}{\left({\mathfrak{z}}_{t}\left(\beta \right)-\vartheta {\mathfrak{z}}_{t-1}\left(\beta \right)\right)}^{2}$$
$$={\left(\rho -{\widehat{\rho }}^{*}\right)}^{2}{A}_{1}+{\phi }_{1}(\beta ,\vartheta )$$

Hence, we obtain

$${\pi }_{1}\left(\rho |\beta ,\vartheta \right)$$
$$\propto \frac{1}{1-\vartheta }{\int }_{0}^{\infty }{\tau }^{\frac{n}{2}-1}exp\left[-\frac{\tau }{2}\left\{{\left(\rho -{\widehat{\rho }}^{*}\right)}^{2}{A}_{1}+{\phi }_{1}\left(\beta ,\vartheta \right)\right\}\right]d\tau$$
$$\propto \frac{1}{1-\vartheta }\frac{1}{{\left\{{\left(\rho -{\widehat{\rho }}^{*}\right)}^{2}{A}_{1}+{\phi }_{1}\left(\beta ,\vartheta \right)\right\}}^\frac{n}{2}}.$$

Therefore

$${\pi }_{1}\left(\rho |\beta ,\delta ,\vartheta \right)={C}_{1\rho }^{-1}\frac{1}{1-\vartheta }\frac{1}{{\left\{{\left(\rho -{\widehat{\rho }}^{*}\right)}^{2}{A}_{1}+{\phi }_{1}\left(\beta ,\vartheta \right)\right\}}^\frac{n}{2}},$$
(34)

where

$${C}_{1\rho }=\frac{1}{1-\vartheta }{\int }_{0}^{1-\vartheta }\frac{1}{{\left\{{\left(\rho -{\widehat{\rho }}^{*}\right)}^{2}{A}_{1}+{\phi }_{1}\left(\beta ,\vartheta \right)\right\}}^\frac{n}{2}}d\rho$$
$$=\frac{\mathrm{\rm B}\left(\frac{1}{2},\frac{n-1}{2}\right)}{\left(1-\vartheta \right){\phi }_{1}{\left(\beta ,\vartheta \right)}^{\frac{n-1}{2}}{A}_{1}^\frac{1}{2}}{\int }_{-{\widehat{\rho }}^{*}\sqrt{\frac{\left(n-1\right){A}_{1}}{{\phi }_{1}\left(\beta ,\vartheta \right)}}}^{\left(1-\vartheta -{\widehat{\rho }}^{*}\right)\sqrt{\frac{\left(n-1\right){A}_{1}}{{\phi }_{1}\left(\beta ,\vartheta \right)}}}{f}_{n-1}\left(t\right)dt$$
$$=\frac{\mathrm{\rm B}\left(\frac{1}{2},\frac{n-1}{2}\right)}{\left(1-\vartheta \right){\phi }_{1}{\left(\beta ,\vartheta \right)}^{\frac{n-1}{2}}{A}_{1}^\frac{1}{2}}\left[{F}_{n-1}\left(\left(1-\vartheta -{\widehat{\rho }}^{*}\right)\sqrt{\frac{\left(n-1\right){A}_{1}}{{\phi }_{1}\left(\beta ,\vartheta \right)}}\right)+{F}_{n-1}\left({\widehat{\rho }}^{*}\sqrt{\frac{\left(n-1\right){A}_{1}}{{\phi }_{1}\left(\beta ,\vartheta \right)}}\right)-1\right],$$
(35)

where \({f}_{n-1}\left(t\right)\) and \({F}_{n-1}(t)\) denote, respectively, the pdf and cdf of t-distribution with (n − 1) degrees of freedom.

Further, we have

$$\sum_{t=1}^{{n}_{1}}{\left\{{\mathfrak{z}}_{t}\left(\beta \right)-\rho {\mathfrak{z}}_{t-1}\left(\beta \right)\right\}}^{2}+\delta \sum_{t={n}_{1}+1}^{n}{\left\{{\mathfrak{z}}_{t}\left(\beta \right)-\rho {\mathfrak{z}}_{t-1}\left(\beta \right)\right\}}^{2}={A}_{2}{\left(\rho -\widehat{\rho }\right)}^{2}+{\phi }_{2}\left(\beta ,\delta \right)$$

Hence, under model \({M}_{2}\) the posterior density of \(\rho\) given \((\beta ,\delta )\) is

$${\pi }_{2}\left(\rho |\beta ,\delta \right)\propto {\int }_{0}^{\infty }{\tau }^{\frac{n}{2}-1}exp\left[-\frac{\tau }{2}\left\{{\left(\rho -\widehat{\rho }\right)}^{2}{A}_{2}+{\phi }_{2}\left(\beta ,\delta \right)\right\}\right]d\tau$$
$$\propto \frac{1}{{\left\{{\left(\rho -\widehat{\rho }\right)}^{2}{A}_{2}+{\phi }_{2}\left(\beta ,\delta \right)\right\}}^\frac{n}{2}} ,$$

so that,

$${\pi }_{2}\left(\rho |\beta ,\delta \right)={\mathrm{C}}_{2\uprho }^{-1}\frac{1}{{\left\{{\left(\rho -\widehat{\rho }\right)}^{2}{A}_{2}+{\phi }_{2}\left(\beta ,\delta \right)\right\}}^\frac{n}{2}}$$
(36)

with

$${C}_{2\rho }={\int }_{0}^{1}\frac{1}{{\left\{{\left(\rho -\widehat{\rho }\right)}^{2}{A}_{2}+{\phi }_{2}\left(\beta ,\delta \right)\right\}}^\frac{n}{2}}d\rho$$
$$=\frac{\mathrm{\rm B}\left(\frac{1}{2},\frac{n-1}{2}\right)}{{\phi }_{2}{\left(\beta ,\delta \right)}^{\frac{n-1}{2}}{A}_{2}^\frac{1}{2}}{\int }_{-\widehat{\rho }\sqrt{\frac{\left(n-1\right){A}_{2}}{{\phi }_{2}\left(\beta ,\delta \right)}}}^{\left(1-\widehat{\rho }\right)\sqrt{\frac{\left(n-1\right){A}_{2}}{{\phi }_{2}\left(\beta ,\delta \right)}}}{f}_{n-1}\left(t\right)dt$$
$$=\frac{\mathrm{\rm B}\left(\frac{1}{2},\frac{n-1}{2}\right)}{{\phi }_{2}{\left(\beta ,\delta \right)}^{\frac{n-1}{2}}{A}_{2}^\frac{1}{2}}\left[{F}_{n-1}\left(\left(1-\widehat{\rho }\right)\sqrt{\frac{\left(n-1\right){A}_{2}}{{\phi }_{2}\left(\beta ,\delta \right)}}\right)+{F}_{n-1}\left(\widehat{\rho }\sqrt{\frac{\left(n-1\right){A}_{2}}{{\phi }_{2}\left(\beta ,\delta \right)}}\right)-1\right].$$
(37)

Further

$${\lambda }_{\rho }\left(y\right)=\frac{\left(1-\epsilon \right){m}_{1\rho }\left(y\right)}{\left(1-\epsilon \right){m}_{1\rho }\left(y\right)+\epsilon {m}_{2\rho }\left(y\right)},$$

where

$${m}_{1\rho }\left(y\right)=\frac{1}{1-\vartheta }{\int }_{0}^{1-\vartheta }{\int }_{0}^{\infty }\frac{{\tau }^{\frac{n}{2}-1}}{{\left(2\pi \right)}^\frac{n}{2}}exp\left[-\frac{\tau }{2}\left\{{\left(\rho -{\widehat{\rho }}^{*}\right)}^{2}{A}_{1}+{\phi }_{1}\left(\beta ,\vartheta \right)\right\}\right]d\tau d\rho =\frac{\Gamma \left(\frac{n}{2}\right)}{{\pi }^\frac{n}{2}}{C}_{1\rho }$$
$${m}_{2\rho }\left(y\right)={\int }_{0}^{1}{\int }_{0}^{\infty }\frac{{\tau }^{\frac{n}{2}-1}}{{\left(2\pi \right)}^\frac{n}{2}}\mathrm{exp}[-\frac{\tau }{2}\left\{{\left(\rho -\widehat{\rho }\right)}^{2}{A}_{2}+{\phi }_{2}\left(\beta ,\delta \right)\right\}]d\tau d\rho =\frac{\Gamma \left(\frac{n}{2}\right)}{{\pi }^\frac{n}{2}}{C}_{2\rho }.$$

Thus

$${\lambda }_{\rho }\left(y\right)=\frac{\left(1-\epsilon \right){C}_{1\rho }}{\left(1-\epsilon \right){C}_{1\rho }+\epsilon {C}_{2\rho }}$$
(38)

Then the posterior density of \(\uprho\) given \(\left(\upbeta ,\mathrm{\vartheta },\updelta \right)\) is

$${\pi }^{*}\left(\rho |\beta ,\vartheta ,\delta \right)={\lambda }_{\rho }\left(y\right){\pi }_{1}\left(\rho |\beta ,\vartheta \right)+\left(1-{\lambda }_{\rho }\left(y\right)\right){\pi }_{2}\left(\rho |\beta ,\delta \right)$$
(39)

Derivation of conditional posterior density of \({\varvec{\beta}}\) given \(\left({\varvec{\uprho}},{\varvec{\upvartheta}},{\varvec{\updelta}}\right)\):

For deriving the conditional posterior density of \(\beta\) given \(\left(\uprho ,\mathrm{\vartheta },\updelta \right)\), we define

$${\mathcal{y}}_{t}\left(\rho \right)={y}_{t}-\rho {y}_{t-1};t=1,\dots ,n$$
$${\mathcal{y}}_{t}\left(\rho +\vartheta \right)={y}_{t}-\left(\rho +\vartheta \right){y}_{t-1};t={n}_{1}+1,\dots ,n; \left(\mathrm{under model }{\mathrm{M}}_{1}\right)$$
$${\mathcal{x}}_{t}\left(\rho \right)={x}_{t}-\rho {x}_{t-1};t=1,\dots ,{n}_{1}$$
$${\mathcal{x}}_{t}\left(\rho +\vartheta \right)={x}_{t}-\left(\rho +\vartheta \right){x}_{t-1};t={n}_{1}+1,\dots ,n; \left(\mathrm{under model }{\mathrm{M}}_{1}\right)$$
$${A}_{3}\left(\rho ,\vartheta \right)\equiv {A}_{3}=\left(\sum_{t=1}^{{n}_{1}}{\mathcal{x}}_{t}\left(\rho \right){\mathcal{x}}_{t}{\left(\rho \right)}^{\mathrm{^{\prime}}}+\sum_{t={n}_{1}+1}^{n}{\mathcal{x}}_{t}\left(\rho +\vartheta \right){\mathcal{x}}_{t}{\left(\rho +\vartheta \right)}^{\mathrm{^{\prime}}}\right)$$
$${A}_{4}\left(\rho ,\delta \right)\equiv {A}_{4}=\left(\sum_{t=1}^{{n}_{1}}{\mathcal{x}}_{t}\left(\rho \right){\mathcal{x}}_{t}{\left(\rho \right)}^{\mathrm{^{\prime}}}+\delta \sum_{t={n}_{1}+1}^{n}{\mathcal{x}}_{t}\left(\rho \right){\mathcal{x}}_{t}{\left(\rho \right)}^{\mathrm{^{\prime}}}\right)$$
$${\mathcal{w}}_{3}\left(\rho ,\vartheta \right)=\left(\sum_{t=1}^{{n}_{1}}{\mathcal{x}}_{t}\left(\rho \right){\mathcal{y}}_{t}\left(\rho \right)+\sum_{t={n}_{1}+1}^{n}{\mathcal{x}}_{t}\left(\rho +\vartheta \right){\mathcal{y}}_{t}\left(\rho +\vartheta \right)\right)$$
$${\mathcal{w}}_{4}\left(\rho ,\delta \right)=\left(\sum_{t=1}^{{n}_{1}}{\mathcal{x}}_{t}\left(\rho \right){\mathcal{y}}_{t}\left(\rho \right)+\delta \sum_{t={n}_{1}+1}^{n}{\mathcal{x}}_{t}\left(\rho \right){\mathcal{y}}_{t}\left(\rho \right)\right)$$
$$\widehat{\beta }\left(\rho ,\vartheta \right)={\left({A}_{3}+V\right)}^{-1}\left({\mathcal{w}}_{3}\left(\rho ,\vartheta \right)+V{\beta }_{0}\right)$$
$$\widehat{\beta }\left(\rho ,\delta \right)={\left({A}_{4}+V\right)}^{-1}\left({\mathcal{w}}_{4}\left(\rho ,\delta \right)+V{\beta }_{0}\right)$$
$${\phi }_{3}\left(\rho ,\vartheta \right)=\sum_{t=1}^{{n}_{1}}{{\mathcal{y}}_{t}\left(\rho \right)}^{2}+\sum_{t={n}_{1}+1}^{n}{{\mathcal{y}}_{t}\left(\rho +\vartheta \right)}^{2}+{\beta }_{0}^{^{\prime}}V{\beta }_{0}-\widehat{\beta }{\left(\rho ,\vartheta \right)}^{^{\prime}}\left({A}_{3}+V\right)\widehat{\beta }\left(\rho ,\vartheta \right)$$
$${\phi }_{4}\left(\rho ,\delta \right)=\sum_{t=1}^{{n}_{1}}{{\mathcal{y}}_{t}\left(\rho \right)}^{2}+\delta \sum_{t={n}_{1}+1}^{n}{{\mathcal{y}}_{t}\left(\rho \right)}^{2}+{\beta }_{0}^{^{\prime}}V{\beta }_{0}-\widehat{\beta }{\left(\rho ,\delta \right)}^{^{\prime}}\left({A}_{4}+V\right)\widehat{\beta }\left(\rho ,\delta \right)$$
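For reference, the sketch below assembles \(A_3(\rho,\vartheta)\), \(\widehat{\beta}(\rho,\vartheta)\) and \(\phi_3(\rho,\vartheta)\) from the data under model \(M_1\); the indexing convention (the series includes the presample value \(y_0\)) is an assumption made for the illustration, and the model \(M_2\) analogues with \(\delta\) follow in the same way.

```python
import numpy as np

def m1_quantities(y, X, rho, vartheta, n1, V, beta0):
    """Compute A_3, beta_hat(rho, vartheta) and phi_3(rho, vartheta) under model M1.
    y has length n+1 (including y_0); X is the (n+1) x k matrix with rows x_t'."""
    yq = y[1:] - rho * y[:-1]                           # y_t(rho), t = 1,...,n
    xq = X[1:] - rho * X[:-1]                           # x_t(rho)
    # after the break point the quasi-differencing uses rho + vartheta
    yq[n1:] = y[n1 + 1:] - (rho + vartheta) * y[n1:-1]
    xq[n1:] = X[n1 + 1:] - (rho + vartheta) * X[n1:-1]
    A3 = xq.T @ xq
    w3 = xq.T @ yq
    beta_hat = np.linalg.solve(A3 + V, w3 + V @ beta0)
    phi3 = yq @ yq + beta0 @ V @ beta0 - beta_hat @ (A3 + V) @ beta_hat
    return A3, beta_hat, phi3
```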

Then, under the model \({M}_{1}\), combining the likelihood with the prior distributions of \((\beta ,\tau )\), gives the posterior distribution of \(\beta\) given (\(\rho ,\vartheta )\) as

$${\pi }_{1}\left(\beta |\rho ,\vartheta \right)$$
$$={C}_{1\beta }^{-1}\frac{1}{{\left(2\pi \right)}^\frac{k}{2}}{\int }_{0}^{\infty }{\tau }^{\frac{n+k}{2}-1}exp\left[-\frac{\tau }{2}\left\{{\phi }_{3}\left(\rho ,\vartheta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)}^{^{\prime}}\left({A}_{3}+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)\right\}\right]d\tau$$
$$={C}_{1\beta }^{-1}\frac{{2}^{\frac{\mathrm{n}}{2}}\Gamma \left(\frac{n+k}{2}\right)}{{{\pi }^\frac{k}{2}\left\{{\phi }_{3}\left(\rho ,\vartheta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)}^{^{\prime}}\left({A}_{3}+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)\right\}}^{\frac{n+k}{2}}}$$
(40)

where

$${C}_{1\beta }\equiv {C}_{1\beta } (\rho ,\vartheta )$$
$$={\int }_{0}^{\infty }\frac{1}{{\left(2\pi \right)}^\frac{k}{2}}{\int }_{{R}^{k}}{\tau }^{\frac{n+k}{2}-1}exp\left[-\frac{\tau }{2}\left\{{\phi }_{3}\left(\rho ,\vartheta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)}^{^{\prime}}\left({A}_{3}+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)\right\}\right]d\beta d\tau$$
$$=\frac{{2}^{\frac{\mathrm{n}}{2}}\Gamma \left(\frac{n}{2}\right)}{{\left|{A}_{3}+V\right|}^\frac{1}{2}{{\phi }_{3}\left(\rho ,\vartheta \right)}^\frac{n}{2}}$$
(41)

Further, under model \({M}_{2}\), the posterior distribution of \(\beta\) given (\(\rho ,\delta )\) is obtained as

$${\pi }_{2}\left(\beta |\rho ,\delta \right)={C}_{2\beta }^{-1}\frac{{2}^{\frac{\mathrm{n}}{2}}\Gamma \left(\frac{n+k}{2}\right)}{{{\pi }^\frac{k}{2}\left\{{\phi }_{4}\left(\rho ,\delta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)}^{\mathrm{^{\prime}}}\left({A}_{4}+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)\right\}}^{\frac{n+k}{2}}}$$
(42)

where

$${C}_{2\beta }\equiv {C}_{2\beta } (\rho ,\delta )$$
$$={\int }_{0}^{\infty }\frac{1}{{\left(2\pi \right)}^\frac{k}{2}}{\int }_{{R}^{k}}{\tau }^{\frac{n+k}{2}-1}exp\left[-\frac{\tau }{2}\left\{{\phi }_{4}\left(\rho ,\delta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)}^{^{\prime}}\left({A}_{4}+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)\right\}\right]d\beta d\tau$$
$$=\frac{{2}^{\frac{\mathrm{n}}{2}}\Gamma \left(\frac{n}{2}\right)}{{\left|{A}_{4}+V\right|}^\frac{1}{2}{{\phi }_{4}\left(\rho ,\delta \right)}^\frac{n}{2}}$$
(43)

Then the posterior density of \(\beta\) given \(\left(\rho ,\vartheta ,\delta \right)\) is

$${\pi }^{*}\left(\beta |\rho ,\vartheta ,\delta \right)={\lambda }_{\beta }\left(y\right){\pi }_{1}\left(\beta |\rho ,\vartheta \right)+\left(1-{\lambda }_{\beta }\left(y\right)\right){\pi }_{2}\left(\beta |\rho ,\delta \right)$$
(44)

where

$${\lambda }_{\beta }\left(y\right)\equiv {\lambda }_{\beta }\left(y|\rho ,\vartheta ,\delta \right)=\frac{\left(1-\epsilon \right){C}_{1\beta }}{\left(1-\epsilon \right){C}_{1\beta }+\epsilon {C}_{2\beta }}.$$
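As a numerical companion to (40)–(44), the sketch below evaluates the normalising constants \({C}_{1\beta }\) and \({C}_{2\beta }\) from (41) and (43) on the log scale and forms the mixing weight \({\lambda }_{\beta }\left(y\right)\); here \(\epsilon\) is the prior mixing probability introduced earlier, and the function names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.special import gammaln

def log_C_beta(n, phi, A, V):
    """log of C in (41)/(43): C = 2^{n/2} Gamma(n/2) / (|A + V|^{1/2} phi^{n/2})."""
    _, logdet = np.linalg.slogdet(A + V)
    return (n / 2) * np.log(2.0) + gammaln(n / 2) - 0.5 * logdet - (n / 2) * np.log(phi)

def lambda_beta(n, phi3, A3, phi4, A4, V, eps):
    """Mixing weight lambda_beta(y) = (1-eps) C_1beta / ((1-eps) C_1beta + eps C_2beta)."""
    l1 = np.log1p(-eps) + log_C_beta(n, phi3, A3, V)
    l2 = np.log(eps) + log_C_beta(n, phi4, A4, V)
    m = max(l1, l2)                      # guard against overflow
    return np.exp(l1 - m) / (np.exp(l1 - m) + np.exp(l2 - m))

# Conditional on (rho, theta, delta), the posterior mean of beta under (44) is
#   lambda_beta * beta_hat(rho, theta) + (1 - lambda_beta) * beta_hat(rho, delta).
```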

Derivation of the conditional posterior distribution of \({\varvec{\tau}}\) given \(\left({\varvec{\rho}},{\varvec{\vartheta}},{\varvec{\delta}}\right)\):

Under model \({M}_{1}\), the conditional posterior distribution of \(\tau\) given \(\left(\rho ,\vartheta ,\delta =1\right)\) is

$${\pi }_{1}\left(\tau |\rho ,\vartheta \right)$$
$$\propto \frac{1}{{\left(2\pi \right)}^\frac{k}{2}}{\int }_{{R}^{k}}{\tau }^{\frac{n+k}{2}-1}exp\left[-\frac{\tau }{2}\left\{{\phi }_{3}\left(\rho ,\vartheta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)}^{^{\prime}}\left({A}_{3}+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)\right\}\right]d\beta$$
$$\propto {\tau }^{\frac{n}{2}-1}exp\left[-\frac{\tau }{2}{\phi }_{3}\left(\rho ,\vartheta \right)\right]$$

Hence

$${\pi }_{1}\left(\tau |\rho ,\vartheta \right)=\frac{{\phi }_{3}{\left(\rho ,\vartheta \right)}^\frac{n}{2}{\tau }^{\frac{n}{2}-1}}{{2}^\frac{n}{2}\Gamma (\frac{n}{2})}exp\left[-\frac{\tau }{2}{\phi }_{3}\left(\rho ,\vartheta \right)\right]$$
(45)

Similarly, under model \({M}_{2}\), the conditional posterior distribution of \(\tau\) given \(\left(\rho ,\delta ,\vartheta =0\right)\) is obtained as

$${\pi }_{2}\left(\tau |\rho ,\delta \right)=\frac{{\phi }_{4}{\left(\rho ,\delta \right)}^\frac{n}{2}{\tau }^{\frac{n}{2}-1}}{{2}^\frac{n}{2}\Gamma (\frac{n}{2})}exp\left[-\frac{\tau }{2}{\phi }_{4}\left(\rho ,\delta \right)\right]$$
(46)

Further, the conditional posterior distribution of \(\tau\) given \(\left(\rho ,\vartheta ,\delta \right)\) is given by

$${\pi }^{*}\left(\tau |\rho ,\vartheta ,\delta \right)={\lambda }_{\tau }\left(y\right){\pi }_{1}\left(\tau |\rho ,\vartheta \right)+\left(1-{\lambda }_{\tau }\left(y\right)\right){\pi }_{2}\left(\tau |\rho ,\delta \right)$$
(47)

where

$${\lambda }_{\tau }\left(y\right)=\frac{\frac{\left(1-\epsilon \right)}{{\phi }_{3}{\left(\rho ,\vartheta \right)}^\frac{n}{2}}}{\frac{\left(1-\epsilon \right)}{{\phi }_{3}{\left(\rho ,\vartheta \right)}^\frac{n}{2}}+\frac{\epsilon }{{\phi }_{4}{\left(\rho ,\delta \right)}^\frac{n}{2}}}$$
$$= \frac{\left(1-\epsilon \right){\phi }_{4}{\left(\rho ,\delta \right)}^\frac{n}{2}}{\left(1-\epsilon \right){\phi }_{4}{\left(\rho ,\delta \right)}^\frac{n}{2}+\epsilon {\phi }_{3}{\left(\rho ,\vartheta \right)}^\frac{n}{2}}$$
(48)
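Equations (45)–(48) state that, conditionally, \(\tau\) follows a two-component mixture of gamma distributions with common shape \(n/2\) and rates \({\phi }_{3}/2\) and \({\phi }_{4}/2\). A minimal sketch for evaluating and sampling this mixture (SciPy; names illustrative) is:

```python
import numpy as np
from scipy.stats import gamma

def tau_mixture(n, phi3, phi4, eps, rng=None):
    """Conditional posterior of tau in (47): a lambda_tau / (1 - lambda_tau)
    mixture of Gamma(n/2, rate = phi3/2) and Gamma(n/2, rate = phi4/2)."""
    rng = np.random.default_rng() if rng is None else rng
    # lambda_tau from (48), computed on the log scale for numerical stability
    log_ratio = np.log(eps) - np.log1p(-eps) + (n / 2) * (np.log(phi3) - np.log(phi4))
    lam = 1.0 / (1.0 + np.exp(log_ratio))

    def pdf(tau):
        return (lam * gamma.pdf(tau, a=n / 2, scale=2.0 / phi3)
                + (1.0 - lam) * gamma.pdf(tau, a=n / 2, scale=2.0 / phi4))

    def sample(size):
        pick = rng.random(size) < lam
        return np.where(pick,
                        rng.gamma(n / 2, 2.0 / phi3, size),
                        rng.gamma(n / 2, 2.0 / phi4, size))

    return lam, pdf, sample
```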

Derivation of the posterior distributions of \({\varvec{\vartheta}}\) and \({\varvec{\delta}}\):

We have

$$P\left(\vartheta =0|y\right)=\frac{p\left(y|\vartheta =0\right)P\left(\vartheta =0\right)}{p\left(y|\vartheta =0\right)P\left(\vartheta =0\right)+p\left(y|\delta =1\right)P\left(\delta =1\right)}$$
$$=\frac{p\left(y|\vartheta =0\right)\left(1-\epsilon \right)}{p\left(y|\vartheta =0\right)\left(1-\epsilon \right)+p\left(y|\delta =1\right)\epsilon }$$
(49)

and

$$P\left(\delta =1|y\right)=\frac{p\left(y|\delta =1\right)P\left(\delta =1\right)}{p\left(y|\vartheta =0\right)P\left(\vartheta =0\right)+p\left(y|\delta =1\right)P\left(\delta =1\right)}$$
$$=\frac{p\left(y|\delta =1\right)\epsilon }{p\left(y|\vartheta =0\right)\left(1-\epsilon \right)+p\left(y|\delta =1\right)\epsilon }$$
(50)

Now

$$p\left(y|\vartheta =0\right)={\int }_{0}^{1}{\int }_{0}^{1}{\int }_{0}^{\infty }{\int }_{{R}^{k}}\frac{{\tau }^{\frac{n+k}{2}-1}{\delta }^{\frac{{n}_{2}}{2}}{\left|V\right|}^\frac{1}{2}}{{\left(2\pi \right)}^{\frac{n+k}{2}}}\mathrm{exp}\left[-\frac{\tau }{2}\left\{{\phi }_{4}\left(\rho ,\delta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)}^{^{\prime}}\left({A}_{4}\left(\rho ,\delta \right)+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)\right\}\right]d\beta d\tau d\rho d\delta$$
$$=\frac{\Gamma \left(\frac{n}{2}\right){\left|V\right|}^\frac{1}{2}}{{\pi }^\frac{n}{2}}{\int }_{0}^{1}{\int }_{0}^{1}\frac{{\delta }^{\frac{{n}_{2}}{2}}}{{\phi }_{4}{\left(\rho ,\delta \right)}^\frac{n}{2}{\left|{A}_{4}\left(\rho ,\delta \right)+V\right|}^\frac{1}{2}}d\rho d\delta$$
$$=\frac{\Gamma \left(\frac{n}{2}\right){\left|V\right|}^\frac{1}{2}}{{\pi }^\frac{n}{2}}{\Upsilon }_{1}$$
(51)

Further

$$p\left(y|\delta =1\right)={\int }_{0}^{1}{\int }_{0}^{1-\vartheta }{\int }_{0}^{\infty }{\int }_{{R}^{k}}2\frac{{\tau }^{\frac{n+k}{2}-1}{\left|V\right|}^\frac{1}{2}}{{\left(2\pi \right)}^{\frac{n+k}{2}}}\mathrm{exp}\left[-\frac{\tau }{2}\left\{{\phi }_{3}\left(\rho ,\vartheta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)}^{^{\prime}}\left({A}_{3}\left(\rho ,\vartheta \right)+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)\right\}\right]d\beta d\tau d\rho d\vartheta$$
$$=\frac{\Gamma \left(\frac{n}{2}\right){\left|V\right|}^\frac{1}{2}}{{\pi }^\frac{n}{2}}{\int }_{0}^{1}{\int }_{0}^{1-\vartheta }\frac{2}{{{\phi }_{3}\left(\rho ,\vartheta \right)}^\frac{n}{2}{\left|{A}_{3}\left(\rho ,\vartheta \right)+V\right|}^\frac{1}{2}}d\rho d\vartheta$$
$$=\frac{\Gamma \left(\frac{n}{2}\right){\left|V\right|}^\frac{1}{2}}{{\pi }^\frac{n}{2}}{\Upsilon }_{2}$$
(52)

Here

$${\Upsilon }_{1}={\int }_{0}^{1}{\int }_{0}^{1}\frac{{\delta }^{\frac{{n}_{2}}{2}}}{{\phi }_{4}{\left(\rho ,\delta \right)}^\frac{n}{2}{\left|{A}_{4}\left(\rho ,\delta \right)+V\right|}^\frac{1}{2}}d\rho d\delta$$
$${\Upsilon }_{2}={\int }_{0}^{1}{\int }_{0}^{1-\vartheta }\frac{2}{{{\phi }_{3}\left(\rho ,\vartheta \right)}^\frac{n}{2}{\left|{A}_{3}\left(\rho ,\vartheta \right)+V\right|}^\frac{1}{2}}d\rho d\vartheta$$

Hence

$$P\left(\vartheta =0|y\right)=\frac{{\Upsilon }_{1}\left(1-\epsilon \right)}{{\Upsilon }_{1}\left(1-\epsilon \right)+{\Upsilon }_{2}\epsilon }={\lambda }_{{M}_{1}}\left(y\right)\ \left(\mathrm{say}\right)$$
$$P\left(\delta =1|y\right)=\frac{{\Upsilon }_{2}\epsilon }{{\Upsilon }_{1}\left(1-\epsilon \right)+{\Upsilon }_{2}\epsilon }=1-{\lambda }_{{M}_{1}}\left(y\right)={\lambda }_{{M}_{2}}\left(y\right)\ \left(\mathrm{say}\right)$$
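The double integrals \({\Upsilon }_{1}\) and \({\Upsilon }_{2}\) generally have no closed form and must be evaluated numerically. The sketch below does so with scipy.integrate.dblquad and then forms the posterior model probabilities; \({\phi }_{3}\), \({\phi }_{4}\) and the log-determinants are supplied as callables (they can be built from the building-block sketch given earlier), and all names are illustrative. For large \(n\) a common scaling constant should be factored out of \(\phi\) before exponentiating to avoid underflow.

```python
import numpy as np
from scipy.integrate import dblquad

def model_posteriors(n, n2, phi3, phi4, logdetA3V, logdetA4V, eps):
    """Evaluate Upsilon_1, Upsilon_2 of (51)-(52) and the posterior model
    probabilities P(theta = 0 | y) and P(delta = 1 | y).

    phi3(rho, theta), phi4(rho, delta), logdetA3V(rho, theta), logdetA4V(rho, delta)
    are callables returning the quantities defined above.
    """
    def f4(rho, delta):      # integrand of Upsilon_1
        return np.exp((n2 / 2) * np.log(delta)
                      - (n / 2) * np.log(phi4(rho, delta))
                      - 0.5 * logdetA4V(rho, delta))

    def f3(rho, theta):      # integrand of Upsilon_2 (factor 2 from the prior on (rho, theta))
        return 2.0 * np.exp(-(n / 2) * np.log(phi3(rho, theta))
                            - 0.5 * logdetA3V(rho, theta))

    Y1, _ = dblquad(lambda delta, rho: f4(rho, delta), 0.0, 1.0, 0.0, 1.0)
    Y2, _ = dblquad(lambda rho, theta: f3(rho, theta), 0.0, 1.0,
                    0.0, lambda theta: 1.0 - theta)

    p_theta0 = Y1 * (1 - eps) / (Y1 * (1 - eps) + Y2 * eps)   # lambda_{M_1}(y)
    return Y1, Y2, p_theta0, 1.0 - p_theta0                   # last entry: lambda_{M_2}(y)
```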

The posterior density of \(\vartheta\), when \(\delta =1\), is given by

$${p}^{*}\left(\vartheta |y\right)$$
$$\propto \frac{1}{1-\vartheta }{\int }_{0}^{1-\vartheta }{\int }_{0}^{\infty }{\int }_{{R}^{k}}{\tau }^{\frac{n+k}{2}-1}\mathrm{exp}\left[-\frac{\tau }{2}\left\{{\phi }_{3}\left(\rho ,\vartheta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)}^{^{\prime}}\left({A}_{3}\left(\rho ,\vartheta \right)+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)\right\}\right]d\beta d\tau d\rho$$
$$\propto \frac{1}{1-\vartheta }{\int }_{0}^{1-\vartheta }\frac{1}{{\left|{A}_{3}\left(\rho ,\vartheta \right)+V\right|}^\frac{1}{2}}{\int }_{0}^{\infty }{\tau }^{\frac{n}{2}-1}\mathrm{exp}\left[-\frac{\tau }{2}\left\{{\phi }_{3}\left(\rho ,\vartheta \right)\right\}\right]d\tau \, d\rho$$
$$\propto \frac{1}{1-\vartheta }{\int }_{0}^{1-\vartheta }{{\phi }_{3}\left(\rho ,\vartheta \right)}^{-\frac{n}{2}}{\left|{A}_{3}\left(\rho ,\vartheta \right)+V\right|}^{-\frac{1}{2}}d\rho$$

Hence

$${p}^{*}\left(\vartheta |y\right)=\frac{\frac{1}{1-\vartheta }{\int }_{0}^{1-\vartheta }{{\phi }_{3}\left(\rho ,\vartheta \right)}^{-\frac{n}{2}}{\left|{A}_{3}\left(\rho ,\vartheta \right)+V\right|}^{-\frac{1}{2}}d\rho }{{\int }_{0}^{1}\frac{1}{1-\vartheta }{\int }_{0}^{1-\vartheta }{{\phi }_{3}\left(\rho ,\vartheta \right)}^{-\frac{n}{2}}{\left|{A}_{3}\left(\rho ,\vartheta \right)+V\right|}^{-\frac{1}{2}}d\rho d\vartheta };\left(0<\vartheta <1\right)$$
(53)

Therefore, the posterior distribution of \(\vartheta\) is a mixture of a discrete and a continuous distribution, given by

$${\pi }^{*}\left(\vartheta |y\right)=\left\{\begin{array}{ll}{\lambda }_{{M}_{1}}\left(y\right);& \mathrm{if}\ \vartheta =0\\ \left(1-{\lambda }_{{M}_{1}}\left(y\right)\right){p}^{*}\left(\vartheta |y\right);& \mathrm{if}\ 0<\vartheta <1\end{array}\right.$$
(54)
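The continuous part \({p}^{*}\left(\vartheta |y\right)\) in (53) involves a one-dimensional integral over \(\rho\) for every value of \(\vartheta\); in practice it can be evaluated on a grid and normalised numerically, as in the sketch below (illustrative only, reusing the callables introduced above). The discrete mass \({\lambda }_{{M}_{1}}\left(y\right)\) at \(\vartheta =0\) from (54) is handled separately.

```python
import numpy as np
from scipy.integrate import quad

def theta_posterior(n, phi3, logdetA3V, grid_size=200):
    """Grid evaluation of the continuous part p*(theta | y) in (53)."""
    def unnormalised(theta):
        g = lambda rho: np.exp(-(n / 2) * np.log(phi3(rho, theta))
                               - 0.5 * logdetA3V(rho, theta))
        return quad(g, 0.0, 1.0 - theta)[0] / (1.0 - theta)

    grid = np.linspace(1e-3, 1.0 - 1e-3, grid_size)
    vals = np.array([unnormalised(t) for t in grid])
    return grid, vals / np.trapz(vals, grid)   # normalised as in (53)

# The full posterior (54) then puts mass lambda_{M_1}(y) at theta = 0 and
# weight (1 - lambda_{M_1}(y)) on this continuous density.
```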

The posterior density of \(\delta\), when \(\vartheta =0\), is obtained as

$${p}^{*}\left(\delta |y\right)\propto {\delta }^{\frac{{n}_{2}}{2}}{\int }_{0}^{1}{\int }_{0}^{\infty }{\int }_{{R}^{k}}{\tau }^{\frac{n+k}{2}-1}\mathrm{exp}\left[-\frac{\tau }{2}\left\{{\phi }_{4}\left(\rho ,\delta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)}^{^{\prime}}\left({A}_{4}\left(\rho ,\delta \right)+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)\right\}\right]d\beta d\tau d\rho$$
$$\propto {\int }_{0}^{1}{\delta }^{\frac{{n}_{2}}{2}}{{\phi }_{4}\left(\rho ,\delta \right)}^{-\frac{n}{2}}{\left|{A}_{4}\left(\rho ,\delta \right)+V\right|}^{-\frac{1}{2}}d\rho$$

Hence

$${p}^{*}\left(\delta |y\right)=\frac{{\int }_{0}^{1}{\delta }^{\frac{{n}_{2}}{2}}{{\phi }_{4}\left(\rho ,\delta \right)}^{-\frac{n}{2}}{\left|{A}_{4}\left(\rho ,\delta \right)+V\right|}^{-\frac{1}{2}}d\rho }{{\int }_{0}^{1}{\int }_{0}^{1}{\delta }^{\frac{{n}_{2}}{2}}{{\phi }_{4}\left(\rho ,\delta \right)}^{-\frac{n}{2}}{\left|{A}_{4}\left(\rho ,\delta \right)+V\right|}^{-\frac{1}{2}}d\rho d\delta }$$
(55)

Again, the posterior distribution of \(\delta\) is a mixture of a discrete and a continuous distribution, given by

$${\pi }^{*}\left(\delta |y\right)=\left\{\begin{array}{ll}1-{\lambda }_{{M}_{1}}\left(y\right);& \mathrm{if}\ \delta =1\\ {\lambda }_{{M}_{1}}\left(y\right){p}^{*}\left(\delta |y\right);& \mathrm{if}\ 0<\delta <1\end{array}\right.$$
(56)
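Analogously, the continuous part \({p}^{*}\left(\delta |y\right)\) in (55) can be evaluated on a grid, with the point mass \(1-{\lambda }_{{M}_{1}}\left(y\right)\) at \(\delta =1\) from (56) added separately (again an illustrative sketch under the same assumptions):

```python
import numpy as np
from scipy.integrate import quad

def delta_posterior(n, n2, phi4, logdetA4V, grid_size=200):
    """Grid evaluation of the continuous part p*(delta | y) in (55)."""
    def unnormalised(delta):
        g = lambda rho: np.exp((n2 / 2) * np.log(delta)
                               - (n / 2) * np.log(phi4(rho, delta))
                               - 0.5 * logdetA4V(rho, delta))
        return quad(g, 0.0, 1.0)[0]

    grid = np.linspace(1e-3, 1.0 - 1e-3, grid_size)
    vals = np.array([unnormalised(d) for d in grid])
    return grid, vals / np.trapz(vals, grid)   # normalised as in (55)
```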

Derivation of Posterior Odds Ratio

To obtain the posterior odds ratio, consider first the numerator of Eq. (27):

$$\pi \left({H}_{0}\right)$$
$$={\int }_{0}^{1}{\int }_{0}^{1}{\int }_{0}^{\infty }{\int }_{{R}^{k}}\frac{{\tau }^{\frac{n+k}{2}-1}{\left|V\right|}^\frac{1}{2}{\delta }^{\frac{{n}_{2}}{2}}}{{\left(2\pi \right)}^{\frac{n+k}{2}}}exp\left[-\frac{\tau }{2}\left\{{\phi }_{4}\left(\rho ,\delta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)}^{^{\prime}}\left({A}_{4}\left(\rho ,\delta \right)+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\delta \right)\right)\right\}\right]d\beta d\tau d\delta d\rho$$
$$=\frac{\Gamma \left(\frac{n}{2}\right){\left|V\right|}^\frac{1}{2}}{{\pi }^\frac{n}{2}}{\Upsilon }_{1}$$
(57)

Further, the denominator of (27) is given by

$$\pi \left({H}_{1}\right)$$
$$={\int }_{0}^{1}{\int }_{0}^{1-\vartheta }{\int }_{0}^{\infty }{\int }_{{R}^{k}}2\frac{{\tau }^{\frac{n+k}{2}-1}{\left|V\right|}^\frac{1}{2}}{{\left(2\pi \right)}^{\frac{n+k}{2}}}exp\left[-\frac{\tau }{2}\left\{{\phi }_{3}\left(\rho ,\vartheta \right)+{\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)}^{^{\prime}}\left({A}_{3}\left(\rho ,\vartheta \right)+V\right)\left(\beta -\widehat{\beta }\left(\rho ,\vartheta \right)\right)\right\}\right]d\beta d\tau d\rho d\vartheta$$
$$=\frac{\Gamma \left(\frac{n}{2}\right){\left|V\right|}^\frac{1}{2}}{{\pi }^\frac{n}{2}}{\Upsilon }_{2}$$
(58)

Substituting (57) and (58) into (27) yields the required expression (28) for the posterior odds ratio.
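Since the common factor \(\Gamma \left(\frac{n}{2}\right){\left|V\right|}^{1/2}/{\pi }^{n/2}\) cancels in the ratio of (57) to (58), the posterior odds ratio reduces, up to whatever prior odds enter (27), to \({\Upsilon }_{1}/{\Upsilon }_{2}\). A one-line sketch, with the prior odds taken as 1 by assumption and \({\Upsilon }_{1},{\Upsilon }_{2}\) as returned by the earlier model_posteriors sketch:

```python
def posterior_odds_ratio(Y1, Y2, prior_odds=1.0):
    """Posterior odds of H0 (precision shift) against H1 (AR-parameter shift):
    the common factor in (57)-(58) cancels, leaving prior_odds * Upsilon_1 / Upsilon_2."""
    return prior_odds * Y1 / Y2
```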

Keywords

  • Dynamic model
  • Structural changes
  • Posterior density
  • Posterior odds ratio