• ByoungSeon Choi
Part of the Springer Series in Statistics book series (SSS)


Consider the autoregressive moving-average (ARMA) model of orders p and q,
$$\phi \left( B \right){y_t} = \theta \left( B \right){v_t}$$
where \(\phi \left( B \right) = - {\phi _0} - {\phi _1}B - \cdots - {\phi _p}{B^p}\), \(\theta \left( B \right) = - {\theta _0} - {\theta _1}B - \cdots - {\theta _q}{B^q}\), \({\phi _0} = {\theta _0} = - 1\), \({\phi _p} \ne 0\), \({\theta _q} \ne 0\), B is the backshift operator, and \(\{v_t\}\) is a sequence of independent and identically distributed random variables with mean 0 and variance σ² (> 0). The sequence \(\{v_t\}\) is called either a white noise process or an innovation process. In some time series books, the white noise process is defined as a sequence of uncorrelated random variables rather than independent random variables; in practical time series analysis, there is little difference between the two definitions. We assume that the model is stationary and invertible, i.e., that the equations ϕ(z) = 0 and θ(z) = 0 have all their roots outside the unit circle. We also assume that the two equations have no common root; this assumption is sometimes called the coprime condition. The stationarity and invertibility conditions have been discussed by several authors. Interested readers may consult the references in Section 1.6. In the statistical literature, the white noise process is frequently assumed to be Gaussian, i.e., normally distributed. There are some references about non-Gaussian ARMA processes in Section 1.6. In this book, we assume that the coefficients ϕ1, …, ϕp, θ1, …, θq, and the white noise variance σ² are constants, i.e., they do not depend on time.
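As an illustrative sketch (not part of the original text), the stationarity and invertibility conditions can be checked numerically by locating the roots of ϕ(z) and θ(z). The helper names below (`roots_outside_unit_circle`, `simulate_arma`) are hypothetical; the simulation follows the sign convention ϕ0 = θ0 = −1 above, under which the model reads y_t = ϕ1 y_{t−1} + … + ϕp y_{t−p} + v_t − θ1 v_{t−1} − … − θq v_{t−q}.

```python
import numpy as np

def roots_outside_unit_circle(coeffs):
    """True if all roots of 1 - c_1 z - ... - c_k z^k lie outside the
    unit circle (coeffs = [c_1, ..., c_k])."""
    if len(coeffs) == 0:
        return True
    # numpy.roots expects coefficients from highest to lowest degree.
    poly = np.r_[[-c for c in coeffs[::-1]], 1.0]
    return bool(np.all(np.abs(np.roots(poly)) > 1.0))

def simulate_arma(phi, theta, n, sigma=1.0, burn=200, seed=0):
    """Simulate y_t = sum_i phi_i y_{t-i} + v_t - sum_j theta_j v_{t-j}
    with i.i.d. Gaussian innovations v_t ~ N(0, sigma^2).
    A burn-in period is discarded so the output is close to stationary."""
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, sigma, n + burn)
    y = np.zeros(n + burn)
    for t in range(n + burn):
        ar = sum(phi[i] * y[t - 1 - i]
                 for i in range(len(phi)) if t - 1 - i >= 0)
        ma = sum(theta[j] * v[t - 1 - j]
                 for j in range(len(theta)) if t - 1 - j >= 0)
        y[t] = ar + v[t] - ma
    return y[burn:]

# ARMA(1, 1) with phi_1 = 0.5, theta_1 = 0.3: each z-polynomial has its
# single root (2 and about 3.33) outside the unit circle, so the model
# is both stationary and invertible.
print(roots_outside_unit_circle([0.5]), roots_outside_unit_circle([0.3]))
y = simulate_arma([0.5], [0.3], n=500)
```

Note that a Gaussian innovation sequence is only one choice; as the text remarks, the definitions require merely i.i.d. (or uncorrelated) innovations with mean 0 and finite variance.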





Additional References

  1. (Section 1.1)
    For the stationarity and the invertibility conditions, refer to Wise (1956), Barndorff-Nielsen and Schou (1973), Pagano (1973, 1974), Ramsey (1974), O. D. Anderson (1975b, 1977b, 1978), Granger and Andersen (1978), Hallin (1980, 1981), Piccolo (1982), Monahan (1984), Findley (1986), Hallin, Lefevre, and Puri (1988), and the references therein. Particularly, Piccolo (1982) presented the hypervolume of the stationarity and the invertibility regions for the ARMA model and commented on the severity of the constraints for higher order models.
  2. (Section 1.1)
    For non-Gaussian ARMA processes, refer to Granger (1979), Davies, Spedding, and Watson (1980), Lawrance and Lewis (1980), Pierce (1985), Martin and Yohai (1985), Findley (1986), Hallin, Lefevre and Puri (1988), Priestley (1988), Harvey (1989, pp. 348–362), Damsleth and El-Shaarawi (1989), Diggle and Zeger (1989), and the references therein.
  3. (Section 1.1)
    For varying coefficient ARMA models, refer to Nicholls and Pagan (1985), Hallin and Ingenbleek (1983), Hallin (1984, 1986), and the references therein.
  4. (Section 1.1)
    There are some works related to Equation (1.2) by McLeod (1975) and Mittnik (1990).
  5. (Section 1.2)
    For the history of using the ARMA model, refer to Wold (1938, 1966), Rudra (1954), Brillinger (1981, pp. 9–13), and Lauritzen (1981).
  6. (Section 1.2)
    There are several survey papers on recent developments in time series analysis, such as Jenkins (1965), Kailath (1974), Makridakis (1976, 1978), O. D. Anderson (1977a), Chatfield (1977), Hopwood and Newbold (1980), Kay and Marple (1981), Cox (1981), Abraham and Ledolter (1986), and Pino, Morettin and Mentz (1987).
  7. (Section 1.2)
    There are several books of collected papers in time series analysis, edited by Rosenblatt (1963), Harris (1967), Parzen (1967), Mehra and Lainiotis (1976), Zellner (1978), Childers (1978), Findley (1978, 1981), Haykin (1979), Makridakis and Wheelwright (1979), Brillinger and Tiao (1980), O. D. Anderson (1980b, 1980c, 1982a, 1982b, 1983, 1984a, 1984b, 1985a, 1985b), O. D. Anderson and Perryman (1981, 1982), Haykin and Cadzow (1982), Box, Leonard, and Wu (1983), Mandrekar and Salehi (1983), Parzen (1983b), Franke, Hardle, and Martin (1984), Wegman and Smith (1984), Gani and Priestley (1986), Kesler (1986), C. H. Chen (1989), and Bittanti (1989).
  8. (Section 1.3)
    For more details about Algorithm 1.2, refer to Bednar and Roberts (1985), Takemura (1984), Franke (1985a), and Choi (1991a). Algorithm 1.2 has been generalized to a vector ARMA process by Whittle (1963), Wiggins and Robinson (1965), Rissanen (1973), Watson (1973), Akaike (1973b), Trench (1974), Jong (1976), Morf, Vieira, and Kailath (1978), and Choi (1990c). Also, refer to Section 4.2 of this book.
  9. (Section 1.4)
    For more details about the asymptotic properties of the sample ACVF and the sample ACRF, readers may refer to some standard textbooks of time series analysis such as T. W. Anderson (1971, Chapter 8), Fuller (1976, Chapter 6), Priestley (1981, Section 5.3), and Brockwell and Davis (1987, Chapter 7). For derivations of their distributions, refer to R. L. Anderson (1942), Anderson and Rubin (1964), Ramasubban (1972), Hannan and Heyde (1972), Hannan (1976), Roy (1989), and the references therein. For uniform convergence of the sample ACVF and the sample ACRF, refer to Hannan (1974), An, Chen, and Hannan (1982), Hannan and Kavalieris (1983b), Hannan and Deistler (1988, Section 5.3), and the references therein. Their results will be useful for studying asymptotic properties of the orders selected by penalty function methods, which will be discussed in Chapter 3.
  10. (Section 1.4)
    For the lattice method, refer to Cybenko (1983) and Caines (1988, pp. 194–198). Actually there are many lattice algorithms, and Burg’s algorithm is one of them. For more details, readers may refer to survey papers by Friedlander (1982a, 1982b). Also, refer to Lee, Morf, and Friedlander (1981), Makhoul (1981), Lee, Friedlander, and Morf (1982), Ensor and Newton (1990), and the special volume of IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-29, No. 3 (1981).
    Barrodale and Erickson (1980), Marple (1980), Huang (1990b), and Strobach (1990, Chapter 7) studied Levinson-Durbin type algorithms for AR model fitting of stationary and nonstationary processes.
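As a hedged illustration of the Levinson-Durbin type algorithms cited in this item (a textbook sketch, not the authors' Algorithm 1.2), the classical recursion solves the Yule-Walker equations for an AR(p) fit from the autocovariances r_0, …, r_p; the function name `levinson_durbin` is assumed here.

```python
import numpy as np

def levinson_durbin(r, p):
    """Classical Levinson-Durbin recursion: given autocovariances
    r[0], ..., r[p], solve the Yule-Walker equations for an AR(p) fit.
    Returns (phi, sigma2): AR coefficients phi_1, ..., phi_p and the
    innovation-variance estimate."""
    phi = np.zeros(p + 1)
    sigma2 = float(r[0])
    for k in range(1, p + 1):
        # Reflection (partial autocorrelation) coefficient at lag k.
        acc = r[k] - sum(phi[j] * r[k - j] for j in range(1, k))
        kappa = acc / sigma2
        # Order-update the coefficient vector and the innovation variance.
        new_phi = phi.copy()
        new_phi[k] = kappa
        for j in range(1, k):
            new_phi[j] = phi[j] - kappa * phi[k - j]
        phi = new_phi
        sigma2 *= (1.0 - kappa ** 2)
    return phi[1:], sigma2
```

For an exact AR(1) autocovariance sequence r_k = ϕ^k σ²/(1 − ϕ²), the recursion recovers ϕ at lag 1 and a zero partial autocorrelation at lag 2, which is the property order-selection methods based on the sample PACF exploit.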
  11. (Section 1.4)
    For more details about biases of the YW estimates, Burg’s estimates, and the OLS estimates, refer to White (1961), Shenton and Johnson (1965), Tanaka (1984), Yamamoto and Kunitomo (1984), Kunitomo and Yamamoto (1985), Nicholls and Pope (1988), Shaman and Stine (1988), Stine and Shaman (1990), Pope (1990), and the references therein. For the convergence of the OLS estimates, refer to Shibata (1977).
  12. (Section 1.4)
    For the ML estimates and the related topics, refer to Whittle (1951, 1952a, 1953b, 1962), Walker (1964), Reeves (1972), Tunnicliffe-Wilson (1973), Akaike (1973a), Astrom and Soderstrom (1974), Newbold (1974), T. W. Anderson (1975, 1977), Dunsmuir and Hannan (1976), Box and Jenkins (1976), Nicholls (1976, 1977), Osborn (1976, 1977), Godolphin (1977, 1978, 1980a, 1984), Ali (1977), Cooper and Thompson (1977), Kohn (1977), McLeod (1975), Deistler, Dunsmuir, and Hannan (1978), Phadke and Kedem (1978), Pham (1978, 1979, 1984a, 1986, 1987), Ansley (1979), McDunnough (1979), Harvey and Phillips (1979), Ljung and Box (1979), Nicholls and Hall (1979), Pearlman (1980), T. W. Anderson and Mentz (1980), Kabaila (1980, 1983), Cooper and Wood (1981), Cryer and Ledolter (1981), Ljung (1982), Godolphin and Gooijer (1982), Ansley and Kohn (1983), Spliid (1983), Godolphin and Unwin (1983), Tanaka (1984, 1986), Kulperger (1985), Wincek and Reinsel (1986), Potscher (1987), Shea (1987), Cernuschi-Frias and Rogers (1988), Dahlhaus and Potscher (1989), and the references therein. Recently some methods have been presented to calculate the likelihood function via the Kalman filter, which was proposed by Kalman (1960, 1963) and Kalman and Bucy (1961). For more details, refer to Hannan and Deistler (1988, Chapter 6).
  13. (Section 1.4)
    There are some other ARMA estimation methods. Walker (1962) proposed to maximize the likelihood function of the sample ACVF. Box and Jenkins (1976) proposed to estimate the parameters using the method of moments and Wilson’s algorithm (1969) together. Konvalinka and Matausek (1979) presented a method based on least squares input-output analysis. Some estimation methods using spectral analysis were proposed by Hannan (1970a, 1979), Parzen (1971), Nicholls (1972, 1973), and T. W. Anderson (1975). For robust estimation methods, refer to Denby and Martin (1979), Martin (1980, 1981), Martin and Yohai (1985), Bustos and Yohai (1986), Masarotto (1987), Li and Hui (1989), and the references therein.
  14. (Section 1.5)
    The explosive ARMA model has been studied by Rao (1961), Stigum (1974), Fuller and Hasza (1981), O. D. Anderson (1990), and Huang (1990a, 1990b). For unstable ARMA processes, refer to O. D. Anderson (1975a), Fuller (1976, pp. 366–385; 1985), Roy (1977), Dickey and Fuller (1979, 1981), Hasza and Fuller (1979), Kawashima (1980), Hasza (1980), Fuller and Hasza (1981), Fuller, Hasza, and Goebel (1981), Evans and Savin (1981, 1984), Sargan and Bhargava (1983), Kay (1983), Ahtola and Tiao (1984, 1987a, 1987b), Solo (1984), Yajima (1985), Said and Dickey (1984, 1985), Parzen (1986), Bhargava (1986), Phillips (1987a, 1987b), Stoica and Nehorai (1986, 1987), Chan and Wei (1987, 1988), Sims (1988), Cressie (1988), Chan (1988, 1990), Hall (1989), Perron (1989), Pukkila (1989), Schwert (1989), Porter-Hudak (1990), and the references therein. Tsay and Tiao (1990) considered the asymptotic properties of multivariate nonstationary ARMA processes.

Copyright information

© Applied Probability Trust 1992

Authors and Affiliations

  • ByoungSeon Choi
  1. Department of Applied Statistics, Yonsei University, Seoul, Korea
