Illuminating ARIMA model-based seasonal adjustment with three fundamental seasonal models

Our starting place is the first order seasonal autoregressive model. Its series are shown to have canonical model-based decompositions whose finite-sample estimates, filters, and error covariances have simple revealing formulas from basic linear regression. We obtain analogous formulas for seasonal random walks, extending some of the results of Maravall and Pierce (J Time Series Anal, 8:177–293, 1987). The seasonal decomposition filters of the biannual seasonal random walk have formulas that explicitly reveal which deterministic functions they annihilate and which they reproduce, directly illustrating very general results of Bell (J Off Stat, 28:441–461, 2012; Center for Statistical Research and Methodology, Research Report Series, Statistics #2015-03, U.S. Census Bureau, Washington, D.C. https://www.census.gov/srd/papers/pdf/RRS2015-03, 2015). Other formulas express phenomena heretofore lacking such concrete expression, such as the much discussed negative autocorrelation at the first seasonal lag quite often observed in differenced seasonally adjusted series. An innovation that is also applied to airline model seasonal decompositions is the effective use of signs of lag one and first-seasonal-lag autocorrelations (after differencing) to indicate, in a formal way, where smoothness is increased by seasonal adjustment and where its effect is opposite.

γ_j = σ_a²(1 − Φ²)^{−1} Φ^{|j|/q} when j is a multiple of q, and γ_j = 0 otherwise. (2)

See Chapter 9 of Box and Jenkins (1976) for example. Hence the autocorrelations are

ρ_{kq} = Φ^k, k ≥ 1, with ρ_j = 0 when j is not a multiple of q. (3)

We only consider 0 < Φ < 1 in order to have positive correlation at the seasonal lags q, 2q, …. For large enough Φ, (3) shows that Z_t has the fundamental characteristics of a strongly seasonal time series, namely a strong tendency for year-to-year movements in the same direction, with magnitudes (relative to the underlying level, e.g. its mean zero) that change gradually more often than not. For the monthly case q = 12, Fig. 1 shows that when Φ = 0.95, then even after 12 years the correlation is greater than 0.5. By contrast, when Φ = 0.70, after 5 years the correlation is negligible. Graphs of such a Z_t (not shown) do not clearly indicate seasonality. Figure 3 in Sect. 3.2 shows a simulated Φ = 0.95 monthly SAR(1) series Z_t of length 144 displaying quite seasonal features. It also shows the residual series N̂_t = Z_t − Ŝ_t resulting from removal of the estimate Ŝ_t of the unobserved signal component S_t of a signal plus noise decomposition (4), with uncorrelated components, E S_t N_{t−j} = 0, −∞ < j < ∞. The signal S_t is specified to have the smallest variance γ_0^S < γ_0 compatible with having the same nonzero-lag autocovariances as Z_t, γ_j^S = γ_j, j ≠ 0. This is equivalent to specifying N_t as white noise with the largest variance possible for a w.n. component in an uncorrelated decomposition (4) of Z_t. This variance, γ_0 − γ_0^S, is the minimum value of the spectral density (s.d.) of Z_t. It has a simple formula in the SAR(1) case, as does the s.d.; see (10) and (11) in Sect. 3.2.
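A minimal numerical sketch of these decay rates, assuming (as in (3)) that the lag-kq autocorrelation of the SAR(1) series is Φ^k:

```python
# Minimal sketch: SAR(1) seasonal-lag autocorrelations rho_{kq} = Phi**k,
# as in (3); Phi is the seasonal AR coefficient of model (1).
def sar1_seasonal_acf(phi: float, k: int) -> float:
    """Lag-kq autocorrelation of a SAR(1) series, k years apart."""
    return phi ** k

# Phi = 0.95: even after 12 years the correlation exceeds 0.5.
print(sar1_seasonal_acf(0.95, 12) > 0.5)     # True
# Phi = 0.70: after 5 years the correlation is negligible.
print(round(sar1_seasonal_acf(0.70, 5), 3))  # 0.168
```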
The graph of N̂_t in Fig. 3 appears less smooth than Z_t, and this will be established in a formal way in Sect. 13.2. The signal estimate Ŝ_t is graphed by calendar month in Fig. 4. The Ŝ_t visibly smooth each of the 12 annual calendar month series of Z_t, a property connected to the fact that the lag 12k, k ≥ 1, autocorrelations of Ŝ_t are larger than those of Z_t; see Sect. 13.
For any stationary Z_t with known autocovariances γ_j, typically from an ARMA model for Z_t, the first step toward obtaining linear estimates of an uncorrelated component decomposition (4) is the determination or specification of an appropriate autocovariance decomposition, γ_j = γ_j^S + γ_j^N, j = 0, 1, …. The SAR(1) estimated decomposition is detailed in Sect. 3.2.
For a vector of observations Z = (Z_1, …, Z_n)′, the autocovariances at lags 0 to n − 1 furnish a corresponding n × n autocovariance matrix decomposition, Σ_ZZ = Σ_SS + Σ_NN. (5) This decomposition enables simplified linear regression formulas (reviewed in Sect. 4) to yield a decomposition Z = Ŝ + N̂, with minimum mean square error (MMSE) linear estimates (or estimators), Ŝ = β_S Z and N̂ = β_N Z, of the unobserved components. Such estimates are also called minimum variance estimates. Another standard formula provides the variance matrix of the estimation errors. Everything is illustrated for the two-component SAR(1) decomposition. Our most extensive analyses are for the simplest seasonal ARIMA model, the seasonal random walk or SRW, obtained by setting Φ = 1 in (1): (1 − B^q)Z_t = a_t. (6) The MMSE two-component decomposition filter formulas for such nonstationary Z_t can be obtained simply by setting Φ = 1 in the stationary SAR(1) formulas. This follows from results of Bell (1984) which extend to difference-stationary series the Wiener-Kolmogorov (W-K) filter formulas presented in Sect. 6. The original W-K formulas provide MMSE component estimates from bi-infinite stationary data Z_t, −∞ < t < ∞, and immediately reproduce the SAR(1) formulas of Sect. 5.1 for the intermediate times between the first and last years, q + 1 ≤ t ≤ n − q.
In Sect. 7, after formally defining the pseudo-spectral density (pseudo-s.d.) of an ARIMA model, we illustrate the kinds of non-stationary W-K calculations that are done with pseudo-s.d.'s in TRAMO-SEATS (Gómez and Maravall 1996), hereafter T-S, and in its implementations in TSW (Caporello and Maravall 2004), X-13ARIMA-SEATS (U.S. Census Bureau 2015), hereafter X-13A-S, and JDemetra+ (Seasonal Adjustment Centre of Competence 2015), hereafter JD+. We derive simple formulas for all of the filters associated with the three-component seasonal, trend, and irregular decomposition of the q = 2 SRW. We proceed a little more directly than the tutorial article of Maravall and Pierce (1987), which develops fundamental properties of this model's decomposition estimates with somewhat different goals. In Sect. 7.2, we obtain the forecast and backcast results that are required to derive the asymmetric filters for the initial and final years of a finite sample, as well as the error variances of their estimates. These illustrate in a simple way the fundamental role of Bell's Assumption A. In Sect. 10, we provide an extended and corrected version of Maravall and Pierce's Table 1, giving variances and autocorrelations of the stationary transforms of the estimates.
For the Box-Jenkins airline model, Sect. 12 provides graphs of the MMSE filters determined by small, medium and large values of the seasonal moving average parameter Θ. Graphs also display the quite different visual smoothing effects of filters from such Θ on the monthly International Airline Passenger totals series for which the model is named. This is a prelude to Sect. 13, which has the most experimental material. In a formal way based on autocorrelations of the component estimates of the different types of models considered (fully differenced in the nonstationary case), it shows where smoothness is enhanced, and where an opposite result occurs, among the seasonal decomposition components. Same-calendar-month subseries are the main setting and are considered separately from the monthly series. Complete results are presented, first for the two-component SAR(1) decomposition, next for the q = 2 SRW's three-component decomposition. Thereafter, smoothness results are presented for airline model series over an illustrative set of coefficient pairs.
There are many formulas. In most cases, studying how the formulas are used will be more useful to readers than their derivations or details.

Some conventions and terminology
A generic primary time series X_t, stationary or not, will be assumed to have q ≥ 2 observations per year, with the j-th observation of the k-th year having the time index t = j + (k − 1)q, 1 ≤ j ≤ q. For simplicity, the series of j-th values from all available years of X_t is called the j-th calendar month subseries of X_t even when q ≠ 12. When q = 12, these are the series of January values, the series of February values, etc., 12 series in all. Some seasonal adjustment properties, especially those of seasonal component estimates, are best revealed by the calendar month subseries. When X_t is stationary (mean E X_t = 0 assumed), the lag k autocorrelation of a calendar month subseries is the lag kq, or k-th seasonal, autocorrelation of X_t. Because some formulas simplify when q/2 is an integer, we only consider even q. In our examples q = 2 or 12. (In practice, q = 3, 4, 6 also occur.) Some basic features of canonical ARIMA-model-based seasonal adjustment (AMBSA for short) will be related to smoothing of the calendar month subseries or detrended versions thereof, see Sect. 11. The definition of canonical is given in Sect. 3.2.
Features of SEATS referred to are also features of the implementations of SEATS in X-13A-S and JD+.

The general stationary setting
Seasonal adjustment is an important example of a time series signal extraction procedure. In the simplest setting, the observed series Z_t is treated as the sum (4) of two not directly observable components, the "signal" S_t of interest and an obscuring component, the "noise" N_t. In the case of stationary Z_t with known autocovariances γ_j, j = 0, ±1, …, typically from an ARMA model, estimates of both components can be obtained from an autocovariance decomposition when γ_j^S and γ_j^N have properties suitable for S_t and N_t. Effectively, the additive decomposition (7) implies uncorrelatedness of the signal and noise, see Findley (2012), which we always assume. As a consequence, for a given finite sample Z_t, 1 ≤ t ≤ n, simplified linear regression formulas (21) summarized in Sect. 4 provide a decomposition Z_t = Ŝ_t + N̂_t, 1 ≤ t ≤ n, with MMSE estimates.

Fig. 2
The q = 12 SAR(1) spectral densities for Φ = 0.70 (darker line) and Φ = 0.95, with σ_a² = 1 − Φ², which results in γ_0 = 1. So the area under each graph is 1/2 (in the units of the graph). The peaks are at the trend frequency λ = 0 and at each seasonal frequency λ = k/12 cycles per month, 1 ≤ k ≤ 6, always with amplitude σ_a²(1 − Φ)^{−2}. The peaks for Φ = 0.70 are broader and much lower. The minimum value σ_a²(1 + Φ)^{−2} occurs midway between each pair of peaks.

The second and third formulas arise from γ_{−j} = γ_j and cos 2πjλ = (1/2)(e^{i2πjλ} + e^{−i2πjλ}). White noise is characterized by having a constant s.d. equal to its variance. In the AMBSA paradigm of Hillmer and Tiao (1982) implemented in SEATS, spectral densities play an essential role in specifying the canonical components, as will be demonstrated shortly.
An s.d. is nonnegative, g(λ) ≥ 0 always; see (50) for the ARMA formula. It is an even function, g(−λ) = g(λ), so it is graphed only for 0 ≤ λ ≤ 1/2. See the SAR(1) example in Fig. 2. It is integrable, and for any j the autocovariance γ_j can be recovered from g(λ) as γ_j = ∫_{−1/2}^{1/2} e^{i2πjλ} g(λ) dλ. The autocovariance decomposition (7) is equivalent to the s.d. decomposition g(λ) = g_S(λ) + g_N(λ). (9)

The SAR(1) canonical signal + white noise decomposition
Conceptually attractive and unique decompositions result from the following restriction, introduced by Tiao and Hillmer (1978). An s.d. decomposition with two or more component s.d.'s is called canonical if at most one of the components, usually a constant (white noise) s.d., has a non-zero minimum. A nonconstant s.d. (or pseudo-s.d. as defined in Sect. 7) is called canonical if its minimum value is zero. The two-component SAR(1) case provides the simplest seasonal example.
By calculation from (2) or from the general ARMA formula (50) below, for a series Z_t with model (1), the s.d. is

g(λ) = σ_a² |1 − Φe^{i2πqλ}|^{−2} = σ_a² (1 − 2Φ cos 2πqλ + Φ²)^{−1}. (10)
A canonical two-component s.d. decomposition of (10) is achieved by separating g(λ) from its minimum value, which occurs at the frequencies in −1/2 ≤ λ ≤ 1/2 where e^{i2πqλ} = cos 2πqλ = −1, such as λ = ±(2q)^{−1}. The resulting decomposition prescribes a matrix decomposition (5) for any sample size n ≥ 1: with Σ_ZZ = (1 − Φ²)^{−1} σ_a² [Φ^{|j−k|/q}]_{j,k=1,…,n} (entries with |j − k| not a multiple of q equal to zero) and I the identity matrix of order n, Σ_ZZ = Σ_SS + Σ_NN, where Σ_SS and Σ_NN have the formulas indicated in (12). Substitution into the regression formulas (21) yields estimated signal values Ŝ_t and noise values N̂_t that are exemplified in Figs. 3 and 4 for the simulated SAR(1) Z_1, …, Z_144 shown. The function g_S(λ) = g(λ) − σ_a²(1 + Φ)^{−2}, being nonnegative, even and integrable, is the s.d. of a stationary process S_t, a SARMA(1,1)_q as a more informative formula (19) will reveal. Since g_S(λ) differs from g(λ) by a constant, it too has a peak at λ = 0. This is the trend frequency, the lower limit of the long-term cycle frequencies. This peak is identical in shape to the seasonal peaks, and therefore contributes as much to the integral of g_S(λ) over [0, 1/2] as one-half of each seasonal peak at λ = k/12, 1 ≤ k ≤ 5, and the same as the seasonal peak at λ = 1/2. Hence S_t has a trend component that is not negligible compared to its seasonal component. So S_t is not the seasonal component of Z_t.
The series Z_t shows the consistent prominent variations by calendar month seen with quite seasonal time series. The oscillations of N̂_t are considerably smaller, yet N̂_t can be considered somewhat less smooth than Z_t also after the difference of scale is taken into account; see Sect. 13.2. The Ŝ_t closely follow all but the most rapid movements of the series, but with fewer changes of direction over the 12 years. Autocorrelation properties help to explain why they evolve somewhat more smoothly than the Z_t subseries. Their slightly reduced standard deviation explains why they have slightly reduced extremes; see Sect. 13.3.
The decomposition we will obtain from (13) can be regarded as a smooth-nonsmooth decomposition, see Sect. 13. (SEATS and its implementations calculate an algebraically more complex canonical seasonal, trend, and irregular decomposition for an SAR(1) model to estimate its seasonal and nonseasonal components; see Findley et al. 2015 for the derivation.) Further insight into the properties of S_t comes from a formula for g_S(λ) that displays the autocovariances of S_t explicitly. Key features of S_t with s.d. g_S(λ) are that γ_0^S = γ_0 − σ_a²(1 + Φ)^{−2} < γ_0 and γ_j^S = γ_j, j ≠ 0. Hence S_t has autocorrelations γ_{kq}^S/γ_0^S proportionately greater than those of Z_t at all seasonal lags. This S_t has the smallest variance compatible with these properties. Because the s.d. of N_t is constant, N_t is specified as white noise with autocovariances γ_0^N = σ_a²(1 + Φ)^{−2} and γ_j^N = 0, j ≠ 0, and autocovariance matrix Σ_NN = σ_a²(1 + Φ)^{−2} I. The s.d.'s g_S(λ) and g_N(λ) from (12) prescribe a signal+noise decomposition of g(λ). Because g_S(λ) has minimum value zero, S_t is said to be white noise free. N_t has the largest variance possible for a white noise component. This white noise free plus white noise decomposition has filter formulas for the MMSE linear estimates of S_t and N_t that are especially simple and revealing, as will be seen in Sect. 5.1. Figure 3 shows the graph of a series Z_t of length 144 simulated from (1) with q = 12 and Φ = 0.95, along with its noise component estimate N̂_t from Sect. 5. (All SAR(1) simulations use σ_a² = 1 − Φ².) The earliest values are assigned the date January, 2002. Now we derive a compact formula for g_S(λ) to show a type of calculation that is regularly needed for nonconstant canonical spectral densities. It will be used to identify the component's ARMA model.
It follows from (50) below that S_t has a noninvertible SARMA(1,1)_q model, (1 − ΦB^q)S_t = (1 + B^q)b_t, where b_t is white noise with variance Φσ_a²(1 + Φ)^{−2}.
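The canonical construction can be verified numerically. The sketch below assumes the s.d. formula (10) and the normalization σ_a² = 1 − Φ² used for the simulations, and checks that subtracting the minimum σ_a²(1 + Φ)^{−2} leaves a nonnegative function with minimum (essentially) zero:

```python
import numpy as np

# Sketch of the canonical construction, assuming the SAR(1) s.d. (10) and the
# normalization sigma_a^2 = 1 - Phi^2 (so gamma_0 = 1) used for the simulations.
phi, q = 0.95, 12
sigma2_a = 1 - phi**2
lam = np.linspace(0.0, 0.5, 100001)
g = sigma2_a / (1 - 2*phi*np.cos(2*np.pi*q*lam) + phi**2)   # s.d. of Z_t
g_min = sigma2_a * (1 + phi)**-2   # minimum, attained where cos(2*pi*q*lam) = -1
g_S = g - g_min                    # canonical signal s.d.

print(np.isclose(g.min(), g_min))  # True
print(g_S.min() >= 0.0)            # True: g_S is nonnegative with minimum ~ 0
```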

Regression formulas for two-component decompositions
Given a column vector of data Z = (Z_1, …, Z_n)′, where ′ denotes transpose, let S = (S_1, …, S_n)′ and N = (N_1, …, N_n)′ denote the unobserved uncorrelated components of a decomposition Z = S + N. From a specified decomposition of the covariance matrix Σ_ZZ = E ZZ′, standard linear regression formulas provide MMSE linear estimates Ŝ of S and N̂ of N.

The estimated decomposition
Because Σ_SN = 0, with 0 denoting the zero matrix of order n, we have Σ_SZ = E SZ′ = Σ_SS. Similarly Σ_NZ = Σ_NN. Thus the usual regression coefficient formulas give

β_S = Σ_SS Σ_ZZ^{−1}, β_N = Σ_NN Σ_ZZ^{−1}, β_S + β_N = I. (21)

It is also easy to directly verify that these coefficient formulas result in (22), the fundamental linear MMSE property, i.e., the uncorrelatedness of the errors with the data regressor Z; see Wikipedia 2013.
The final formula in (21) shows that the estimates yield a decomposition, Z = Ŝ + N̂. For 1 ≤ t ≤ n, the t-th row of β_S provides the filter coefficients for the estimate Ŝ_t, and correspondingly with β_N for N̂_t, as will be illustrated in Sect. 5.
In summary, regression based on (20) provides an observable decomposition of Z in terms of MMSE linear estimates. Such a decomposition, with N t specified as white noise, exists for any stationary Z t whose s.d. has a positive minimum.

Variance and error variance matrix formulas
Thus both estimates have the same error covariance matrix, Σ = Σ_SS Σ_ZZ^{−1} Σ_NN (assuming no specification or estimation error in the model for Z_t). There are the usual variance decompositions, Σ_SS = Σ_ŜŜ + Σ and Σ_NN = Σ_N̂N̂ + Σ, the first following from the decomposition S = Ŝ + (S − Ŝ), whose components are uncorrelated by (22), and analogously for the second. In some regression literature Σ_SS is called the total variance of S, Σ_ŜŜ the variance of S explained by Z, and Σ the residual variance. Similarly for N, with the same residual variance from (26), which shows that Σ_ŜŜ = Σ_SS − Σ and Σ_N̂N̂ = Σ_NN − Σ, where, from (21), Σ_ŜN̂ = β_S Σ_ZZ β_N′ = Σ_SS Σ_ZZ^{−1} Σ_NN = Σ. Being ARMA autocovariance matrices, Σ_ZZ, Σ_SS and Σ_NN are invertible. It follows that all matrices in (28) and (29) are positive definite. In particular, for each t, Ŝ_t and N̂_t are positively correlated, E Ŝ_t N̂_t > 0 (a property that is not generally true of differenced estimates in the ARIMA case, as we will show). From (27), the estimates are less variable than their components, precisely due to estimation error. Seasonal economic indicator series Z_t are generally modeled with nonstationary models, e.g., ARIMA models rather than ARMA models. Then AMBSA uses pseudo-spectral density decompositions, discussed in Sect. 7. For finite-sample estimates in the ARIMA case, McElroy (2008) provides matrix formulas for Ŝ, N̂ and Σ which involve matrices implementing differencings and autocovariance matrices of the differenced S and N. We will be able to easily convert the SAR(1) formulas developed next to obtain the same finite-sample results as McElroy's formulas for the two-component decomposition of the ARIMA SRW model (6).

SAR(1) Decomposition estimation formulas
For the SAR(1) model, the entries of the inverse matrix Σ_ZZ^{−1} have known, relatively simple formulas, see Wise (1955) and Zinde-Walsh (1988). For all q ≥ 2 and n ≥ q, as the q = 2, n = 7 example of (30) indicates, Σ_ZZ^{−1} has a tridiagonal symmetric form, with nonzero values only on the main diagonal and the q-th diagonals above and below it. The sub- and superdiagonals have the entries −Φσ_a^{−2}. The first and last q entries of the main diagonal are σ_a^{−2}, and the rest are (1 + Φ²)σ_a^{−2}. From β_N = Σ_NN Σ_ZZ^{−1} = σ_a²(1 + Φ)^{−2} Σ_ZZ^{−1}, one has, when q = 2, n = 7, the coefficient matrix (31). Further, from β_S = I − β_N, one has (32).
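The stated structure of Σ_ZZ^{−1} can be confirmed numerically. The sketch below (a numerical check, not Wise's closed-form derivation; it assumes the SAR(1) autocovariances γ_j = Φ^{|j|/q} when q divides j, under the normalization γ_0 = 1) builds the q = 2, n = 7 example:

```python
import numpy as np

# Numerical check (not Wise's closed-form derivation) of the sparse structure
# of the inverse SAR(1) covariance matrix, for the q = 2, n = 7 example.
# Assumed autocovariances: gamma_j = Phi**(|j|/q) when q divides j, else 0
# (normalization gamma_0 = 1, i.e. sigma_a^2 = 1 - Phi^2).
phi, q, n = 0.95, 2, 7
sigma2_a = 1 - phi**2

def gamma(j: int) -> float:
    return phi ** (abs(j) // q) if j % q == 0 else 0.0

Szz = np.array([[gamma(j - k) for k in range(n)] for j in range(n)])
P = sigma2_a * np.linalg.inv(Szz)      # entries in units of sigma_a^-2

expected = np.zeros((n, n))
for t in range(n):
    expected[t, t] = 1.0 if t < q or t >= n - q else 1 + phi**2
    if t + q < n:
        expected[t, t + q] = expected[t + q, t] = -phi
print(np.allclose(P, expected))  # True
```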

The general filter formulas
For general q and n ≥ 2q + 1, the Σ_ZZ^{−1} formula of Wise (1955) yields the filter formulas for N̂ and Ŝ = Z − N̂ shown in (33)-(37) and (38)-(40), generalizing from the special cases (31) and (32). For the intermediate times q + 1 ≤ t ≤ n − q, the noise component estimate N̂_t is given by the symmetric filter

N̂_t = (1 + Φ)^{−2} {−ΦZ_{t−q} + (1 + Φ²)Z_t − ΦZ_{t+q}}. (33)

The filters for the initial and final years are asymmetric. For the initial year 1 ≤ t ≤ q,

N̂_t = (1 + Φ)^{−2} {Z_t − ΦZ_{t+q}} (34)
    = (1 + Φ)^{−2} {−Φ(ΦZ_t) + (1 + Φ²)Z_t − ΦZ_{t+q}}. (35)

The filter for the final year n − q + 1 ≤ t ≤ n is the time-reverse of the initial year filter,

N̂_t = (1 + Φ)^{−2} {−ΦZ_{t−q} + Z_t} (36)
    = (1 + Φ)^{−2} {−ΦZ_{t−q} + (1 + Φ²)Z_t − Φ(ΦZ_t)}. (37)

In comparison with (33), the value ΦZ_t in the re-expression (35) appears as the MMSE SAR(1) backcast of the missing Z_{t−q} and, in (37), as the MMSE SAR(1) forecast of the missing Z_{t+q}. For the signal component estimates Ŝ_t = Z_t − N̂_t, at intermediate times q + 1 ≤ t ≤ n − q, the filter formula is again symmetric,

Ŝ_t = Φ(1 + Φ)^{−2} {Z_{t−q} + 2Z_t + Z_{t+q}}, (38)

a downweighted 2 × 2 seasonal moving average, with weight 4Φ(1 + Φ)^{−2} tending to 1 when Φ does.
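A quick numerical confirmation of the intermediate-time weights, assuming the canonical split Σ_NN = σ_a²(1 + Φ)^{−2} I of Sect. 3.2 and the normalization γ_0 = 1:

```python
import numpy as np

# Sketch checking the intermediate-time filter weights, assuming the canonical
# split Sigma_NN = sigma_a^2*(1+Phi)^-2 * I and the SAR(1) autocovariances
# gamma_j = Phi**(|j|/q) for q | j (gamma_0 = 1).
phi, q, n = 0.95, 2, 7
sigma2_a = 1 - phi**2

def gamma(j: int) -> float:
    return phi ** (abs(j) // q) if j % q == 0 else 0.0

Szz = np.array([[gamma(j - k) for k in range(n)] for j in range(n)])
beta_N = sigma2_a * (1 + phi)**-2 * np.linalg.inv(Szz)
beta_S = np.eye(n) - beta_N

t = n // 2                 # an intermediate time
c = (1 + phi)**-2
# Noise filter weights (-Phi, 1+Phi^2, -Phi)*(1+Phi)^-2 at lags (-q, 0, q):
print(np.allclose(beta_N[t, [t - q, t, t + q]], [-phi*c, (1 + phi**2)*c, -phi*c]))  # True
# Signal filter weights Phi*(1+Phi)^-2*(1, 2, 1):
print(np.allclose(beta_S[t, [t - q, t, t + q]], [phi*c, 2*phi*c, phi*c]))           # True
```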
As with N̂_t, for the initial and final years, the Ŝ_t filters are asymmetric.
For the initial year 1 ≤ t ≤ q,

Ŝ_t = Φ(1 + Φ)^{−2} {(ΦZ_t) + 2Z_t + Z_{t+q}}, (39)

and for n − q + 1 ≤ t ≤ n, the time reverse of the initial year filter,

Ŝ_t = Φ(1 + Φ)^{−2} {Z_{t−q} + 2Z_t + (ΦZ_t)}. (40)

The role of ΦZ_t in (39) and (40) is as in (35) and (37). Because the coefficients in (38)-(40) are positive, as are also the autocovariances of Z_t at lags that are multiples of q, it follows that Ŝ_t and Ŝ_{t±kq} are positively correlated, more strongly than Z_t and Z_{t±kq}, as will be shown.

Filter re-expressions and filter terminology
The coefficient sets in the formulas above all apply for more than one value of t when n ≥ 2q + 2 (recall that q ≥ 2). To reveal this better, let B denote the backshift or lag operator defined as usual: for any time series X_t and integer j ≥ 0, B^j X_t = X_{t−j} and B^{−j} X_t = X_{t+j} (a forward shift if j ≠ 0). Since B^0 X_t = X_t, one sets B^0 = 1. A constant-coefficient sum Σ_j c_j B^j is a (linear, time-invariant) filter, a symmetric filter if the same filter results when B is replaced by B^{−1}, as with the intermediate time filters (33) and (38). Their backshift operator formulas reveal factorizations like others that will be useful:

N̂_t = (1 + Φ)^{−2} (1 − ΦB^q)(1 − ΦB^{−q}) Z_t, (41)
Ŝ_t = Φ(1 + Φ)^{−2} (1 + B^q)(1 + B^{−q}) Z_t. (42)

These formulas apply for all t such that q + 1 ≤ t ≤ n − q, and to all t when the conceptually important case of bi-infinite data Z_τ, −∞ < τ < ∞, is considered. The one-sided filter that produces the MMSE estimate for the final time t = n is called the concurrent filter. In our finite-sample context, the concurrent N̂_t and Ŝ_t filters, (1 + Φ)^{−2}{−ΦB^q + 1} and Φ(1 + Φ)^{−2}{B^q + (Φ + 2)} respectively, could be applied to all Z_t after the first year, q + 1 ≤ t ≤ n, but are only MMSE in the final year.

The error covariances of the SAR(1) estimates
For the SAR(1) model, the formulas (27), (18) and (21) yield the common error covariance matrix Σ = Σ_SS Σ_ZZ^{−1} Σ_NN = σ_a²(1 + Φ)^{−2} β_S. Hence, for q = 2 and n = 7, from (18) and (32), the diagonal of Σ reveals the general pattern: the error variances of the initial and final years are larger than the error variance 2Φσ_a²(1 + Φ)^{−4} at intermediate times by the amount Φ²σ_a²(1 + Φ)^{−4}. This is the mean square error of using the backcasts and forecasts ΦZ_t of (35) and (37). Relative to the noise variance γ_0^N = σ_a²(1 + Φ)^{−2}, the intermediate-time error variance is 2Φ(1 + Φ)^{−2}, which is approximately 0.4997 for Φ = 0.95 and therefore quite substantial. By contrast, relative to the signal variance γ_0^S of (15), it is approximately 0.013 for Φ = 0.95. The fact that the intermediate-time mean square error has the same positive value for all n ≥ 5 reminds us that the error does not become negligible with large n.
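The error-variance pattern just described can be checked numerically from the matrix formula Σ = Σ_SS Σ_ZZ^{−1} Σ_NN (a sketch, again assuming the canonical split of Sect. 3.2 and the normalization γ_0 = 1):

```python
import numpy as np

# Sketch of the error covariance Sigma = Sigma_SS * Sigma_ZZ^-1 * Sigma_NN for
# the canonical SAR(1) decomposition, assuming the canonical split
# Sigma_NN = sigma_a^2*(1+Phi)^-2 * I and gamma_0 = 1.
phi, q, n = 0.95, 2, 7
sigma2_a = 1 - phi**2

def gamma(j: int) -> float:
    return phi ** (abs(j) // q) if j % q == 0 else 0.0

Szz = np.array([[gamma(j - k) for k in range(n)] for j in range(n)])
Snn = sigma2_a * (1 + phi)**-2 * np.eye(n)
Sss = Szz - Snn

err = Sss @ np.linalg.inv(Szz) @ Snn          # common error covariance of S-hat, N-hat
mid = 2 * phi * sigma2_a * (1 + phi)**-4      # intermediate-time error variance
extra = phi**2 * sigma2_a * (1 + phi)**-4     # additional amount in first/last years
print(np.isclose(err[n//2, n//2], mid))       # True
print(np.isclose(err[0, 0], mid + extra))     # True
```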
The two-component SAR(1) decomposition derived above has exceptional pedagogical value because of the simplicity of its filter and error variance formulas. With the aid of results of Bell (1984), a rederivation of the filter formulas via the Wiener-Kolmogorov formulas, presented next, makes possible a quick transition to the nonstationary SRW model (6) and its canonical MMSE two-component decomposition filter and error autocovariance formulas, all obtained by setting Φ = 1 in the SAR(1) formulas above.

Wiener-Kolmogorov formulas applied to SAR(1) and SRW models
Initially for the two-component case (4) with bi-infinite data, we consider a fundamental and relatively simple approach to obtaining MMSE decomposition estimates. It also applies to the ARIMA case under a productive assumption of Bell (1984) discussed and applied in Sect. 7.

Filter transfer functions and the input-output spectral density formula
Not only finite filter formulas but also general bi-infinite filter formulas β(B) = Σ_j β_j B^j will be needed. When a stationary series Z_t with s.d. g_Z(λ) is filtered, the output Y_t = β(B)Z_t has s.d. g_Y(λ) = |β(e^{i2πλ})|² g_Z(λ), (46) see (4.4.3) of Brockwell and Davis (1991). The function β(e^{i2πλ}) is called the transfer function of the filter β(B) and |β(e^{i2πλ})|² is its squared gain. When a filter's transfer function β(e^{i2πλ}) is known, then the filter coefficients can be obtained from it, in general by integration but in practice, for ARMA or ARIMA related transfer functions, by algebraic/numerical algorithms such as those encoded in SEATS. For example, the transfer function of Ŝ_t in (42) is β_S(e^{i2πλ}) = Φ(1 + Φ)^{−2}|1 + e^{i2πqλ}|². (47) Hence from (46), the spectral density of Ŝ_t is |β_S(e^{i2πλ})|² g(λ). A stationary ARMA series Z_t has a representation ϕ(B)Z_t = ϑ(B)a_t (48) with AR and MA polynomials ϕ(B) and ϑ(B), where a_t is white noise with variance denoted σ_a². (ϑ is script θ, ϕ is script φ.) The general ARMA s.d. formula,

g(λ) = σ_a² |ϑ(e^{i2πλ})|² / |ϕ(e^{i2πλ})|², (50)

follows from two applications of (46) as in Brockwell and Davis (1991, p. 123). Conversely, if (50) and (49) hold, then so does (48) for some w.n. process a_t with variance σ_a². This fact can be used to identify ARMA models for bi-infinite data component estimates. For example, from (47) and (50), Ŝ_t has the noninvertible SARMA(1,2)_q model (1 − ΦB^q)Ŝ_t = Φ(1 + Φ)^{−2}(1 + B^q)² e_t (51) for a white noise e_t with variance σ_a². Note from (50) that an ARMA model is noninvertible, i.e. ϑ(e^{i2πλ}) = 0 for some λ, if and only if its spectral density is zero at some λ.

The W-K formulas
For a stationary series Z_t with a spectral density decomposition (9) specifying a two-uncorrelated-component decomposition (4), Kolmogorov (1939) and Wiener (1949) independently derived the formulas of the transfer functions of each component's MMSE linear estimate from bi-infinite data Z_τ, −∞ < τ < ∞:

β_S(e^{i2πλ}) = g_S(λ)/g(λ), β_N(e^{i2πλ}) = g_N(λ)/g(λ). (52)

For decompositions with more components, the same ratio form applies: each component estimate's transfer function is the ratio of its spectral density to g(λ). The filters are finite only when Z_t has an AR model but at most one component also does. The W-K formulas are implemented in SEATS using an algorithm of Tunnicliffe Wilson, presented in Burman (1980), which yields finite-sample MMSE estimates. From a moderate number of forecasts and backcasts, the algorithm efficiently calculates the result of using backcasts and forecasts for the infinitely many missing past and future ARMA or ARIMA data. This has led to all finite-sample MMSE component estimates being confusingly called W-K estimates in some of the literature. The bi-infinite error processes ε_t = S_t − Ŝ_t and −ε_t = N_t − N̂_t are stationary with spectral density

g_ε(λ) = g_S(λ)g_N(λ)/g(λ); (54)

see Whittle (1963) or Appendix A of the research report Findley et al. (2015) for derivations of (52) and (54). The formulas (52) and (54) are analogues of the matrix formulas of (21) and (29). Finite-sample mean square error variances and covariances cannot be obtained from g_ε(λ). In their place, the measures of uncertainty output by SEATS for its finite-sample estimates are measures for semi-infinite data estimates, used as approximations. JD+ obtains exact finite-sample measures from state space algorithms.

SAR(1) intermediate-time filters again and an alternate model form
As a simple W-K application, for the SAR(1), from (10) and (17), the intermediate-time N̂_t filter has the transfer function

β_N(e^{i2πλ}) = g_N(λ)/g(λ) = (1 + Φ)^{−2}(1 − Φe^{i2πqλ})(1 − Φe^{−i2πqλ}). (55)

Substituting B^j for e^{i2πjλ} and B^{−j} for e^{−i2πjλ} yields (41), and (42) follows similarly using β_S(B) = 1 − β_N(B). We enter new territory by substituting Z_t = (1 − ΦB^q)^{−1}a_t into (41). This yields the model formula

N̂_t = (1 + Φ)^{−2}(1 − ΦB^{−q}) a_t, (56)

the forward-time form of a seasonal MA(1). This form is used by SEATS for revision variance calculations illustrated in Maravall and Pierce (1987). It will also be the form provided by the most direct derivations of seasonal random walk component models in later sections.
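The W-K ratio form can be checked numerically: sampling the transfer function g_N(λ)/g(λ) on a fine grid and inverting the DFT recovers the intermediate-time filter weights. (The grid size M = 4096 is an arbitrary choice of this sketch; since the transfer function is a finite trigonometric polynomial, the recovery is exact up to rounding.)

```python
import numpy as np

# Sketch: recover the intermediate-time noise filter weights from the W-K
# transfer function g_N(lam)/g(lam) by an inverse DFT. M = 4096 is an
# arbitrary grid size.
phi, q, M = 0.95, 12, 4096
sigma2_a = 1 - phi**2
lam = np.arange(M) / M                      # frequency grid on [0, 1)
g = sigma2_a / (1 - 2*phi*np.cos(2*np.pi*q*lam) + phi**2)   # SAR(1) s.d. (10)
g_N = sigma2_a * (1 + phi)**-2              # constant white noise s.d.
beta = np.fft.ifft(g_N / g).real            # filter coefficients beta_j

c = (1 + phi)**-2
print(np.isclose(beta[0], (1 + phi**2) * c))   # True: weight at lag 0
print(np.isclose(beta[q], -phi * c))           # True: weight at lag q
print(np.isclose(beta[M - q], -phi * c))       # True: weight at lag -q
```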
Remark A formula of the usual backward-time form, N̂_t = (1 + Φ)^{−2}(c_t − Φc_{t−2}), can be obtained for (56) with c_t = (1 − ΦB²)^{−1}(1 − ΦB^{−2})a_t. Complex conjugation preserves magnitude, so |1 − Φe^{i2π2λ}|^{−2}|1 − Φe^{−i2π2λ}|² = 1 for all λ. Thus, from (46), the spectral densities satisfy g_c(λ) = g_a(λ) = σ_a², showing that c_t is white noise with variance σ_a². The expanded formula c_t = (1 − ΦB^{−2}) Σ_{j=0}^{∞} Φ^j a_{t−2j} shows that c_t is a function of future and past a_t, specifically of a_{t+2−2j}, 0 ≤ j < ∞. Analogues of these results hold for all forward-time moving average models derived below.

Going nonstationary with the seasonal random walk
The ARIMA W-K formula generalization of Bell (1984), discussed in detail in the next section, shows that setting Φ = 1 in (41)-(42) yields the MMSE estimates of the two-component decomposition Z_t = Ŝ_t + N̂_t of the SRW model (6) with

Ŝ_t = 4^{−1}(1 + B^q)(1 + B^{−q}) Z_t, N̂_t = 4^{−1}(1 − B^q)(1 − B^{−q}) Z_t. (57)

This is the result obtained with Φ = 1 in (42) and (41). From (57), the forecasting and backcasting results of Sect. 7.2 will yield that the initial year and final year estimates of N̂_t and Ŝ_t are obtained by setting Φ = 1 in (34) and (36) and in (39) and (40), respectively. Also, error variances and covariances of the estimates are obtained by setting Φ = 1 in (44) and in the general matrix result described below (44).
The formulas (57) start from the directly verifiable decomposition of g(λ) given by (58), which also follows from setting Φ = 1 in the formulas associated with the component s.d.'s of (12). The functions g(λ) and g_S(λ) are pseudo-spectral densities, see (61), of the model (6) and the SARIMA(1,1)_q model of ŝ_t obtained by setting Φ = 1 in (51). In Sect. 8, we will detail the seasonal, trend, irregular decompositions of (58) and the SRW with q = 2.

ARIMA component filters from pseudo-spectral density decompositions
For an ARIMA Z_t with degree d ≥ 1 differencing operator δ(B) = 1 − δ_1 B − ⋯ − δ_d B^d and model ϕ(B)δ(B)Z_t = ϑ(B)a_t, the pseudo-spectral density (pseudo-s.d.) is defined by

g(λ) = σ_a² |ϑ(e^{i2πλ})|² / (|ϕ(e^{i2πλ})|² |δ(e^{i2πλ})|²). (61)

Its integral is infinite because of the λ at which δ(e^{i2πλ}) = 0.
In the nonstationary signal plus nonstationary noise case of interest, δ(B) = δ_S(B)δ_N(B), and δ_S(e^{i2πλ}) and δ_N(e^{i2πλ}) have no common zero. In the seasonal plus nonseasonal case, δ_S(e^{i2πλ}) has zeroes only at the seasonal frequencies λ = k/q, k = ±1, …, ±q/2, and δ_N(e^{i2πλ}) = 0 only for λ = 0, as with δ_S(B) = 1 + B + ⋯ + B^{q−1} and δ_N(B) = (1 − B)² of the airline model. The pseudo-s.d. g(λ) must be decomposed into a sum of seasonal and nonseasonal pseudo-s.d.'s associated with δ_S(B) and δ_N(B), respectively.
Under mild assumptions described below, Bell (1984) established the MMSE optimality of the pseudo-spectral generalization of the W-K transfer function formulas for ARIMA component signal extraction. Hillmer (1978), Burman (1980), and Hillmer and Tiao (1982) developed the canonical approach used with extensions and refinements in SEATS and its implementations. The last reference provides a number of examples of canonical seasonal, trend and irregular pseudo-s.d. decompositions as well as examples of ARIMA models whose pseudo-s.d. does not admit such a decomposition (e.g. airline models with certain parameter values), the inadmissible case. Stationary components in addition to the irregular occur with s.d.'s for cyclical or other "transitory" components, see Gómez and Maravall (1996) and Kaiser and Maravall (2001). SEATS has options for several. Generalizing the stationary case definition, a pseudo-s.d. decomposition is canonical if, with at most one exception, its component pseudo-s.d.'s and s.d.'s have minimum value zero, as in (59).

Bell's assumptions
In addition to the requirements on δ_S(B) and δ_N(B), Bell (1984) also requires that the series δ_S(B)S_t and δ_N(B)N_t be uncorrelated. This can be obtained "automatically" from the implied s.d. decomposition g_{δ(B)Z}(λ) = g_{δ(B)S}(λ) + g_{δ(B)N}(λ), see Findley (2012). From here, Bell's Assumption A provides MMSE optimality for finite-sample component estimates from the matrix formulas of McElroy (2008), for bi-infinite data estimates from pseudo-spectral W-K formulas, and for semi-infinite data estimates like those considered in Bell and Martin (2004). An assumption is necessary: inference about the initial values is impossible because they are nonstationary and there is one observation of each. Assumption A provides both signal extraction optimality and, as we demonstrate next, a more fundamental MMSE optimality, that of ARIMA forecasts as commonly produced.

Assumption A yields MMSE forecasts and backcasts
Forecasts of ARIMA Z t are traditionally obtained as in Box and Jenkins (1976, Ch. 5), namely by generating future Z t , t > d from initial Z 1 , . . . , Z d and subsequent w t = δ (B) Z t , t > d , replacing unobserved w t by their MMSE ARMA forecasts to obtain desired forecasts of Z t . The MMSE optimality of such forecasts follows somewhat straightforwardly from the forecasting results obtained under Assumption A in the third text paragraph on p. 652 of Bell (1984). For backcasts the reverse recursion in (63) is used.
We illustrate this procedure by deriving results we require for Z_t from the biannual SRW model, which is analyzed in detail in the next section. The white noise variates a_t are all uncorrelated with one another and, under Assumption A, also with the initial values Z_1, Z_2 of any n ≥ 2 observations Z_1, …, Z_n. For t > 2, repeated application of the recursion Z_t = Z_{t−2} + a_t yields all Z_t, t ≥ 3, from the left hand formulas of (65) and (66) below: odd "months" Z_{2(m+k)−1} and even "months" Z_{2(m+k)}, m ≥ 1, k ≥ 1, are generated independently:

Z_{2(m+k)−1} = Z_{2m−1} + Σ_{i=m+1}^{m+k} a_{2i−1}, (65)
Z_{2(m+k)} = Z_{2m} + Σ_{i=m+1}^{m+k} a_{2i}. (66)

The rightmost white noise sum in (65) shows the forecast error when Z_{2m−1} is the forecast of Z_{2(m+k)−1} from data Z_1, Z_2, …, Z_{2m−1} or from Z_1, Z_2, …, Z_{2m−1}, Z_{2m}. The rightmost white noise sum in (66) shows the error of the forecast Z_{2m} of Z_{2(m+k)}, k ≥ 1, from Z_1, Z_2, …, Z_{2m−1}, Z_{2m}. For any k ≥ 1, Z_{2m−1} is the MMSE forecast of Z_{2(m+k)−1} because each term of the error Σ_{i=m+1}^{m+k} a_{2i−1} is uncorrelated with the data, as explained above. Similarly Z_{2m} is the MMSE forecast of Z_{2(m+k)}, k ≥ 1, because its error Σ_{i=m+1}^{m+k} a_{2i} is uncorrelated with Z_1, Z_2, …, Z_{2m−1}, Z_{2m}. In both cases the mean square error is kσ_a². For backcasts, the result that, for all k ≥ 1, Z_1 and Z_2 are the MMSE optimal backcasts of Z_{−1−2(k−1)} and Z_{−2(k−1)} respectively, follows from Assumption A and the analogues of (65) and (66) yielded by the backward form Z_{t−2} = Z_t − a_t of the recursion used above.
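The forecast-error identity behind (65) can be demonstrated deterministically: building Z_t by the SRW recursion from arbitrary initial values and innovations, the horizon-k forecast error for an odd month is exactly the sum of the k intervening odd-indexed innovations (the seed, sample size, and variable names here are illustrative choices):

```python
import numpy as np

# Deterministic sketch of (65) for the biannual SRW Z_t = Z_{t-2} + a_t:
# the horizon-k forecast error for an odd month is exactly the sum of the
# k intervening odd-indexed innovations, so the forecast MSE is k*sigma_a^2.
rng = np.random.default_rng(0)
n = 40
a = rng.standard_normal(n + 1)       # innovations a_3, ..., a_n (a[0..2] unused)
Z = np.zeros(n + 1)
Z[1], Z[2] = rng.standard_normal(2)  # initial values (Assumption A: uncorrelated with the a_t)
for t in range(3, n + 1):
    Z[t] = Z[t - 2] + a[t]           # the SRW recursion

m, k = 5, 4
# Forecast Z_{2m-1} for the odd month Z_{2(m+k)-1}; its error is the a-sum in (65).
err = Z[2*(m + k) - 1] - Z[2*m - 1]
print(np.isclose(err, a[2*m + 1 : 2*(m + k) : 2].sum()))  # True
```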

The estimates' symmetric filters and ARIMA models
For translations from squared gains |∑_j α_j e^{i2πjλ}|² to filters, we adopt a useful convention of Maravall and Pierce (1987), which replaces e^{±i2πjλ} by B^{±j} to obtain the symmetric filter formula. For example, |1 + B|² = (1 + B)(1 + B^{−1}) = B + 2 + B^{−1}, so from (69) the trend filter is obtained. To obtain the (forward-time) moving average polynomial of each estimate's model, we use the result that if β(B) = δ(B)β̄(B), then the substitution Z_t = δ(B)^{−1}a_t is allowed in β(B)Z_t and results in β(B)Z_t = β̄(B)a_t, see Theorem 4.10.1 of Brockwell and Davis (1991). Applying this to (1 − B)p̂_t shows that p̂_t has an ARIMA(0,1,3) model, noninvertible due to the factors (1 + B) and (1 + B^{−1})² = B^{−2}(1 + B)². Similar calculations from (69) provide the symmetric filters and ARIMA or ARMA models of the other estimates of the three-component q = 2 SRW decomposition shown below. The factored filter formulas reveal the differencing factor(s) latent in each filter, causing each model to be noninvertible. Further consequences are described in Sect. 9. Also, we difference each estimate appropriately to obtain its model's stationary moving average component, for calculation of autocorrelations of its stationary transform for tables in later sections. The MA order is noted after each model.
Apart from the seasonal adjustment formulas, the preceding formulas are equivalent to canonical filter and model formulas in Maravall and Pierce (1987), who in addition consider non-canonical MMSE estimates. Maravall (1994, p. 169ff) shows how the forward-time form of the MA polynomials can facilitate economic analyses.
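The squared-gain-to-filter convention amounts to a coefficient convolution; a small sketch (our own, with numpy; the helper name is ours):

```python
import numpy as np

# Sketch of the squared-gain-to-filter convention: represent a Laurent
# polynomial in B by its coefficient array.  |c(B)|^2 = c(B) c(B^{-1}) has
# coefficients given by convolving c with its reverse.  For example,
# |1 + B|^2 = (1 + B)(1 + B^{-1}) puts 1, 2, 1 on B^{-1}, B^0, B^1.

def symmetric_filter(coeffs):
    """Coefficients of c(B) c(B^{-1}) for c(B) with coefficients `coeffs`
    on B^0, B^1, ...; the result sits on B^{-d}, ..., B^{d}."""
    c = np.asarray(coeffs, dtype=float)
    return np.convolve(c, c[::-1])

print(symmetric_filter([1, 1]))    # [1. 2. 1.]   i.e. B^{-1} + 2 + B
print(symmetric_filter([1, -1]))   # [-1. 2. -1.] i.e. -B^{-1} + 2 - B
```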

The estimates' asymmetric filter formulas
As in Sect. 5.1, we can use MMSE forecasts and backcasts, now from Sect. 7.2, to obtain the filters for the MMSE estimates from the initial and final years. We give final-year examples for a series of odd length n = 2m + 1. (Their time reverses provide the initial-year filters.) The factored formulas reveal the differencing operator factor, often of lower degree than in the symmetric filter. Starting with the concurrent filters, the asymmetric seasonal and seasonal adjustment filters for the last year are given first, followed by those for the trend and irregular component estimates. Without deriving the forecasts needed for MMSE concurrent estimates of the q = 2 SRW, Maravall and Pierce (1987) derive formulas for the mean square revisions of the concurrent trend and seasonal estimates, p̂_{2m+1} and ŝ_{2m+1}, from when Z_{2m+2} and Z_{2m+3} become available to replace the forecasts, also for non-canonical estimates. In (76) and (77), the coefficients are largest at the estimation time t, staying positive until a year's remove, where the final nonzero value is negative and substantial. Short filters like this smooth heavily and can result in large revisions.

Symmetric and concurrent SRW filter coefficient features
Consider the coefficients of the symmetric midpoint (also intermediate-time) and the one-sided concurrent seasonal adjustment filters of a span of length 2m + 1. The midpoint adjustment and the concurrent adjustment are given by the displayed formulas. Figure 5 shows the similar but more elaborate coefficient patterns of the midpoint filter and the concurrent seasonal adjustment filter of the monthly seasonal random walk model Z_t = Z_{t−12} + a_t. All of these filters are very localized, ignoring data more than a year away from the time point of the estimate, which gets the largest weight, always positive, contrasting with the next largest coefficients, negative for the closest same-calendar-month data. They are therefore very adaptive in a way that is appropriate for series with such erratic trend and seasonal movements.

What seasonal decomposition filters annihilate or preserve
The occurrence of the trend and seasonal differencing factors of δ(B) and δ(B^{−1}), sometimes squared, in the filter formulas of Sect. 8.1 reveals which fixed seasonal or trend components the filters annihilate and which their complementary filters preserve. Regarding symmetric filters: differencing lowers the degree of a polynomial by one, e.g., (1 − B)t³ = t³ − (t − 1)³ = 3t² − 3t + 1. Hence this β_s(B) will annihilate a cubic component a t³, and the seasonal adjustment filter β_sa(B) = 1 − β_s(B) will reproduce it. β_sa(B) has (1 + B)² as a factor. A q = 2 stable seasonal component a(−1)^t has the property that (1 + B)a(−1)^t = a(−1)^t + a(−1)^{t−1} = 0. Hence β_u(B) and β_sa(B) annihilate such a component (and also an unrealistic linearly increasing a t(−1)^t), and β_s(B) preserves it. This is also true for their concurrent filters. The reader can further observe that β_p(B)a(−1)^t = 0 and that the irregular filter β_u(B) = −(1/8)B^{−2}(1 + B)²(1 − B)² will eliminate both a linear trend and a stable seasonal component.
These are quite general results, applying to AMBSA filters, finite or infinite, symmetric or asymmetric, from any ARIMA model whose differencing operator has 1 − B^q = (1 − B)U(B), q ≥ 2, with U(B) = 1 + B + ⋯ + B^{q−1}, as a factor. The tables of Bell (2012) cover more general differencing operators and also several generations of symmetric and asymmetric X-11 filters. In many cases, linear functions of t, and in exceptional cases, such as (71) and others covered in Bell (2015), even higher degree polynomials in t, are eliminated by β_s(B) and preserved by β_sa(B).
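These annihilation and preservation properties can be checked numerically; a small sketch (our own, with numpy) applies the differencing factors named above to deterministic sequences:

```python
import numpy as np

# Sketch: confirm what the differencing factors annihilate.
t = np.arange(0, 20, dtype=float)

# (1 - B)^4 annihilates a cubic: difference four times.
cubic = t**3
assert np.allclose(np.diff(cubic, n=4), 0)

# A single difference of t^3 is the quadratic 3t^2 - 3t + 1:
d1 = np.diff(cubic)                    # (1 - B)t^3 evaluated at t = 1, 2, ...
tt = t[1:]
assert np.allclose(d1, 3*tt**2 - 3*tt + 1)

# (1 + B) annihilates the stable q = 2 seasonal a(-1)^t:
s = 5.0 * (-1.0)**t
assert np.allclose(s[1:] + s[:-1], 0)  # (1 + B) a(-1)^t = 0

# (1 + B)^2 also annihilates the linearly growing a t(-1)^t:
g = 5.0 * t * (-1.0)**t
assert np.allclose(g[2:] + 2*g[1:-1] + g[:-2], 0)
```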

Auto- and cross-correlations of stationary transforms of q = 2 SRW components
We continue the exploration of properties of estimated components, including how they differ from properties of the unobserved components, now for the ARIMA case, using the q = 2 SRW three-component decomposition. We examine the most accessible properties, those of autocorrelations and cross-correlations of the minimally differenced component estimates, with numerical results in Table 1. These can be used as diagnostics. After setting the scene, we begin with an important difference between the ARIMA case and the stationary case. Consider an ARIMA Z_t whose pseudo-s.d. has a three-component decomposition associated with a seasonal-nonseasonal differencing polynomial factorization δ(B) = δ_s(B)δ_n(B). This makes clear how each estimate is being overdifferenced by δ(B): δ_s(B)ŝ_t, δ_n(B)p̂_t, δ_n(B)ŝa_t and û_t are already stationary. In SEATS' output, each of these correctly (minimally) differenced estimates is called the stationary transformation of its estimate, and the same term is used with unobserved components. The right hand sides of the Sect. 8.1 model formulas provide examples we will use. The important difference from the stationary case: from (72) and (73), in contrast to E Ŝ_t N̂_t > 0 for stationary SAR(1) decompositions, the cross-covariance values of magnitude σ_a²/16 in (72) and (73) show otherwise for the q = 2 SRW estimates. SEATS has diagnostics, illustrated in Maravall (2003) and Maravall and Pérez (2012), to detect when sample-moment correlation and cross-correlation estimates, calculated from the stationary transform series of the estimates, differ significantly from the correlations and cross-correlations expected from the MA models of the stationary transforms. SEATS does not yet provide theoretical model-based cross-correlations between the stationary transforms of ŝ_t and ŝa_t. Its method for calculating cross-correlations is shown in the Appendix of Maravall (1994).
For components other than sa_t, Table I of Maravall and Pierce (1987) provides lag 1-3 autocorrelation results for the stationary transforms of the components and estimates of the q = 2 SRW model (64). It has typographical errors (missing minus signs). Table 1 is a corrected table, extended with autocorrelations of the stationary transforms of sa_t and ŝa_t, all calculated from the MA formulas in Sect. 8.1 and Appendix 2.

Reciprocal smoothing properties of seasonal and trend estimates
Consider a finite detrended series, Z − p̂ = ŝ + û in vector notation. In the X-11 method, ŝ is estimated from the detrended series, see Ladiray and Quenneville (2001). With AMBSA estimates, Theorem 1 and Remark 4 of McElroy and Sutcliffe (2006) for ARIMA Z_t show that, in addition to being the MMSE linear estimate of s from Z, ŝ is also the MMSE linear function of Z − p̂ for estimating s. Reciprocally, the estimated trend p̂ is the MMSE linear estimate of p from the seasonally adjusted series Z − ŝ (a parallel to how the "final" X-11 trend is estimated). Under correct model assumptions, their paper also provides convergence results to the MMSE estimates of trend and seasonal for iterations starting with a non-MMSE estimate of trend (or seasonal).

Airline model results: the role of Θ
We continue with (79) and special cases thereof to demonstrate important aspects of AMBSA. Hillmer and Tiao (1982) show that, when Θ ≥ 0, the airline model is admissible (i.e., has a pseudo-s.d. decomposition) for all −1 ≤ θ < 1. (Admissible decompositions exist for negative Θ ≥ −0.3 for a Θ-dependent interval of θ values.) We display the influence of Θ ≥ 0 on the effective length of the filter through the rate of decay of its largest coefficients away from the time point of estimation. In general, for the estimate at time t, the observation Z_t gets the largest coefficient. The coefficients of the same-calendar-month values Z_{t±12k} in the observation interval 1 ≤ t ≤ n decrease effectively exponentially in k. The dominating effect of Θ is most clearly observed with the concurrent seasonal adjustment filters.
Decay rate differences lead to a frequently observed feature of seasonal adjustment filters of all seasonal ARIMA models with a seasonal moving average factor: the greater the value of Θ, the less localized and adaptive, but more stable, the estimate will be. When Θ is small, the filters are quite localized and adaptive but more likely to have large revisions when data at future times n + 12, n + 24, etc. are first adjusted. Since β_s(B) = 1 − β_sa(B), the magnitude effect of Θ on the seasonal filter coefficients at non-zero lags is the same. Figures 7, 8 and 9 show the calendar month seasonal factors. When θ = 0.9 and Θ = 0.9 (Fig. 9), the effective length of the filter is the length of the data span (also for somewhat larger n > 97); such a filter resists domination by rapid changes of Z_t values, but revisions from Z_{n+12}, Z_{n+24}, … need not diminish in size over time and could cumulatively be large, as the discussion of Fig. 10 explains. By contrast, the nearly stable Θ = 0.9 same-calendar-month factors of Fig. 11 produce a much less smooth adjustment, with residual seasonality visible in the later years against the background of the original series.

Simplistic autocorrelation comparison criteria for smoothness and nonsmoothness
We begin our autocorrelation-based consideration of the smoothing properties of estimates. The simplistic definitions used will support a systematic analysis that provides insight regarding seasonal decompositions.
Given two stationary series X_t and Y_t, we say that X_t is smooth if ρ_1^X > 0. If also ρ_1^X > ρ_1^Y, then X_t is smoother than Y_t. If ρ_1^Y < 0, the series Y_t is nonsmooth. A smooth series is therefore smoother than all white noise series and all nonsmooth series. If Y_t is nonsmooth and ρ_1^Y < ρ_1^X holds, then Y_t is more nonsmooth than X_t. A nonsmooth series is thus more nonsmooth than all white noise series and all smooth series. To examine whether visual impressions of smoothness or nonsmoothness align with the conclusions of these formal criteria, differences of scale must be accounted for; see Sects. 13.2 and 13.3 and associated figures for illustrations.
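A minimal sketch of these criteria (the helper names and the example series are ours): classify by the sign of the sample lag-1 autocorrelation.

```python
# Sketch of the simplistic smoothness criteria: a series is "smooth" if its
# lag-1 autocorrelation is positive, "nonsmooth" if negative.

def lag1_autocorr(x):
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    return cov / var

def classify(rho1):
    if rho1 > 0:
        return "smooth"
    if rho1 < 0:
        return "nonsmooth"
    return "neither (white-noise-like)"

# A slowly varying series is smooth; an alternating one is nonsmooth.
slow = [1.0, 1.2, 1.4, 1.5, 1.4, 1.2, 1.0, 0.8]
alt = [1.0, -1.0, 0.9, -1.1, 1.1, -0.9, 1.0, -1.0]
print(classify(lag1_autocorr(slow)))   # smooth
print(classify(lag1_autocorr(alt)))    # nonsmooth
```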
We make smoothness/nonsmoothness comparisons between the series Z_t and its intermediate-time/bi-infinite-data component estimates. This is done for the two SAR(1) component estimates in Sects. 13.2 and 13.3. In the nonstationary cases, the comparisons are made for the stationary decompositions shown, which represent MMSE optimal three- and two-component decompositions of δ(B)Z_t. For example, the MMSE property of ŝ_t, that ŝ_t − s_t is uncorrelated with Z_τ for all t and τ, immediately yields that δ(B)(ŝ_t − s_t) is uncorrelated with δ(B)Z_τ for all t and τ. When a series considered is a calendar month subseries, the autocorrelations considered are the seasonal autocorrelations in the time scale of the original Z_t. We sometimes say that the conclusion of smoother is strengthened if the relevant autocorrelation at lag 2 (and perhaps consecutive higher lags) is also positive. This calls attention to the stronger smoothness properties of monthly trends and calendar month seasonal factors or their stationary transforms. Other intensifiers are only suggestive and will not be formally defined.

SAR(1): increased nonsmoothness of N̂_t
By direct calculation from (33), or from its seasonal MA(1) model (56), the autocorrelations of the intermediate-time N̂_t follow. Whereas the calendar month series of Z_t are smooth because ρ_q = Φ > 0, here ρ_q^N̂ < 0, so N̂_t has more nonsmooth calendar month series than Z_t. Of course, the calendar month subseries of N̂_t are less variable than those of Z_t in the sense that, from (56), γ_0^N̂ = (1 + Φ)^{−4}(1 + Φ²)σ_a² = (1 − Φ)(1 + Φ²)(1 + Φ)^{−3} γ_0 is less than γ_0. Consequently, the scale of N̂_t is related to that of Z_t through the scale reduction factor {(1 − Φ)(1 + Φ²)(1 + Φ)^{−3}}^{1/2}, which is approximately 0.113 for Φ = 0.95. This factor quantifies the diminished scale of oscillations about the level value 0 seen for the intermediate years in Fig. 3. Figure 13 shows calendar month graphs of the nonsmooth component N̂_t and the scale-reduced Z_t, the latter downscaled to have N̂_t's standard error. N̂_t is visibly less smooth, in alignment with the formal conclusion.
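As a check, the scale reduction factor can be evaluated directly (a sketch; the function name is ours):

```python
import math

# Sketch: the standard-deviation reduction factor for the SAR(1) white noise
# estimate, {(1 - phi)(1 + phi^2)(1 + phi)^{-3}}^{1/2}, at phi = 0.95.
def scale_reduction(phi):
    var_ratio = (1 - phi) * (1 + phi**2) / (1 + phi)**3
    return math.sqrt(var_ratio)

print(round(scale_reduction(0.95), 3))   # 0.113
```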

SAR(1): greater calendar month smoothness of Ŝ_t
Directly calculating the seasonal lag autocovariances of Z_{t−q} + 2Z_t + Z_{t+q} and using (3), we obtain from (38) the variance γ_0^Ŝ and the nonzero autocovariances γ_{kq}^Ŝ, k ≥ 1, of Ŝ_t. Division by γ_0^Ŝ yields the intermediate-time autocorrelations (84).
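Because the weights (1, 2, 1) at lags −q, 0, q convolve with themselves to (1, 4, 6, 4, 1), the lag-kq autocovariance of Y_t = Z_{t−q} + 2Z_t + Z_{t+q} collapses, for k ≥ 2, to γ_0 Φ^{k−2}(1 + Φ)⁴ under the SAR(1) autocovariances γ_{kq} = Φ^{|k|}γ_0 (zero off the seasonal lags). A small sketch (our own numerical check, not code from the paper) confirms this:

```python
import numpy as np

# Sketch (our own check): the SAR(1) has gamma_{kq} = phi^|k| * gamma_0 at
# seasonal lags and zero elsewhere, so the lag-kq autocovariance of
# Y_t = Z_{t-q} + 2 Z_t + Z_{t+q} uses the weights
# conv([1,2,1], [1,2,1]) = [1, 4, 6, 4, 1] and, for k >= 2, equals
# gamma_0 * phi^(k-2) * (1 + phi)^4.
phi, g0 = 0.95, 1.0

def gamma_Y(k):
    """Lag-kq autocovariance of Y_t, k >= 2, by direct summation."""
    w = np.convolve([1, 2, 1], [1, 2, 1])   # [1, 4, 6, 4, 1]
    lags = np.arange(k - 2, k + 3)          # seasonal lags k-2, ..., k+2
    return float(np.sum(w * g0 * phi ** np.abs(lags)))

for k in (2, 3, 5):
    assert abs(gamma_Y(k) - g0 * phi**(k - 2) * (1 + phi)**4) < 1e-9
```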

Smoothness properties of q = 2 SRW component estimates
For q = 2 SRW smoothness results, we require the autocorrelations of the fully differenced components of (80); these formulas yield the autocorrelations of Table 2. In Table 2, ρ_2 > 0 indicates a differenced estimate with smoother calendar month series than δ(B)Z_t, whereas ρ_1 < 0 indicates a differenced estimate whose monthly series is more nonsmooth than monthly δ(B)Z_t, etc. Regarding the monthly series: as determined by ρ_1, δ(B)ŝa_t and δ(B)p̂_t are smoother than δ(B)Z_t, with strengthened smoothness. The seasonal's δ(B)ŝ_t is more nonsmooth than δ(B)Z_t. Regarding calendar month series: the seasonal adjustment's δ(B)ŝa_t and especially the irregular's δ(B)û_t are more nonsmooth than δ(B)Z_t, while the seasonal's δ(B)ŝ_t and the trend's δ(B)p̂_t are smoother than δ(B)Z_t, with strengthened smoothness.
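As one illustration of how such autocorrelations follow from the Sect. 8.1 formulas, a sketch (our own algebra check, not values read from Table 2) for the fully differenced irregular estimate:

```python
import numpy as np

# Sketch (our own check): from the irregular filter
# beta_u(B) = -(1/8) B^{-2} (1+B)^2 (1-B)^2 and (1 - B^2) Z_t = a_t,
# the fully differenced irregular estimate is
#   (1 - B^2) u_hat_t = -(1/8)(B^{-2} - 2 + B^2) a_t,
# an MA with coefficients proportional to (-1, 0, 2, 0, -1).
c = np.array([-1.0, 0.0, 2.0, 0.0, -1.0]) / 8.0
gamma = np.correlate(c, c, mode="full")[len(c) - 1:]   # lags 0, 1, 2, ...
rho = gamma / gamma[0]
print(rho[1], rho[2])   # 0 at lag 1; -2/3, strongly negative, at lag q = 2
```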

Empirical lag 12 autocorrelation results of McElroy (2012) for seasonal adjustments
With a set of 88 U.S. Census Bureau economic indicator series for which the airline model was selected over alternatives, McElroy (2012) found that all but one had negative lag 12 sample autocorrelation in the fully differenced seasonally adjusted series, δ(B)ŝa_t in our notation. This negative autocorrelation is statistically significant at the 0.05 level for 46 of the series. From the perspective of detrended calendar month series, which seem always to be visually smoothed by seasonal factor estimates (in logs when appropriate for AMBSA), this should not be a surprising result: removal of a smooth component causes loss of smoothness, which negative autocorrelation can formally identify.

Correct model results for various θ, Θ and component estimates
For any choices of −1 < θ < 1 and 0 ≤ Θ < 1, SEATS outputs the coefficients of the ARIMA or ARMA models of ŝ_t, ŝa_t = Z_t − ŝ_t, p̂_t, and û_t, with innovation variances given in units of σ_a² (so σ_a² = 1). With this information, W-K formulas can be used to obtain models for δ(B)ŝ_t, δ(B)p̂_t and δ(B)û_t. From these models, the autocorrelations needed for smoothness analyses like those presented below can be calculated. The simplest differenced estimate's model, that of δ(B)û_t, is derived in Appendix 1 as an illustration, after that of û_t. We start with the calendar month series.

Seasonal autocorrelations and calendar month smoothness
Results are presented in Tables 4, 5, 6, 7 and 8 for comparison with the autocorrelations of δ(B)Z_t in Table 3 from (86). Our formal relative smoothness criterion involves first seasonal lag autocorrelations; some higher lag results are presented for a broader perspective. From (86), ρ_12 < 0 for Θ > 0, so δ(B)Z_t will always have nonsmooth calendar month subseries (Table 3). Here is a summary of the tabled seasonal lag results. Tables 4, 5 and 6 show that, in contrast to δ(B)Z_t from Table 3, the series δ(B)ŝ_t from the seasonal estimates ŝ_t is positively correlated at all seasonal lags considered, 12, 24 and 36, often strongly: δ(B)ŝ_t has substantially smoother calendar month subseries than δ(B)Z_t (Table 4); this calendar month smoothing is reinforced, moderately to strongly, at two years' remove for Θ ≥ 0.3 (Table 5); and it is further reinforced at three years' remove, though not as strongly as at two (Table 6). Table 7 shows that the opposite holds for the seasonally adjusted series δ(B)ŝa_t = δ(B)Z_t − δ(B)ŝ_t, which is more nonsmooth than δ(B)Z_t, and Table 8's lag 12 results for δ(B)û_t are similar. Both have more negative autocorrelations at lag 12 than δ(B)Z_t and positive autocorrelations at lag 24 (and negligible autocorrelations at lag 36, not shown), increasing the tendency for more changes of direction than δ(B)Z_t.
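The Table 3 pattern for δ(B)Z_t can be reproduced from the MA form of the differenced airline model, (1 − θB)(1 − ΘB¹²)a_t, whose lag 1 and lag 12 autocorrelations reduce to −θ/(1 + θ²) and −Θ/(1 + Θ²); a sketch (our own check, not the paper's code):

```python
import numpy as np

# Sketch (our own check): autocorrelations of the differenced airline model
# series delta(B) Z_t = (1 - theta B)(1 - Theta B^12) a_t, computed from the
# MA coefficient sequence by direct convolution.
def airline_diff_acf(theta, Theta, max_lag=12):
    ma = np.zeros(14)
    ma[0], ma[1], ma[12], ma[13] = 1.0, -theta, -Theta, theta * Theta
    gamma = np.correlate(ma, ma, mode="full")[len(ma) - 1:]  # lags 0, 1, ...
    return gamma[:max_lag + 1] / gamma[0]

rho = airline_diff_acf(theta=0.5, Theta=0.6)
# Lag 1: -theta/(1 + theta^2) = -0.4;  lag 12: -Theta/(1 + Theta^2)
assert abs(rho[1] - (-0.5 / 1.25)) < 1e-12
assert abs(rho[12] - (-0.6 / 1.36)) < 1e-12
assert np.allclose(rho[2:7], 0)   # lags 2-6 vanish, as in (86)
```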

Lag 1 autocorrelation and monthly smoothness results
Familiarly, an estimated trend visually smooths a seasonally adjusted monthly series. We examined the lag 1-12 autocorrelations (not shown) of the differenced trend estimates δ(B)p̂_t for the θ, Θ under consideration. At lags 1-6, all are positive. At lags 7-11, some or all can have either sign. Thus δ(B)p̂_t will have at least a half-year of resistance to oscillation. At lag 12, all are negative. This is in strong contrast to δ(B)Z_t, which, among lags 1-6, has a non-zero autocorrelation only at lag one, with a negative value indicating nonsmoothness (except when θ < 0), see (86). For the differenced irregular component, Tables 9 and 10 below show that δ(B)û_t is more nonsmooth than the nonsmooth δ(B)Z_t: monthly δ(B)Z_t is nonsmooth for θ > 0 and smooth for θ < 0, while δ(B)û_t is always more nonsmooth than δ(B)Z_t, as is û_t.

Concluding remarks
The simple seasonal models considered have provided very informative formulas for two- and three-component decompositions of seasonal time series. The factored formulas for the seasonal random walk simply display the full range of differencing operators (in biannual form) of ARIMA model-based seasonal decomposition filters identified in Bell (2012, 2015). The formulas for the estimates' auto- and cross-correlations have led to new insights and results. For example, the finding of negative sample autocorrelations at the seasonal lag of the differenced seasonally adjusted series now appears as an inevitable result of removing a seasonal component whose calendar month subseries are smooth. It is not a defect of the seasonal adjustment procedure, contrary to a view expressed in some of the literature motivating McElroy (2012). For the irregular component, there are the common empirical findings, with airline and similar models, of negative sample autocorrelations, often at the first lag (see Table 11 in Appendix 1) and at the first seasonal lag of the estimated irregular component û or differenced û, as in Tables 1 and 10. These can now be anticipated from the knowledge that û can be regarded both as the detrended version of the seasonally adjusted series Z − ŝ, and as the deseasonalized version of the detrended series Z − p̂, in both cases resulting from removal of a smooth component.
The capacity to provide illuminating precise answers to many questions is a very valuable feature of ARIMA-model-based seasonal adjustment, as is its conceptual simplicity relative to nonparametric procedures, at least for adjusters with sufficient modeling background and experience. (The challenge is always to find an adequate model for the data span to be adjusted, if one exists.) Also valuable are the error variance and autocovariance measures (not accounting for sampling or model error) that AMBSA easily provides (only) for additive direct seasonal adjustments and their period-to-period changes. The latter, with log-additive/multiplicative adjustments, the most common kind, yield approximate uncertainty intervals for growth rates, quantities of special interest for real-time economic analysis.
Disclaimer Results of ongoing research are provided to inform interested parties and stimulate discussion. Any opinions expressed are those of the authors and not necessarily those of the U.S. Census Bureau or the Bank of Spain.