Journal of Economics and Finance, Volume 36, Issue 1, pp 211–225

The housing bubble in real-time: the end of innocence

Authors

    • R.F. Peláez, Finance Department, College of Business, University of Houston-Downtown

DOI: 10.1007/s12197-010-9165-4

Cite this article as:
Peláez, R.F. J Econ Finan (2012) 36: 211. doi:10.1007/s12197-010-9165-4
Abstract

Market agents suffering through unanticipated boom-bust cycles would find analytical techniques capable of serving as an early warning system extremely useful. Unobserved components models and cointegration analysis are valuable in this respect. The stylized facts from unobserved components models alone do not suffice, but, coupled with results from the Johansen cointegration test, they provided early evidence of the housing bubble and of its denouement. The paper uses real-time data vintages and shows that by 1998 the relationship between the smoothed growth rates of house prices and of per capita income was in uncharted territory. Moreover, the actual growth rates are cointegrated. This is important, as it establishes that any disequilibrium between the two becomes less tenable as its magnitude increases. By 2003, the disequilibrium was spectacular, yet it grew for another 4 years. In effect, we did not have to wait until 2008; the gruesome ending was predictable ex ante. Ironically, the greatest financial delusion of all occurred in an age that revered rationality, market efficiency, and the financial enlightenment of the too-big-to-fail (TBTF) actors. The empirical findings of this paper are a major problem for the rational expectations hypothesis and the remnants of the efficient markets hypothesis (EMH).

Keywords

Cointegration · Unobserved Components · Rationality · Market Efficiency

JEL Classification

C220 · G010 · D8

1 Introduction

The conventional view holds that bubbles are only recognizable ex post; otherwise, they would not occur (Kindleberger and Aliber 2005). However, Reinhart and Rogoff (2009), Borio and Lowe (2002), and Shiller (2000, 2003) document cases in the early stages of financial bubbles when deviations from empirical regularities signaled the looming disaster. It is therefore extremely important to develop empirical methodologies capable of identifying asset price booms before they mature into full-blown bubbles.

This paper shows that an early warning system was available that could probably have averted the greatest destruction of wealth in history. The data and analytical techniques were available in real time as the bubble inflated. As early as 2001-02, the relationship between the growth rates of house prices and of income deviated sharply from the historical pattern of more than 25 years. Moreover, the quarterly growth rates of the house price index and of per capita disposable personal income are cointegrated. Therefore, the larger the disequilibrium grew, the more certain and violent the correction had to be. Tragically, we did not have to wait until 2008 to know the denouement. The empirical findings cannot be reconciled with the rational expectations hypothesis or the efficient markets hypothesis (EMH).

By 2003, the disequilibrium was unprecedented, yet it would grow for another 4 years of delirious speculative fever. It is important to show that the crisis was not a random event like an unpredictable 100-year flood; instead, the larger the disequilibrium grew, the more predictable the disaster became. The bubble swept into the graveyard of ideas the last remaining illusion about financial market macro-efficiency, vindicating Samuelson’s (1998) dictum.

The plan of the work is as follows. Section 2 describes the data and its sources. Section 3 describes a basic structural time series model. Section 4 shows the smoothed growth rates that market participants could have used to formulate expectations about the growth of house prices in real time. Section 5 shows that the growth rates of the house price index and of disposable personal income per capita are cointegrated. The final section concludes.

2 Data

This paper is about inflation in house prices; thus, all variables are nominal. The Federal Housing Finance Agency (http://www.fhfa.gov) publishes quarterly and monthly house price indices for the USA, the nine U.S. Census divisions, the 50 states, and the District of Columbia. We use the quarterly USA HPI index. The HPI is a weighted repeat-sales index for mortgages of single-family properties. Its methodology is similar to one developed by Bailey et al. (1963) and later refined by Case and Shiller (1989). However, it differs in scope and coverage from the S&P/Case-Shiller index.1 It is worth noting that the S&P/Case-Shiller index shows an even larger drop in prices from their peak than the HPI (OFHEO 2008). Calhoun (1996) provides a detailed technical description of the HPI. With each release, the entire HPI is revised from its inception in 1975 to the present.2 Since each subsequent vintage differs from previous ones, it is important to use the actual data vintages available in historical time as the bubble inflated.

Nominal per capita disposable personal income, hereafter DPI, is the quarterly series published by the Bureau of Economic Analysis. It is released in stages: advance, preliminary, and final. This paper samples the ‘final’ version released about 3 months after the end of the reference quarter.3 The 30-year conventional mortgage rate is the average contract rate on commitments for fixed-rate first mortgages reported in the H15 Release of the Board of Governors of the Federal Reserve System. Figure 1a plots the logs of HPI (left axis), and DPI (right axis). An underlying economic relationship links the two since income determines debt-carrying capacity and is a loan qualifier.
Fig. 1  a Logs of the level variables (1975:Q1–2009:Q2). b Quarterly growth rates, %

The growth rates in Fig. 1b exhibit the noisy behavior typical of the quarterly growth rates of nominal variables. Directly computed growth rates reflect the behavior of components with different time-series properties: trend, cyclical, seasonal, and irregular. In order to observe the stylized facts of the underlying or core growth rates more clearly, it is necessary to filter out the higher-frequency irregular and cyclical components.
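As a concrete illustration, the sketch below assembles the two quarterly series and their log growth rates with pandas. The file names and column labels are hypothetical placeholders; the FHFA and BEA data would have to be downloaded separately.

```python
# Minimal sketch (hypothetical file names and column labels) of assembling the
# quarterly HPI and per capita DPI series and computing their log growth rates.
import numpy as np
import pandas as pd

hpi = pd.read_csv("fhfa_hpi_usa.csv", parse_dates=["date"], index_col="date")["hpi"]
dpi = pd.read_csv("bea_dpi_per_capita.csv", parse_dates=["date"], index_col="date")["dpi"]

# Log levels, aligned on a quarterly (quarter-start) calendar.
data = pd.DataFrame({"log_hpi": np.log(hpi), "log_dpi": np.log(dpi)}).asfreq("QS")

# Quarterly growth rates Dlog(HPI) and Dlog(DPI), in percent (cf. Fig. 1b).
growth = 100 * data.diff().dropna()
growth.columns = ["dlog_hpi", "dlog_dpi"]
print(growth.tail())
```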

3 Structural time series models (STMs)

Identifying the stylized facts of time series is an important part of research in economics; see, e.g., Harvey (1989, 1997), Hodrick and Prescott (1980), Nelson and Plosser (1982), and Blanchard and Fisher (1989). Various techniques are available to separate the high-frequency components of a series (seasonal, irregular, and cyclical) from the more slowly evolving trend and its slope. The empirical analysis of business cycles provided the initial motivation for much of the early work. For example, it is important to know whether a change in productivity is a temporary blip or a more permanent development. The Hodrick-Prescott (HP) filter was popular, but it may produce spurious cyclical behavior and distortion near the sample end-points (Cogley and Nason 1995; Baxter and King 1999; Harvey and Jaeger 1993). Baxter and King proposed a band-pass filter to extract the business-cycle components of macroeconomic activity. The band-pass filter approximates a two-sided moving average that retains components of a quarterly time series with periodic fluctuations between 6 and 32 quarters, while removing components at lower and higher frequencies.
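Both filters have standard implementations in statsmodels; the sketch below applies them to the log HPI series, assuming the data DataFrame from the earlier sketch. It illustrates the filters discussed above rather than the paper's own estimation.

```python
# Sketch: HP and Baxter-King band-pass filters applied to log(HPI).
# 'data' is the assumed quarterly DataFrame of log levels built earlier.
from statsmodels.tsa.filters.hp_filter import hpfilter
from statsmodels.tsa.filters.bk_filter import bkfilter

# Hodrick-Prescott filter: lambda = 1600 is the standard choice for quarterly data.
hp_cycle, hp_trend = hpfilter(data["log_hpi"], lamb=1600)

# Baxter-King band-pass: retain fluctuations of 6-32 quarters; the usual K = 12
# lead/lag truncation drops 12 observations at each end of the sample.
bk_cycle = bkfilter(data["log_hpi"], low=6, high=32, K=12)
```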

In contrast to univariate filtering techniques, STMs have greater flexibility, allow inclusion of explanatory variables, and are useful in forecasting.4 A basic STM views a time series yt as the sum of four components, trend μt, cycle ψt, seasonal γt, and a white noise irregular term, εt,
$$ y_t = \mu_t + \psi_t + \gamma_t + \varepsilon_t, \qquad \varepsilon_t \sim \mathrm{NID}\left(0, \sigma_{\varepsilon}^{2}\right). $$
(1)
The cycle dissipates without lasting impact, while the trend indicates the long-term movement of the series. In the most general case, a local linear trend model, the trend consists of level and slope both evolving stochastically as random walks,
$$ \mu_t = \mu_{t-1} + \beta_{t-1} + \eta_t $$
(2)
$$ \beta_t = \beta_{t-1} + \zeta_t $$
(3)
βt is the slope or one-period growth in μt, while the disturbances ηt and ζt are normally and independently distributed with variances, \( \sigma_{\eta }^2 \) and \( \sigma_{\zeta }^2 \) respectively. The presence of ηt allows the level to shift up or down, while ζt allows the slope to change. The model in Eqs. (1)–(3) is essentially a time-varying parameter model.

Less general models arise depending on the variances of the disturbance terms. If \( \sigma_{\eta}^2 = 0 \) and \( \sigma_{\zeta}^2 > 0 \), we have a smooth trend model with a stochastic slope. Alternatively, if \( \sigma_{\eta}^2 > 0 \) and \( \sigma_{\zeta}^2 = 0 \), the trend is a random walk with drift. Finally, a deterministic trend emerges if both \( \sigma_{\eta}^2 \) and \( \sigma_{\zeta}^2 \) equal zero. Model selection is data dependent and is based on a measure of goodness of fit, such as the prediction error variance, or R2 with respect to first differences, \( R_d^2 \).
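These nested specifications can be compared directly in code. The sketch below fits each trend variant to a log-level series with statsmodels' UnobservedComponents; it is a minimal illustration that ranks fits by AIC rather than by the prediction error variance or \( R_d^2 \) used in the paper.

```python
# Sketch: the nested trend models implied by Eqs. (1)-(3), fitted with statsmodels.
# 'data' is the assumed DataFrame of quarterly log levels; y is one of its columns.
from statsmodels.tsa.statespace.structural import UnobservedComponents

y = data["log_dpi"]

specs = [
    "local linear trend",       # sigma_eta^2 > 0 and sigma_zeta^2 > 0
    "smooth trend",             # sigma_eta^2 = 0, sigma_zeta^2 > 0
    "random walk with drift",   # sigma_eta^2 > 0, sigma_zeta^2 = 0
    "deterministic trend",      # both disturbance variances equal zero
]

for level in specs:
    res = UnobservedComponents(y, level=level).fit(disp=False)
    print(f"{level:25s}  AIC = {res.aic:9.2f}")
```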

A rich set of dynamic models results from including explanatory variables in an STM; such models combine time series and regression. For example, including the mortgage rate as an explanatory variable in a model of the HPI allows the level to reflect the mortgage rate, while the stochastic trend captures the effects of changes in tastes, government policy, expectations, or simply the animal spirits of market participants.

The HPI is not seasonally adjusted; hence, we tested for possible seasonal effects within a local linear trend model, but the seasonal factors were not statistically significant. The X-12-ARIMA program of the U.S. Department of Commerce confirmed the absence of verifiable seasonality, so the selected local linear trend model did not include seasonal factors. STAMP 8 of Koopman et al. (2007) was used in this part of the work. The model of log(HPI) included the contemporaneous logged value of the mortgage rate as an explanatory variable. In all cases, the coefficient of the mortgage rate was negative and statistically significant. A univariate local linear trend model was chosen for log(DPI).
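The paper estimates these models in STAMP 8; a rough statsmodels analogue of the selected log(HPI) specification is sketched below. The mortgage-rate series mort is an assumed input (a quarterly average of the H15 30-year rate), and the state-space treatment of the regression coefficient differs in detail from STAMP's.

```python
# Sketch of the selected HPI model: local linear trend for log(HPI) with the logged
# 30-year mortgage rate as a contemporaneous explanatory variable, no seasonal term.
# 'data' and 'mort' (a quarterly H15 mortgage-rate Series) are assumed inputs.
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

hpi_model = UnobservedComponents(
    endog=data["log_hpi"],
    level="local linear trend",   # stochastic level and slope
    exog=np.log(mort),            # coefficient expected to be negative
)
hpi_res = hpi_model.fit(disp=False)
print(hpi_res.summary())
```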

4 Expectations and the smoothed growth rates of HPI and DPI

Local linear trend models were estimated with successive data vintages from 2003:Q1 through 2006:Q1 at annual intervals, in order to extract the smoothed growth rates of house prices and income that real-time agents would have seen as the bubble inflated. A vintage shows the actual data available in real time. Only the graphical output is shown below, because of the number of models and because estimation in STAMP produces a large amount of output. However, all results are available from the author.

Figure 2 shows the smoothed slopes (growth rates) obtained with the 2003:Q1 vintage (the HPI is from the release of February 2003 containing quarterly data through 2002:Q4, and the DPI is from the March 2003 release). Thus, in March 2003 a real-time investor would have noticed four stylized facts. First, during almost a quarter of a century (1975:Q1–1998:Q1) the smoothed growth rate of house prices remained below that of income; the average spread was negative (−1.27 percentage points). Second, both growth rates decreased from 1979 until the early 1990s. Third, beginning in 1990 the growth rate of house prices, although still below that of DPI, began to increase more rapidly. Fourth, in 1998:Q2 the spread became positive for the first time and then widened for the next 4 years as the growth of house prices outpaced income growth. In 2002:Q4, the spread reached a hitherto unprecedented 1.62 percentage points. Notably, this anomaly occurred against the backdrop of stagnating income growth and falling savings rates. Although the HPI model includes the mortgage rate as an explanatory variable, similar results obtain if the mortgage rate is omitted. Evidently, declining lending standards and price feedback effects fueled expectations of price increases. Participants ignored the significance of the fact that, contrary to the historical norm, both growth rates had been moving in opposite directions since 1998. The saturnalia of the smart money was just beginning; after a running start, it would push housing prices to levels in excess of any rationally supportable base.
Fig. 2  Smoothed annualized growth rates (%), 2003:Q1 vintage
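The spreads plotted in Figs. 2, 3, 4 and 5 can be recovered from fitted state-space models. The sketch below, which assumes the data DataFrame and the hpi_res fit from the earlier sketches, extracts the smoothed slopes, annualizes them, and forms the HPI minus DPI spread; it approximates the STAMP output rather than reproducing it.

```python
# Sketch: smoothed slopes (trend growth), annualized, and the HPI-DPI spread.
# 'data' and 'hpi_res' are the assumed objects from the earlier sketches.
import pandas as pd
from statsmodels.tsa.statespace.structural import UnobservedComponents

dpi_res = UnobservedComponents(data["log_dpi"], level="local linear trend").fit(disp=False)

# results.trend["smoothed"] holds the smoothed slope beta_t; a quarterly log
# change times 400 gives an annualized percentage growth rate.
hpi_growth = 400 * pd.Series(hpi_res.trend["smoothed"], index=data.index)
dpi_growth = 400 * pd.Series(dpi_res.trend["smoothed"], index=data.index)

spread = hpi_growth - dpi_growth  # turns positive around 1998 in the paper's vintages
print(spread.loc["1998":"2003"])
```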

One year later, in 2004:Q1, a real-time agent would re-estimate each STM with a new vintage to obtain the smoothed slopes shown in Fig. 3. Notice that the spread is now 5.01 percentage points. The growth rate of house prices points to heaven, while income growth traces a 6-year decreasing trajectory. The background of Fig. 3 includes declining lending standards and a mountain of debt resting on a stagnating income base, all superimposed on the lowest personal savings rate in recorded U.S. history. Figure 3 points to the severity of the looming crisis. Evidently, the magnificently informed rational agents dismissed all danger signals as vestiges of an irrelevant archaic age.
Fig. 3  Smoothed annualized growth rates (%), 2004:Q1 vintage

The view when the curtain rises for the next act would instill terror into anyone familiar with the stylized facts. The spread had not been positive for a quarter of a century prior to 1998, but in 2005:Q1 it is an astonishing 5.63 percentage points (see Fig. 4). Just then, our TBTF investors, exhibiting a remarkable want of insight, increased their frantic acquisition of mortgage-backed securities and their derivatives. The next year, the 2006:Q1 vintage provided the incredible picture in Fig. 5, where the smoothed growth rate of the HPI exceeds that of DPI by 6.62 percentage points. Shortly afterwards, the bubble ruptured.
Fig. 4  Smoothed annualized growth rates (%), 2005:Q1 vintage

Fig. 5  Smoothed annualized slopes (%), 2006:Q1 vintage

Christopher Dodd (2007), Chairman of the U.S. Senate Committee on Banking, Housing, and Urban Affairs, indicated that the regulatory agencies first noticed credit standards deteriorating late in 2003. Historically, the borrower’s ability to repay framed the lending decision, but this age-old practice receded as predatory lenders pushed interest-only loans, Alt-A loans, and option-ARM loans on wage earners, elderly families on fixed incomes, lower-income families, and others eager to “invest” in real estate.

It is worth considering whether the smoothed growth rates shown in Figs. 2, 3, 4 and 5 are an artifact of the local linear trend model. Evidently, this is not the case, as the band-pass filter yields similar results. Once more, we use the 2006:Q1 data vintage, but now Fig. 6 shows the smoothed slopes obtained with the band-pass filter of Baxter and King (1999) applied to the log level variables. In 2005:Q4, the growth rate of the HPI exceeded that of DPI by 6.2 percentage points in Fig. 6, as compared to an excess of 6.6 percentage points in Fig. 5. Moreover, the crossover points in the two figures are nearly identical: 1997:Q4 in Fig. 6 versus 1998:Q2 in Fig. 5.
Fig. 6  Band-pass retained slopes, 2006:Q1 vintage

5 Testing for cointegration

A finding of cointegration between the actual growth rates Dlog(HPI) and Dlog(DPI) in Fig. 1b explodes the notion that market participants behaved rationally.5 Testing for a unit root is the first step in testing for cointegration. It may appear unlikely for the growth rates to be integrated of order one, I(1), as this requires that the level variables be integrated of order two, I(2). Nevertheless, Juselius (2008) notes that nominal variables in levels often exhibit I(2) behavior. A non-stationary series Xt is integrated of order one, I(1), if its first difference (Xt − Xt−1) is stationary, I(0). The order of integration is the number of times that a non-stationary series must be differenced to obtain a stationary I(0) series. Stationarity also occurs in fractionally integrated I(d) series in which the order of integration, d, is less than 0.5. Besides differencing, other transformations may induce stationarity. In the context of this paper, if the growth rates Dlog(HPI) and Dlog(DPI) are non-stationary due to unit roots, there may exist a cointegration vector such that Dlog(HPI) − βDlog(DPI) is stationary. Cointegration vectors are of great interest because short-run disequilibria between cointegrated variables are temporary, and the long-run relationship is one of equilibrium.
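Before turning to the formal tests, a quick supplementary check of whether Dlog(HPI) − βDlog(DPI) is stationary can be run with the Engle-Granger two-step test in statsmodels. This is not the paper's procedure (the Johansen test is used below); it is a rough first pass, assuming the growth DataFrame from the earlier sketch.

```python
# Supplementary sketch (not the paper's method): Engle-Granger residual-based test
# of whether Dlog(HPI) - beta*Dlog(DPI) is stationary for an OLS-estimated beta.
from statsmodels.tsa.stattools import coint

t_stat, p_value, crit = coint(growth["dlog_hpi"], growth["dlog_dpi"], trend="c")
print(f"Engle-Granger t = {t_stat:.2f}, p = {p_value:.3f}")  # small p suggests cointegration
```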

The presence of a unit root in a time series is central to the issue of the persistence of shocks, i.e., the effect of a current shock on the forecast of the series. A large literature develops tests of the null hypothesis of a unit root. Initially, the Dickey and Fuller (1979), augmented Dickey and Fuller (1981), and Phillips and Perron (1988) tests were popular, but Cochrane (1991) and others warned about drawing strong inferences from those tests due to their low power. Maddala and Kim (2000) more bluntly state that the Dickey-Fuller and Phillips-Perron tests should no longer be used. A related critique is that the augmented Dickey-Fuller (ADF) test often fails to reject the unit root null when the time series is fractionally integrated (see, e.g., Diebold and Rudebusch 1991; Hassler and Wolters 1994; Lee and Schmidt 1996).

In recent years, significantly more powerful tests have been developed; see, e.g., Elliott et al. (1996), Perron and Ng (1996), and Ng and Perron (2001). Elliott, Rothenberg, and Stock (ERS) show via Monte Carlo experiments that local Generalized Least Squares (GLS) detrending of the data, together with a data-dependent lag-length selection procedure, yields substantial power improvements over the widely used ADF test. Their DFGLS test in effect supersedes the ADF test. Perron and Ng (1996) developed modified versions (MZα, MZt, MSB, and MPt) of the Phillips and Perron (1988) Zα and Zt tests, Bhargava’s (1986) R1 test, and the feasible Point Optimal test of Elliott et al. (1996). Ng and Perron (2001) extended those four tests to allow for GLS detrending of the data, and introduced a class of data-dependent Modified Information Criteria for selecting the lag length of the autoregressive process. These refinements yield tests with desirable size and power properties (Ng and Perron 2001).
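Two of the tests in Table 1 have standard open-source implementations; the sketch below runs the ADF test from statsmodels and the DFGLS test from the arch package on the growth rates, assuming the growth DataFrame from the earlier sketch. The Ng-Perron M-tests and the Modified AIC lag selection used in the paper are not available in these libraries, so this is only a partial counterpart to Table 1.

```python
# Sketch: ADF (statsmodels) and GLS-detrended Dickey-Fuller (arch) tests on the
# growth rates, constant-only case; use regression="ct" / trend="ct" for a trend.
from statsmodels.tsa.stattools import adfuller
from arch.unitroot import DFGLS

for name, series in [("Dlog(HPI)", growth["dlog_hpi"]),
                     ("Dlog(DPI)", growth["dlog_dpi"])]:
    adf_stat, adf_p, *_ = adfuller(series, regression="c", autolag="AIC")
    dfgls = DFGLS(series, trend="c")
    print(f"{name}: ADF = {adf_stat:.2f} (p = {adf_p:.2f}), "
          f"DFGLS = {dfgls.stat:.2f} (5% CV = {dfgls.critical_values['5%']:.2f})")
```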

Table 1 presents the results of the ADF, DFGLS, and Ng and Perron (2001) tests. These tests reject in the lower tail, thus rejection of the unit root null hypothesis requires a test statistic smaller than an appropriate critical value shown in parentheses. Since all of the test statistics exceed the 5% critical values, it is fitting to characterize the growth rates, Dlog(HPI) and Dlog(DPI), as unit root processes.
Table 1

Test statistics and 5% critical values, in parentheses, for tests of the unit root null hypothesis

                      Dlog(HPI)                         Dlog(DPI)
Exogenous variables:  Constant        Const. & trend    Constant        Const. & trend
ADF                   −1.72 (−2.88)   −2.25 (−4.03)     −1.26 (−2.88)   −2.81 (−3.44)
DFGLS (CV 5%)         −1.05 (−1.94)   −2.45 (−2.99)      0.71 (−1.94)   −1.89 (−3.00)
MZα (CV 5%)           −5.06 (−8.1)    −11.16 (−17.3)     0.48 (−8.1)    −0.56 (−17.3)
MZt (CV 5%)           −1.35 (−1.98)   −2.33 (−2.91)      0.82 (−1.98)   −0.42 (−2.91)
MSB (CV 5%)            0.27 (0.23)     0.21 (0.168)      1.72 (0.23)     0.75 (0.168)
MPT (CV 5%)            5.44 (3.17)     8.35 (5.48)      170.5 (3.17)   107.8 (5.48)

All tests reject in the lower tail. The sample period is 1975Q2–2009Q4. The Modified Akaike Criterion selected an autoregressive lag of two quarters for all tests on Dlog(HPI), and a lag of 8 quarters for Dlog(DPI). All test results were obtained with EViews 5.1 of Quantitative Micro Software.

Each I(1) growth rate may meander without converging to a long-run level. However, if they are cointegrated a linear combination of the two is stationary, i.e., the growth rates move towards a state of long-run equilibrium with each other. The economic significance is that any disequilibrium between cointegrated variables becomes more unlikely as its magnitude grows. In the bubble’s context, cointegration implies predictability. Short-run disequilibria induced by a sharp rise in the growth of the HPI must eventually ebb as the growth of the HPI returns to its long-term cointegrating equilibrium with the growth rate of DPI.

The EMH has retreated from the 1970s Panglossian view of “the price is right,” to admitting that investors may behave irrationally, at times under-reacting or over-reacting and unwittingly creating bubbles. However, if these bouts of irrationality are random and thus unpredictable like a 100-year flood, the EMH allegedly survives. Fama (1998) admits that researchers have documented irrationalities and biases, but claims that critics have not shown how to exploit the irrationality of others to earn an abnormal return, i.e., irrationality has to be systematic enough to make prices predictable to the point of economic significance. According to this reasoning, a finding of cointegration is a problem for then the bubble’s inevitable and ruinous denouement would have been predictable.

Johansen (1988) developed a likelihood ratio test for cointegration; Johansen and Juselius (1990) and Johansen (1991) refined and extended the test. In a VAR system consisting of n I(1) variables, the cointegration rank r (the number of cointegration equations) is bounded by the interval 0 ≤ r ≤ (n − 1).6 The first step is to test the null of no cointegration (r = 0); the null is rejected if the likelihood ratio (LR) statistic exceeds the critical value at some significance level, say, 1%. One then tests whether there is one cointegration equation, i.e., r = 1, and continues testing ever-higher r until the null is not rejected.

The Johansen test allows for the inclusion of intercepts, linear or quadratic trends, seasonal factors, intervention variables, and exogenous variables; typically, an information criterion selects the lag length in the VAR. The finally selected VAR system included a restricted constant (since the growth rates do not have linear trends), three lags of Dlog(DPI) and Dlog(HPI), and five intervention dummy variables to account for transitory shocks.7
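A rough open-source counterpart of this test is available in statsmodels. The sketch below runs the Johansen trace test on the two growth rates, assuming the growth DataFrame from the earlier sketch; coint_johansen offers only the no-constant, constant, or trend deterministic options (not the paper's restricted constant), and the intervention dummies are omitted, so the statistics will not match Table 2 exactly.

```python
# Sketch: Johansen trace test on Dlog(HPI) and Dlog(DPI) with statsmodels.
# A VAR with three lags in levels corresponds to k_ar_diff = 2 lagged differences
# in the VECM; det_order = 0 includes a constant (not the paper's restricted one).
from statsmodels.tsa.vector_ar.vecm import coint_johansen

endog = growth[["dlog_hpi", "dlog_dpi"]]
joh = coint_johansen(endog, det_order=0, k_ar_diff=2)

for r, (trace, cvs) in enumerate(zip(joh.lr1, joh.cvt)):
    print(f"H0: rank <= {r}:  trace = {trace:6.2f},  5% CV = {cvs[1]:6.2f}")
```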

Both endogenous variables are subject to sporadic shocks or blips. Dlog(DPI) is affected by changes in tax rates, labor strikes, stimulus spending bills, inflation, and other exogenous shocks. One approach for handling exogenous shocks is to ignore them. We tried this and found one cointegrating vector linking Dlog(HPI) and Dlog(DPI) in a stationary long-run relationship, but do not report those results here. Another approach is to include intervention dummy variables unrestrictedly in the system. We follow this second approach for two reasons. First, the dummy variables improve the fit of the model by removing the impact of outliers or blips, and, second, they describe shocks to the system due to specific events; however, the results in Table 3 show that the cointegration test results do not depend on the presence of dummy variables. One shock was due to the September 11, 2001, event that dampened income growth and spending. The Federal Reserve lowered the funds rate, and as confidence returned Dlog(DPI) rebounded in 2002:Q1. It is appropriate to model this shock with a zero–one transitory intervention dummy,
$$ D01_t = 1 \ \text{for}\ t = 2001{:}4,\quad D01_t = -1 \ \text{for}\ t = 2002{:}1,\quad D01_t = 0 \ \text{otherwise}. $$
A more recent shock occurred in 2008:Q2, when Dlog(DPI) increased sharply due to disbursements under Bush’s Economic Stimulus Package; the bulk of the stimulus payments were distributed in May and June 2008. This one-time shock is modeled with,
$$ D08_t = 1 \ \text{for}\ t = 2008{:}2,\quad D08_t = -1 \ \text{for}\ t = 2008{:}3,\quad D08_t = 0 \ \text{otherwise}. $$

Other minor shocks occurred earlier. The 1980 recession stretched from January through July. In 1980:Q4, employment, average weekly hours, and real GDP rebounded strongly, inducing a blip in the growth rate of DPI that is dummied out by D804t = 1 for t = 1980:Q4, and zero otherwise. In 1992, real GDP growth averaged over 4.3%, but in 1993:Q1 growth decelerated sharply to 0.7%. The sudden downshift induced a sharp drop in the growth rate of DPI in 1993:Q1, which was dummied out via D93t = 1 for t = 1993:Q1, D93t = −1 for t = 1993:Q2, and D93t = 0 otherwise. Another blip occurred in 2005:Q1, when the growth rate of DPI dropped from 2.2% in 2004:Q4 to −0.006% in 2005:Q1; hence D05t = 1 for 2005:Q1, and D05t = 0 otherwise.
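For completeness, the sketch below builds these five intervention dummies as quarterly pandas series; the quarterly PeriodIndex is an assumption standing in for the estimation sample, and in the paper the dummies enter the CATS system unrestrictedly.

```python
# Sketch: the transitory intervention dummies described above, as pandas Series.
import pandas as pd

# Assumed quarterly index covering the estimation sample.
idx = pd.period_range("1975Q2", "2009Q4", freq="Q")

def transitory_dummy(index, pos_q, neg_q=None):
    """Zero series with +1 in pos_q and, when given, -1 in the following quarter."""
    d = pd.Series(0.0, index=index)
    d.loc[pd.Period(pos_q, freq="Q")] = 1.0
    if neg_q is not None:
        d.loc[pd.Period(neg_q, freq="Q")] = -1.0
    return d

dummies = pd.DataFrame({
    "D01":  transitory_dummy(idx, "2001Q4", "2002Q1"),
    "D08":  transitory_dummy(idx, "2008Q2", "2008Q3"),
    "D804": transitory_dummy(idx, "1980Q4"),
    "D93":  transitory_dummy(idx, "1993Q1", "1993Q2"),
    "D05":  transitory_dummy(idx, "2005Q1"),
})
print(dummies.loc["2001Q3":"2002Q2"])
```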

The cointegration work used the software package Cointegration Analysis of Time Series, also known as CATS. The results in Table 2 show that the growth rates are cointegrated. Here, the trace statistic rejects the null that the rank is zero (r = 0) (P-value = 0.000), whereas the null that r = 1 is not rejected in favor of r = 2 (P-value = 0.855).8 In short, the growth rates tend to gravitate to a long-run equilibrium, underscoring the incongruity of the bubble that nearly destroyed the U.S. economy.
Table 2

The upper part of the table describes the model; the I(1)-analysis in the lower part shows the results of the Johansen test
It is worth considering whether the finding of cointegration is unique to the above sample period or whether it holds for shorter sample periods. This paper would falter if cointegration could not be found in, say, 1998 or 1999, as the bubble began to develop. Table 3 shows the results of the Johansen test for several samples. In all cases, the null of zero rank is decisively rejected, whereas the null that r = 1 (one cointegrating vector) is not rejected.
Table 3

This table shows the Johansen cointegration trace-test statistic and its probability value for various sample periods

Effective sample   Ho: rank <= 0, Trace test [Prob.]   Ho: rank <= 1, Trace test [Prob.]
1976:1–1998:4      25.246 [0.008]**                    3.6306 [0.481]
1976:1–1999:4      25.963 [0.006]**                    3.8653 [0.445]
1976:1–2000:4      27.306 [0.004]**                    4.4147 [0.366]
1976:1–2001:4      22.360 [0.023]*                     4.5330 [0.35]
1976:1–2002:4      23.243 [0.017]*                     4.8871 [0.307]
1976:1–2003:4      25.880 [0.006]**                    5.2246 [0.269]
1976:1–2004:4      25.226 [0.008]**                    4.9677 [0.297]
1976:1–2005:4      23.199 [0.017]*                     5.2279 [0.269]
1976:1–2006:4      25.942 [0.006]**                    6.3059 [0.174]
1976:1–2007:4      25.486 [0.007]**                    5.2184 [0.27]
1976:1–2008:4      23.898 [0.013]*                     2.6197 [0.659]
1976:1–2009:4      26.040 [0.006]**                    2.2020 [0.737]

The VAR model included three autoregressive lags, and a restricted constant. Intervention dummy variables were not included. The finding of cointegration holds for all periods.

** and * denote significance at the 1% level and 5% level, respectively.
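The expanding-sample exercise in Table 3 can be approximated with the same statsmodels routine used above. The sketch below, which assumes the growth DataFrame from the earlier sketches, re-runs the trace test for samples ending in 1998:Q4 through 2009:Q4 with a constant and no dummies; critical values come from the routine's built-in tables, and the CATS p-values reported in Table 3 are not reproduced.

```python
# Sketch: expanding-sample Johansen trace tests in the spirit of Table 3
# (no intervention dummies, constant included, three-lag VAR => k_ar_diff = 2).
from statsmodels.tsa.vector_ar.vecm import coint_johansen

for end_year in range(1998, 2010):
    sample = growth.loc[:f"{end_year}-12", ["dlog_hpi", "dlog_dpi"]]
    joh = coint_johansen(sample, det_order=0, k_ar_diff=2)
    print(f"1976:1-{end_year}:4  "
          f"r=0: trace = {joh.lr1[0]:6.2f} (5% CV {joh.cvt[0, 1]:5.2f})  "
          f"r<=1: trace = {joh.lr1[1]:5.2f} (5% CV {joh.cvt[1, 1]:5.2f})")
```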

6 Conclusions

The EMH claims that it does not matter if some investors are irrational some of the time, and if others are irrational all of the time, because the smart money takes advantage of the irrationality of others, thus keeping markets efficient. However, no amount of theorizing can bury the fact that the superbly informed agents of the TBTF created the greatest financial disaster since the 1930s.

Using the empirical evidence available in real time, this paper has demonstrated an early warning system for the housing price boom. Both the data and the analytical techniques used were available in the early stages of the bubble to document the imbalances that eventually had to be resolved. The same techniques may serve in other cases.

The Johansen cointegration test shows that the relationship between the actual growth rates is stationary, i.e., that the disequilibrium term has time-invariant properties. As early as 2001-02, the gruesome ending was predictable ex ante; the larger the short-run disequilibrium grew, the more certain a drastic drop in house prices became.

In the wake of the Great Depression, economists created a revolution in macroeconomics. The greatest bubble in history put to rest the idea that the price is right in the macro sense, validating Samuelson’s (1998) dictum that financial markets are macro-inefficient. Its most durable legacy is likely to be that generations of economists will develop models in which irrationality plays a larger role in the determination of asset values.

Footnotes

1. The Case-Shiller index, unlike the HPI, is not national in coverage; it omits 13 states and has incomplete coverage for 29 other states (Leventis 2007).

2. The HPI began publication in January 1975 by the Office of Federal Housing Enterprise Oversight (OFHEO), an agency that operated within the Department of Housing and Urban Development. In July 2008, the Housing and Economic Recovery Act combined OFHEO and the Federal Housing Finance Board into the new Federal Housing Finance Agency.

3. Eventually, even the final version suffers the inevitable benchmark or methodological revisions.

4. STMs have been used to model the behavior of exchange rates (Harvey et al. 1992); forecast consumer expenditures (Harvey and Todd 1983); and model inflation and the output gap (Kuttner 1994; Domenech and Gomez 2006; Harvey 2008). Other examples include modeling inflation persistence (Stock and Watson 2007; Dossche and Everaert 2005); productivity growth (Peláez 2004; Crespo 2008); business cycles (Clark 1987); earnings per share (Peláez 2007); permanent income (Huang et al. 2008); the U.S. regional housing market (Fadiga and Wang 2009); the core unemployment rate (Harvey and Chung 2000); and testing for deterministic trends (Nyblom and Harvey 2005).

5. Dlog(HPI) is the first difference of the natural logarithm of HPI. Clearly, this section does not use the smoothed growth rates.

6. If r = n, we have a contradiction of the assumption that the variables are I(1).

7. The Schwarz Criterion and the Hannan-Quinn Criterion reached their minima for a lag length of three quarters. The finding of cointegration is robust to an unrestricted constant and to the exclusion of any intervention variables as well.

8. The P-value is the probability of obtaining the given value of the test statistic under the null.

Copyright information

© Springer Science+Business Media, LLC 2010