
Rationalizing forecast inefficiency

Abstract

We show analysts’ own earnings forecasts predict error in their own forecasts of earnings at other horizons, which we argue provides a measure of the extent to which analysts inefficiently use information. We construct our measure by exploiting two sources of variation in analysts’ incentives: (i) more recent forecasts have greater salience at the time of the earnings release so accuracy incentives are higher (lower) at shorter (longer) forecast horizons and (ii) analysts have greater incentives for optimism (pessimism) at longer (shorter) horizons. Consistent with these incentives affecting the incorporation of information into forecasts, we document (i) current year forecasts underweight (overweight) information in shorter (longer) horizon forecasts and (ii) the mis-weighting is more pronounced when recent news is negative—when analysts have greater (weaker) incentives to incorporate the news into shorter (longer) horizon forecasts. Finally, returns tests suggest that forecasts adjusted for the inefficiency we document better represent market expectations of earnings.

Introduction

Predicting earnings is a central function of accounting research. Early studies on the properties of earnings relied on time-series models to predict earnings (e.g., Ball and Watts 1972). As analyst forecasts became more widespread, studies documented that analysts, owing to both a timing and an information advantage, provide more accurate forecasts than time-series models and that their forecasts generate larger earnings response coefficients, suggesting they better proxy for market expectations (Collins and Hopwood 1980; Fried and Givoly 1982; Brown et al. 1987; Stickel 1990; Schipper 1991). While this evidence fueled the use of analyst forecasts as proxies for market expectations, subsequent studies have documented a range of predictable errors related to publicly available information (e.g., DeBondt and Thaler 1990; Lys and Sohn 1990; Abarbanell 1991; Mendenhall 1991; Abarbanell and Bernard 1992; Easterwood and Nutt 1999; So 2013). Moreover, analyst forecasts’ superiority over time-series models dissipates as the forecast horizon increases, and, in some cases, analysts’ forecasts even become less accurate (Brown et al. 1987; Kross et al. 1990; Lys and Soo 1995; Bradshaw et al. 2012).

We take a novel approach to predicting forecast error: a regression model that combines analyst forecasts of earnings at multiple horizons into a more accurate forecast than the published version. Whereas prior studies demonstrate that analysts’ forecasts are inefficient by predicting forecast errors using publicly available information, we show that analysts are inefficient in using the information they possess. We argue analysts’ incentives to use information efficiently vary with horizon, because forecast inaccuracy exposes analysts to greater reputational risk at shorter horizons, where outstanding forecasts can be more readily compared to actual earnings. If the quality of analysts’ forecasts deteriorates at longer forecast horizons, we hypothesize that users of analyst forecasts face a trade-off when obtaining information about expectations of longer horizon earnings. Shorter horizon forecasts contain higher quality information about changes in fundamentals, but that information has not been calibrated for longer horizons. In contrast, longer horizon forecasts contain horizon-specific information about future earnings but are of lower quality. We therefore predict that more accurate forecasts can be created by combining analysts’ forecasts of earnings at multiple horizons. The model we employ regresses future earnings on earnings forecasts at multiple horizons and allows the weights on those forecasts to vary to minimize squared forecast error.

We find there is substantial information about future earnings in analysts’ forecasts of earnings at other forecast horizons. This suggests that analysts use the information in their own forecasts inefficiently. A simple model regressing the current year’s forecast error on (i) the current quarter’s earnings forecast, (ii) the remainder of the current year’s forecast (i.e., quarters two through four), and (iii) next year’s forecast predicts over 10% of the error in the current year’s forecast. The significantly positive coefficient on the current quarter’s forecast suggests analysts underweight the information in that horizon’s forecast. The significantly negative coefficient on next year’s forecast suggests analysts overweight the information in that horizon’s forecast. These results suggest that forecast accuracy could improve by placing more weight on the information in the shorter horizon forecast (which is relatively higher quality) and by placing less weight on the information in the longer horizon forecast (which is relatively lower quality).

Our next set of tests examines whether analyst optimism increases the overweighting of long-horizon information. A long line of literature on the forecast walkdown (e.g., Bradshaw et al. 2016) suggests that managers and investors prefer optimistic forecasts at longer horizons yet beatable expectations at shorter horizons. This suggests analysts’ longer horizon forecasts are of even lower quality when they are more optimistic. We thus expect our model to re-weight analysts’ earnings forecasts to a greater degree when analysts’ incentives for optimism are greater.

We use two proxies to capture analyst optimism. First, we use past returns to capture the horizon-dependent variation in analysts’ incentives to incorporate news into forecasts. We expect analysts will incorporate negative (positive) news more fully into shorter (longer) horizon forecasts, which will tend to make longer horizon forecasts more informationally inefficient in the presence of negative news. Second, we measure the analyst’s actual optimism as the difference between the analyst’s share price target and the outstanding share price. Share price target forecasts, which are often based on a discounted cash flow model, differ significantly from the market’s expectation of cash flows (e.g., Bradshaw et al. 2013). We expect this optimism (or pessimism) to be reflected largely in longer horizon earnings expectations, resulting in overweighting of the information in those forecasts. We interact both proxies with the shorter horizon forecast (current quarter’s) and the longer horizon forecast (next year’s). In both cases, we find significantly greater re-weighting in the presence of incentives for optimism—the model places even greater weight on shorter horizon forecasts and even less weight on longer horizon forecasts—along with increased explanatory power.

We then benchmark the ability of analysts’ own forecasts to predict error against a firm characteristic-based model. We find that analysts’ own forecasts predict a similar magnitude of forecast error relative to an extensive set of firm characteristics used by prior studies to predict forecast errors (Hou et al. 2012; Larocque 2013; So 2013). Specifically, analysts’ own forecasts predict 6.5% (10.5%) of forecast error in individual (consensus) forecasts, whereas firm characteristics, defined as the combination of variables used by Larocque (2013) and So (2013), predict 6.2% (10.7%) of forecast error. After incorporating the information in both past returns and share price target optimism, analysts’ own forecasts explain 11.3% of the current year’s forecast error and the incremental predictability from firm characteristics is only 2.2%. These findings are consistent with analysts’ own forecasts explaining much of the error predicted by firm characteristics and suggest that analysts do not have sufficiently strong incentives to efficiently incorporate the information they collect into all of their forecasts.

Next, we assess whether forecasts adjusted for the documented inefficiency provide better proxies for market expectations of earnings (Li and Mohanram 2014). We do so by regressing future returns on the earnings surprise, calculated using both unadjusted forecasts and forecasts adjusted for predictable errors (Gu and Wu 2003; Hughes et al. 2008). Measurement error in expectations will bias the earnings response coefficient toward zero, leading the expectation with the least measurement error to have the greatest coefficient (Brown et al. 1987). The earnings response coefficient is significantly larger when the earnings surprise is calculated using the adjusted forecast, suggesting it captures market expectations better than the unadjusted forecast does. The earnings response coefficient based on our adjusted forecast is also larger than the coefficient based on a forecast adjusted using firm characteristics. We thus argue our methodology improves measurement of earnings response coefficients by purging error from earnings expectations (Brown et al. 1987; Kothari and Sloan 1992).

Finally, we examine forecast frequency to provide evidence consistent with our assumption that analysts’ accuracy incentives decline with the forecast’s horizon. We argue the issuance of a forecast provides a measure of analyst effort and that less frequent forecasting is consistent with weaker accuracy incentives. We find that analysts issue forecasts less frequently at longer forecast horizons, consistent with lower accuracy incentives. We also find analysts’ forecasts are asymmetric—analysts issue shorter horizon forecasts more in response to negative news and longer horizon forecasts more in response to positive news—consistent with stronger incentives to incorporate positive (negative) news at longer (shorter) horizons.

We contribute to the literature by providing an improved measure of earnings expectations. Our approach predicts over 10% of forecast error at the current year’s forecast horizon, a commonly used benchmark that research has shown to be highly accurate (Basu and Markov 2004; Bradshaw et al. 2012). The improvements are comparable to those from more conventional correction methods, which regress earnings or forecast errors on firm characteristics known to predict forecast biases (So 2013; Larocque 2013). Moreover, our method is more parsimonious, achieving similar improvements with fewer variables because it requires only forecasts at multiple horizons. Finally, the estimates we produce have a stronger association with future returns, suggesting they serve as a better measure of earnings expectations.

Second, our analysis relates to how analysts map information into forecasts. Given that analysts possess both timing and information advantages over time-series models, it is unclear why analyst forecasts are inferior to time-series models as expectations of earnings at long (or really any) horizons (Schipper 1991). By using the analyst’s own forecasts to predict forecast error, our evidence suggests much of the predictable error in forecasts arises from a failure to incorporate information into all of the analyst’s forecasts. We find analysts’ own forecasts can predict much of the forecast error explained by firm characteristics, which is inconsistent with these predictable errors resulting from a failure to collect information or an inability to incorporate collected information into any earnings forecast. We construct our regression model using two simple trade-offs that affect accuracy incentives (e.g., Chen and Jiang 2006; Bagnoli et al. 2008), so we believe our findings are most consistent with incentive-based explanations. This does not rule out behavioral explanations, such as forecasting difficulty increasing with horizon or analysts better understanding the implications of information for shorter horizon earnings. However, these explanations are potentially related—if long-horizon forecast accuracy is sufficiently unimportant to investors, analysts may not have sufficiently strong incentives to learn how information maps into long-horizon earnings.

Prior literature

We contend that analysts face trade-offs in producing accurate forecasts. Producing accurate earnings forecasts requires effort to properly calibrate information to the horizon of the forecast. If there are benefits to biasing forecasts—for example, overweighting private information or catering to managers (Bernhardt et al. 2006; Berger et al. 2019)—then analysts will trade off their incentives to bias with their incentives for accuracy. We argue accuracy incentives are stronger (weaker) at shorter (longer) horizons. Our main result documents substantial accuracy improvements by using a regression model that re-weights analysts’ own forecasts according to their incentives to produce accurate forecasts. In this section, we place our study in the context of the literature.

Forecast inefficiency

In the 1990s and early 2000s, there was significant academic debate as to whether analysts use information efficiently. Several scholars argued that available information does not predict forecast errors (e.g., Givoly 1985; Keane and Runkle 1998; Gu and Wu 2003; Basu and Markov 2004) and that statistical evidence of an association between firm characteristics and forecast errors arose because of outliers or analysts’ loss functions. However, the evidence from more recent studies is more challenging to reconcile with the view that analysts use information efficiently. For instance, studies have documented larger biases, shown evidence of median bias (e.g., Hou et al. 2012, Table 3), and achieved greater statistical power because of a longer time series.

The intuition behind the studies arguing against “forecast inefficiency” remains persuasive—highly paid professionals engaged in the repetitive task of forecasting earnings are unlikely to make costly and easily identifiable errors (e.g., Kahneman 2011). However, given analysts have weak accuracy incentives (Bagnoli et al. 2008; Brown et al. 2015; Brown et al. 2016), the cost of effort and/or incentives to issue non-Bayesian forecasts could rationally lead to predictable errors. Our study builds on the intuition of the studies arguing for forecast inefficiency by attempting to “rationalize inefficiency.” In particular, we exploit two sources of variation in analysts’ incentives to produce accurate forecasts. First, because more recent forecasts have greater salience at the time of the earnings release, we argue accuracy incentives are higher (lower) at shorter (longer) forecast horizons (e.g., DeBondt and Thaler 1990). Second, analysts have incentives to provide beatable forecasts at shorter horizons and optimistic forecasts at longer horizons (e.g., Ke and Yu 2006; Berger et al. 2019). Thus their incentives for optimism increase with the forecast’s horizon, which decreases accuracy incentives by introducing bias.

Our tests cannot differentiate among a number of more specific explanations. For example, motivated reasoning could explain greater overweighting of private information at longer horizons. As Kunda (1990) explains, motivated reasoning suggests that “the motivation to be accurate enhances the use of those beliefs and strategies that are considered most appropriate, whereas the motivation to arrive at particular conclusions enhances use of those that are considered most likely to yield the desired conclusion.” The wider (narrower) range of possible outcomes at longer (shorter) horizons allows analysts to (disciplines analysts not to) overweight their private information and underweight firm characteristics (Bradshaw et al. 2016). Motivated reasoning thus predicts that analysts will use information less efficiently in forecasts where they are less constrained (i.e., longer horizon forecasts). Alternatively, analysts could be cognizant of the reason for biasing forecasts. For example, analysts could strategically overweight private information, departing from Bayes’ rule because sales incentives make such a strategy optimal. Further, weak accuracy incentives could simply lead analysts to exert less effort calibrating their short-horizon information to longer horizons. We cannot pinpoint the precise strategic explanation for the inefficiency documented herein.

Sample selection and descriptive statistics

Sample selection

We use samples from both the I/B/E/S detail file and the I/B/E/S consensus file. The detail file analysis allows us to hold the analyst’s information set constant by comparing forecasts across horizons issued by the same analyst on the same date. A downside of the detail file analysis is that we require the analyst to forecast each horizon, which imposes somewhat restrictive sampling procedures and introduces generalizability concerns, because analysts sometimes do not respond to information at each forecast horizon (as we illustrate in section 4.5). The consensus file analysis, which uses the average forecast across analysts, mitigates this concern while allowing us to benchmark our results against the literature on forecast error predictability, which predominantly uses the consensus file (Hou et al. 2012; So 2013; Larocque 2013). We therefore view the two sets of analyses as complements.

In both the detail and consensus file samples, we use all firm-years over 1985 to 2018 from the unadjusted file. In the detail file analysis, we retain analyst-firm-years with analyst forecasts for the first, second, third, and fourth quarter earnings of the current year as well as next year earnings, all issued on the same date between the prior year earnings announcement and the first quarter earnings announcement. In the consensus file analysis, we retain firm-years with consensus forecasts for the same five horizons, all with a statistical period end-date between the prior year earnings announcement and the first quarter earnings announcement. If the forecasts are available on the same day more than once for the same analyst-firm-year (firm-year), we retain only the final set of forecasts for the detail file (consensus file).

We scale all earnings forecasts and forecast errors by price computed on the trading day before we record the forecast. We require data from CRSP to split-adjust forecasts and actuals to the forecast announcement date (Payne and Thomas 2003). We use data from Compustat to calculate control variables, with all control variables coming from the prior fiscal year. We remove observations with a price below $5 to avoid scaling by small denominators. After imposing these restrictions, the detail (consensus) file sample includes 166,030 (54,073) analyst-firm-years (firm-years). We detail our sample selection in Appendix B. All continuous variables are winsorized at the top and bottom 1%.

Below, we introduce a simple timeline to clarify when we calculate the variables (Fig. 1). The periods in the timeline correspond to the time subscripts used in the variable names. The forecasts used in the analysis are all made between the prior year’s earnings announcement, EA(YRt-1), and the earnings announcement for the first quarter of the current year, EA(Q1).
The control variables all correspond to values from YRt-1 unless stated otherwise.
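The screening steps above can be sketched as a simple filter. This is an illustrative sketch, not the authors' code: the record layout, field names, and horizon labels are assumptions made for the example.

```python
# Horizons required for one observation (labels are illustrative, not I/B/E/S codes).
REQUIRED_HORIZONS = {"Q1", "Q2", "Q3", "Q4", "YR_next"}

def build_observation(forecasts, price, min_price=5.0):
    """Apply the sample screens to one analyst-firm-year.

    `forecasts` is a list of dicts with hypothetical keys
    {"horizon", "date", "eps"}. Returns price-scaled forecasts from the
    final date on which all five horizons were issued together, or None
    if the observation fails a screen.
    """
    if price < min_price:  # drop shares priced below $5 (small denominators)
        return None
    by_date = {}
    for f in forecasts:
        by_date.setdefault(f["date"], {})[f["horizon"]] = f["eps"]
    # keep only dates on which the analyst forecast every required horizon
    complete = {d: fs for d, fs in by_date.items() if REQUIRED_HORIZONS <= set(fs)}
    if not complete:
        return None
    latest = max(complete)  # retain only the final same-day set of forecasts
    return {h: eps / price for h, eps in complete[latest].items()}
```

A real implementation would also handle split adjustment and winsorization; the sketch only captures the same-day completeness, price, and last-set screens.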

Fig. 1 Forecast Timeline

Descriptive statistics

Table 1, Panel A, reports descriptive statistics for the detail file sample. The first quarter forecasts are slightly pessimistically biased, as the mean forecast error (actual minus forecast) scaled by price equals 0.0005. The forecasts become more optimistic as the horizon increases, moving from a forecast error of −0.0003 for the second quarter forecasts to −0.0014 for the third quarter forecasts, −0.0028 for the fourth quarter forecasts, and −0.0134 for the next year forecasts (−0.0034 per quarter). These statistics confirm prior findings that analysts’ estimates are optimistic and that their errors increase with the forecast horizon (O’Brien 1988; Richardson et al. 2004).

Table 1 Descriptive Statistics and Correlations

Table 1, Panel B, reports descriptive statistics for the consensus file sample, which are generally consistent with the corresponding figures in the detail sample. The first quarter forecasts are slightly pessimistically biased on average, as evidenced by the mean forecast error scaled by price of 0.0001. As in the detail file sample, the consensus forecasts become optimistically biased at the second quarter horizon, and the optimistic bias increases monotonically with horizon through the next year’s forecast. Panels C and D of Table 1 report correlations for the detail and consensus files, respectively.

Research design and empirical results

Testing for cross-horizon forecast error predictability

Studies have examined whether analyst forecasts efficiently use information by regressing forecast error on the forecast for the same horizon (DeBondt and Thaler 1990; Keane and Runkle 1998)—for example, by regressing the current year’s forecast error on the current year’s forecast. Coefficients above (below) zero provide evidence that analysts underweight (overweight) the information incorporated into the forecast. We adjust this methodology to test whether analysts efficiently use the information they incorporate into forecasts at any horizon. Specifically, we regress forecast error on earnings forecasts from a variety of horizons, allowing us to assess whether analysts assign Bayesian or non-Bayesian weights to information in their own forecasts at other horizons. Evidence that a different horizon’s earnings forecast has a positive (negative) coefficient would suggest that analysts underweight (overweight) the information in that horizon’s forecast. We interpret the R-squared as the extent to which analysts could reduce forecast bias by more efficiently weighting the information they have collected and incorporated into their own forecasts. Because accuracy incentives decrease with horizon, we expect forecast bias can be reduced by placing greater weight on the shorter horizon forecasts and less weight on the longer horizon forecasts. We estimate the following model.

$$ FE={\beta}_0+{\beta}_1 FORE(Shorter)+{\beta}_2 FORE(Contemporaneous)+{\beta}_3 FORE(Longer)+\varepsilon $$
(1)

FE is the analyst’s forecast error, calculated as the firm’s actual earnings per share less the analyst’s forecasted earnings per share, with the difference scaled by price. We study forecast errors at five horizons: the four quarterly forecasts of the current year (FE(Q1), FE(Q2), FE(Q3), FE(Q4)), and next year’s annual forecast (FE(YRt+1)). Consistent with studies on forecast efficiency, we control for the contemporaneous forecast, but we are primarily interested in the forecasts at shorter and longer horizons. FORE(Contemporaneous) is the analyst’s forecast of earnings per share for the same horizon as the forecast error in the dependent variable. FORE(Shorter) is the sum of the shorter horizon earnings per share forecasts. FORE(Longer) is the sum of the longer horizon earnings per share forecasts. For instance, when the dependent variable is FE(Q3), FORE(Contemporaneous) is the third quarter forecast, FORE(Shorter) is the sum of the first and second quarter forecasts, and FORE(Longer) is the sum of the fourth quarter and next year’s forecasts. All of the forecast variables are scaled by price. Again, the forecasts used to calculate each of the above variables are all made on the same day to hold the analyst’s information set constant (Kang et al. 1994).
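The grouping of forecasts into shorter, contemporaneous, and longer horizons can be sketched with a small helper. The horizon labels are illustrative, and the forecasts are assumed to be already scaled by price.

```python
# Ordered horizons, shortest first (labels are illustrative).
HORIZONS = ["Q1", "Q2", "Q3", "Q4", "YR_next"]

def split_horizons(forecasts, target):
    """Group one analyst-day's price-scaled forecasts for Eq. (1):
    returns (FORE(Shorter), FORE(Contemporaneous), FORE(Longer)),
    summing the forecasts on either side of the target horizon."""
    i = HORIZONS.index(target)
    shorter = sum(forecasts[h] for h in HORIZONS[:i])
    longer = sum(forecasts[h] for h in HORIZONS[i + 1:])
    return shorter, forecasts[target], longer
```

For FE(Q3), the helper returns the sum of the first and second quarter forecasts, the third quarter forecast itself, and the sum of the fourth quarter and next year forecasts, matching the example above.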

The results from estimating Eq. (1) are reported in Table 2. In column (1), we regress the first quarter forecast error on a contemporaneous forecast (the first quarter forecast) and a longer horizon forecast (the sum of the second, third, and fourth quarter forecasts as well as next year’s forecast). We observe no forecast error predictability as both of the independent variables have insignificant coefficients and the adjusted R-squared is very low at 0.0001. In column (2), we regress the second quarter forecast error on a shorter horizon forecast (the first quarter forecast), a contemporaneous forecast (the second quarter forecast), and a longer horizon forecast (the sum of the third and fourth quarter forecasts as well as next year’s forecast). The coefficient on the shorter horizon forecast is significantly positive, whereas the coefficient on the longer horizon forecast is significantly negative. Thus the second quarter forecast bias could decrease by placing greater weight on the information in the shorter horizon forecast and less weight on the information in the longer horizon forecasts. We observe similar results in columns (3) to (5) where the dependent variable is the third quarter, fourth quarter, and next year’s forecast error, respectively. In each column, the shorter horizon forecast loads positively, and the longer horizon forecast loads negatively. Further, the predictability of forecast error increases with horizon, as evidenced by the adjusted R-squared increasing monotonically with horizon, from 0.037 at the second quarter horizon to 0.104 at the next year horizon.

Table 2 Testing for Cross-Horizon Forecast Error Predictability

There are several important takeaways from these analyses. First, analysts’ own forecasts from other horizons do not predict forecast error at the current quarter’s forecast horizon. This suggests the shorter horizon of the forecast disciplines the analyst to incorporate information efficiently. However, we begin to observe forecast inefficiency at the second quarter horizon, and it increases monotonically at longer horizons. We observe shorter horizon forecasts are positively associated with forecast error, while longer horizon forecasts are negatively associated with forecast error.

Measuring the ability of other horizon forecasts to explain current year forecast error

Next, we examine the ability of other horizon forecasts to explain current year forecast error and contrast the predictability with other sources of predictable error identified in the literature. We focus on current year forecast error predictability, because (i) it is a horizon commonly used to forecast earnings by both academics and practitioners (e.g., Martin et al. 2018; Call et al. 2021) and (ii) the strong forecast error predictability for the third and fourth quarters in Table 2 suggests we can plausibly explain a significant proportion of forecast error at the annual horizon. We regress the current year forecast error on the first quarter forecast, the remainder of the current year forecast, and next year’s forecast via the following model.

$$ FE\left({YR}_t\right)={\beta}_0+{\beta}_1 FORE\left({Q}_1\right)+{\beta}_2 FORE\left({Q}_{2- 4}\right)+{\beta}_3 FORE\left({YR}_{t+ 1}\right)+\varepsilon . $$
(2)

FE(YRt) is the analyst’s current year forecast error. FORE(Q1) is the first quarter forecast, FORE(Q2–4) is the remainder of the current year forecast (i.e., quarters two through four), and FORE(YRt+1) is next year’s forecast.
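Given fitted coefficients from Eq. (2), an error-adjusted forecast follows by adding the predicted error back to the published forecast. A minimal sketch, in which the coefficient values are placeholders rather than the paper's estimates:

```python
def adjusted_forecast(fore_yr, fore_q1, fore_q24, fore_yr_next, betas):
    """Adjust the current-year forecast by adding back the error predicted
    by Eq. (2). All forecast inputs are price-scaled; `betas` is the tuple
    (b0, b1, b2, b3) of fitted coefficients (placeholders here, not the
    paper's estimates)."""
    b0, b1, b2, b3 = betas
    predicted_fe = b0 + b1 * fore_q1 + b2 * fore_q24 + b3 * fore_yr_next
    return fore_yr + predicted_fe
```

A positive b1 and negative b3, as documented below, shift weight toward the short-horizon forecast and away from the long-horizon forecast.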

The results from estimating Eq. (2) using the detail file sample are reported in Panel A of Table 3. To present a baseline specification, column (1) reports the results from regressing current year forecast error on the current year forecast, similar to prior forecast efficiency studies (DeBondt and Thaler 1990; Keane and Runkle 1998). The coefficient on the current year forecast is negative and significant, but the model explains minimal variation in forecast error as evidenced by the adjusted R-squared of 0.004. In other words, the traditional approach to detect forecast inefficiency, by regressing forecast error on earnings forecasts for the same horizon, explains little of the error in the current year’s forecast (DeBondt and Thaler 1990; Keane and Runkle 1998).

Table 3 Cross-Horizon Forecast Error Predictability

Column (2) reports the results from estimating Eq. (2) without control variables. The coefficient on the first quarter (next year) forecast is significantly positive (significantly negative). This suggests, in the current year forecast, the analyst underweights (overweights) the information included in the shorter (longer) horizon forecasts. The model substantially increases forecast error predictability as evidenced by the adjusted R-squared of 0.065.

In column (3), we regress the current year forecast error on a series of firm characteristics that studies have shown to predict forecast error, to compare the power of the forecast inefficiency we document to models used in the literature. The control variables follow both So (2013) and Larocque (2013). Following So (2013), we include controls for prior earnings, accruals, asset growth, dividends, book to market, and price. ACT[YRt-1]+ is the prior year earnings per share scaled by price when positive and zero otherwise. ACT[YRt-1]− is an indicator variable set equal to one when the prior year earnings per share scaled by price is negative and zero otherwise. ACCRUALS is the change in current assets minus the change in cash and cash equivalents minus the change in current liabilities plus the change in current debt, scaled by beginning market value of equity. ACCRUALS+ (ACCRUALS−) is equal to ACCRUALS when ACCRUALS is positive (negative) and zero otherwise. AG is asset growth. DIV is dividends scaled by beginning market value of equity, and NODIV is an indicator set equal to one if no dividends were paid and zero otherwise. BTM is the book value of common equity scaled by the market value of equity. PRC is the share price the day before the forecasts are issued. Following Larocque (2013), we also control for past forecast error and past returns. FE(YRt-1) is the prior year forecast error, and RETPRE is the 12-month market-adjusted return ending the month before the forecast date. Specific variable definitions are reported in Appendix A, and descriptive statistics are reported in Table 1. The adjusted R-squared of 0.062 in column (3) resembles the adjusted R-squared of 0.065 in column (2). Thus the forecast inefficiency we document predicts a similar level of the current year’s forecast error, relative to an extensive set of firm characteristics used in prior studies.

In column (4), we estimate a model combining the firm characteristics and other horizon forecasts to examine whether the two models predict similar errors. The incremental R-squared from adding the firm characteristics to the forecast inefficiency model is 0.041. To measure the ability of the forecast inefficiency we document to explain the predictable errors from firm characteristics, we compute the ratio of the incremental R-squared (from incorporating firm characteristics into the forecast inefficiency model) to the R-squared of the model with just the firm characteristics included. We find roughly 34% of the information explained by firm characteristics is also explained by analysts’ forecasts at other horizons (1 − 0.041/0.062). Finally, in column (5), we include firm and quarter fixed effects to ensure our results are not driven by time or firm-invariant characteristics. In both columns (4) and (5), the coefficient on the first quarter forecast remains significantly positive, and the coefficient on next year’s forecast remains significantly negative.
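The overlap computation reduces to one line: one minus the ratio of the incremental R-squared to the standalone firm-characteristic R-squared. A quick check with the detail-file figures from Table 3, Panel A:

```python
def overlap_share(incremental_r2, standalone_r2):
    """Share of the characteristic-predicted error already captured by the
    other-horizon forecasts: 1 minus incremental R^2 over standalone R^2."""
    return 1.0 - incremental_r2 / standalone_r2

# Detail-file figures: incremental 0.041 against standalone 0.062
print(round(overlap_share(0.041, 0.062), 2))  # prints 0.34
```

The same calculation with the consensus-file figures (incremental 0.054 against standalone 0.107) gives roughly 0.50.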

Importantly, we also demonstrate out-of-sample forecast bias reductions to more fully ensure that all of the information used to adjust the forecasts would be available at the time the forecasts are issued. To do so, we estimate rolling, out-of-sample regressions for each of the specifications in Panel A of Table 3 (except column 5, because this specification includes fixed effects). For each year t in the sample, we estimate the regression using the prior three years of data (years t-3 to t-1). We then use the coefficients from these regressions to determine the predicted forecast error in year t. Finally, we report the out-of-sample adjusted R-squared values to gauge the level of forecast bias reduction on an out-of-sample basis. Similar to the in-sample values, the out-of-sample adjusted R-squared is larger in column (2) than in column (3) (0.055 versus 0.046, respectively), and the difference is even larger on a percentage basis.
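The rolling procedure can be sketched as follows. For brevity, a single regressor stands in for the three forecast terms of Eq. (2), and a one-variable OLS closed form replaces the full multivariate fit; this illustrates the estimation scheme, not the authors' implementation.

```python
def ols_fit(xs, ys):
    """One-regressor OLS via the usual closed form; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

def rolling_oos_predictions(panel, window=3):
    """For each year t, fit on years t-window .. t-1 and predict year t,
    so every prediction uses only information available beforehand.

    `panel` maps year -> list of (x, y) pairs; the single regressor x
    stands in for the multi-horizon forecasts of Eq. (2).
    Returns {year: [predicted y, ...]}.
    """
    preds = {}
    for t in sorted(panel):
        train = [p for yr in range(t - window, t) if yr in panel for p in panel[yr]]
        if len(train) < 2:
            continue  # not enough prior history to estimate the model
        a, b = ols_fit([x for x, _ in train], [y for _, y in train])
        preds[t] = [a + b * x for x, _ in panel[t]]
    return preds
```

Because each year's predictions rely only on coefficients estimated from prior years, the resulting explanatory power is a genuinely out-of-sample measure.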

In Panel B of Table 3, we conduct a similar set of tests using consensus analyst forecasts rather than individual analyst forecasts. These analyses have several purposes. First, they allow us to assess whether identifying forecast inefficiency yields significant improvements in the consensus forecast, which is more commonly used in the literature to predict earnings because of its greater accuracy (Bradshaw et al. 2012). Second, they allow us to address concerns that the results may not generalize beyond the restrictive sample used in the detail file analyses (which requires the same analyst to forecast earnings at five horizons for the same firm on the same day).

In column (2), using the consensus file sample, we document higher forecast error predictability than in the detail file sample, inconsistent with idiosyncratic errors leading to forecast inefficiency. Specifically, the coefficient on the first quarter forecast increases from 0.669 in the detail file to 0.887 in the consensus file, the coefficient on next year’s forecast decreases from −0.172 to −0.338, and the adjusted R-squared increases from 0.065 to 0.105. Contrasting the forecast inefficiency model in column (2) with the firm characteristic model in column (3), both models continue to predict similar levels of forecast error (adjusted R-squared of 0.105 versus 0.107). In column (4), we see that adding the firm characteristics increases the R-squared by only 0.054, relative to the model including only forecasts at other horizons, so the information in other forecasts explains nearly 50% of the error predicted by firm characteristics (1 − 0.054/0.107). In column (5), we again include year and quarter fixed effects. In both columns (4) and (5), the coefficient on the first quarter forecast remains significantly positive, and the coefficient on next year’s forecast remains significantly negative. Overall, information from forecasts at other horizons explains much of the error predicted by firm characteristics, which is consistent with analysts processing the information that predicts errors in at least one horizon’s forecast.

Analyst optimism and forecast inefficiency

In this section, we examine how optimism affects the forecast inefficiency documented in the prior section. Analysts have incentives to issue optimistically biased longer horizon forecasts (Kang et al. 1994; Ke and Yu 2006; Jackson 2005), because doing so stimulates trading volume and enhances their access to firm managers by catering to those managers’ reporting preferences. In contrast, we expect analysts to have strong reputational incentives to provide accurate shorter horizon forecasts, given the proximity of the earnings realization. We thus expect greater inefficiency, and hence greater forecast re-weighting, when forecasts are more optimistic (i.e., incrementally more weight on the first quarter forecast and incrementally less weight on next year’s forecast). We test this via the following model.

$$ FE\left({YR}_t\right)={\beta}_0+{\beta}_1 FORE\left({Q}_1\right)+{\beta}_2 FORE\left({Q}_{2- 4}\right)+{\beta}_3 FORE\left({YR}_{t+ 1}\right)+{\beta}_4 OPTIMISM+{\beta}_5 FORE\left({Q}_1\right)\ast OPTIMISM+{\beta}_6 FORE\left({YR}_{t+ 1}\right)\ast OPTIMISM+\varepsilon . $$
(3)

We use two proxies to capture distinct aspects of optimism (OPTIMISM). First, we use past firm news, because analysts are incentivized to incorporate (not incorporate) negative firm news into their shorter (longer) horizon forecasts.Footnote 11 We measure past news by multiplying prior returns compounded over the 12 months ending the month before the forecast by negative one, percentile ranking the variable, and rescaling the variable between zero and one (NEWS). The lowest (highest) values of NEWS represent firm-years with the most positive (negative) past returns. Second, we use analysts’ share price target optimism as a measure of the observed optimism, because (i) analysts are incentivized to issue optimistic price targets to generate trading commissions and (ii) analysts’ price targets convey little information about future price changes (e.g., Bradshaw et al. 2013). We measure share price target optimism as the share price target on the I/B/E/S consensus file, divided by the outstanding share price, minus one. We then percentile rank the variable and rescale it between zero and one (SPT). The lowest (highest) values of SPT represent firm-years with the least (greatest) share price target optimism. NEWS captures analysts’ incentives for optimism, whereas SPT captures analysts’ realized optimism.
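
The rank-and-rescale transformation used for both NEWS and SPT can be sketched as follows (the helper name is ours, and the returns are hypothetical):

```python
import pandas as pd

def ranked_unit_interval(x: pd.Series) -> pd.Series:
    """Percentile rank a variable, then rescale it to lie between 0 and 1."""
    r = x.rank(method="average")
    return (r - r.min()) / (r.max() - r.min())

# Hypothetical 12-month returns for five firms.
ret = pd.Series([0.30, -0.10, 0.05, -0.25, 0.12])

# NEWS: multiply prior returns by -1 before ranking, so the most negative
# past returns receive the highest values.
news = ranked_unit_interval(-ret)
print(news.tolist())  # [0.0, 0.75, 0.5, 1.0, 0.25]
```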

We interact each optimism variable with the shortest horizon forecast (the current year’s first quarter forecast) and the longest horizon forecast (next year’s annual forecast).Footnote 12 The coefficients on the main effects of the forecasts can be interpreted as the re-weighting effect when analyst optimism is weakest, and the coefficients on the interaction terms can be interpreted as the incremental re-weighting effect when analyst optimism is strongest. The main effects on the optimism proxies capture their relations with forecast bias, but our interest is in the effect of optimism on forecast inefficiency (thus the interactions).
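
Because each forecast enters eq. (3) both alone and interacted with OPTIMISM (which is scaled between zero and one), the implied weight on a forecast is the main effect plus the interaction coefficient times the optimism level. A sketch with hypothetical coefficient values:

```python
def implied_weight(beta_main, beta_interaction, optimism):
    """Weight the model places on a forecast at a given optimism level in [0, 1]."""
    return beta_main + beta_interaction * optimism

# Hypothetical coefficients: a small main effect on FORE(Q1) plus a
# positive interaction with OPTIMISM.
b1, b5 = 0.2, 0.6
low = implied_weight(b1, b5, 0.0)   # weight when optimism is lowest
high = implied_weight(b1, b5, 1.0)  # weight when optimism is highest
print(low, high)
```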

In column (1) of Table 4, we report the results from estimating eq. (3), using the optimism proxy NEWS. Consistent with prior research, we find a significantly negative coefficient on the main effect of NEWS (i.e., analysts underreact to past news). We also find a significantly positive coefficient on FORE(Q1)*NEWS and a significantly negative coefficient on FORE(YRt+1)*NEWS. The model incorporating both accuracy incentives related to horizon and the variation in those incentives associated with recent public information predicts 13.6% of forecast error. In column (2) of Table 4, we report the results from estimating eq. (3), using the optimism proxy SPT. The main effect on SPT loads with a significantly negative coefficient, suggesting analysts are consistent in their optimism across both types of forecasts. More importantly, its interaction with FORE(Q1) is significantly positive, and its interaction with FORE(YRt+1) is significantly negative. As predicted, these results suggest that incrementally more (less) weight should be placed on the shorter (longer) horizon forecast when analyst optimism is greater.

Table 4 Incentives for Optimism & Cross-Horizon Forecast Error Predictability

In column (3), we combine both sets of interactions and find the interactions with both NEWS and SPT remain significant. Moreover, the main effect on the first quarter forecast declines from 0.887 (in column (2) of Table 3, Panel B) to 0.241, and the main effect on next year’s forecast becomes insignificantly positive. Thus, when optimism is lowest, we find little evidence of forecast inefficiency.

In columns (4) to (6), we include the firm characteristics from Table 3 to document two main findings. First, after including this series of control variables, the interactions from columns (1) to (3) remain significant. Second, including the interactions, which identify when analysts have incentives to use information less efficiently, substantially attenuates the forecast error predictability from firm characteristics. We measure the ability of our model to explain the predictable forecast errors from firm characteristics as the ratio of the incremental R-squared (the increase in R-squared attributable to incorporating firm characteristics into a model that already includes the forecast variables and interactions) to the R-squared from a model that includes only the firm characteristics. The model that includes the NEWS interactions explains 68% of the forecast error predictability from firm characteristics.Footnote 13 When estimating our most extensive model including both NEWS and SPT, the incremental R-squared from including firm characteristics is only 0.022 (column (6) versus column (3)), suggesting our most extensive forecast inefficiency model explains 80% of the predictable errors from firm characteristics.

Overall, we conclude that accuracy incentives drive forecast inefficiency and that this inefficiency explains a substantial portion of the predictable errors associated with firm characteristics. Thus we argue that incentives likely explain the majority of the cross-section of predictable forecast errors.Footnote 14

Do market expectations adjust for the predictability of forecast errors?

In the next set of analyses, we test whether market expectations adjust for predictable errors. Specifically, we test whether unexpected earnings computed using analyst forecasts adjusted for forecast inefficiency have a stronger association with future returns than published analyst forecasts (Gu and Wu 2003; Hughes et al. 2008). The strength of the association between earnings surprises and future returns should allow us to identify the model the market uses to compute expected earnings. This is because measurement error in the calculation of the earnings surprise will bias the regression coefficient toward zero and generate a smaller R-squared value. If market expectations adjust for predictable errors, removing the predictable errors from the earnings surprise would increase the coefficient estimate on the earnings surprise as well as the R-squared value. We examine whether an adjusted forecast better represents the market’s expectations of earnings, relative to the unadjusted forecast, by estimating the following set of equations.

$$ RET={\beta}_0+{\beta}_1 FE{\left({YR}_t\right)}_{UNADJ}+\varepsilon . $$
(4a)
$$ RET={\theta}_0+{\theta}_1 FE{\left({YR}_t\right)}_{ADJ}+\varepsilon . $$
(4b)

The dependent variable is either market adjusted returns over the 12 months beginning the first month after the first-quarter earnings announcement (RETPOST) or market-adjusted returns over the three days surrounding the current fiscal-year earnings announcement (RETEA).

FE(YRt)UNADJ is the current-year forecast error calculated using analysts’ published forecasts, and FE(YRt)ADJ is the current-year forecast error adjusted for predictable errors. We compare the unadjusted forecast error to the forecast error adjusted for three sources of predictable errors. First, we adjust for predictable errors from analysts’ own forecasts. FE(YRt)ADJ-FORECASTS is the adjusted forecast error, calculated as FE(YRt)UNADJ less the predicted forecast error from estimating column (2) in Panel B of Table 3. Second, we adjust for predictable errors from firm characteristics. FE(YRt)ADJ-CONTROLS is the adjusted forecast error, calculated as FE(YRt)UNADJ less the predicted forecast error from estimating column (3) in Panel B of Table 3. The control variables are measured as of year t-1 to avoid look-ahead bias. Third, we adjust for predictable errors from both sources. FE(YRt)ADJ-BOTH is the adjusted forecast error, calculated as FE(YRt)UNADJ less the predicted forecast error from estimating column (4) in Panel B of Table 3. If the adjusted forecast better represents the market’s expectations of earnings, we expect it to have a stronger association with future returns; thus the coefficient θ1 would be significantly greater than the coefficient β1 (and the R-squared would significantly increase). We calculate the predicted component of forecast error on an out-of-sample basis, using the same methodology with which we calculated the out-of-sample R-squared values in Table 3. Specifically, we estimate the first-stage model on years t-3 to t-1 and then apply the coefficient estimates to the data in year t to calculate the predicted forecast error.
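
The measurement-error logic behind these tests can be illustrated with a small simulation: adding a predictable component (which the market has already removed) to the true surprise attenuates both the slope and the R-squared. All data and names here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

true_surprise = rng.normal(size=n)
ret = 2.0 * true_surprise + rng.normal(size=n)  # returns load on the true surprise
predictable_error = rng.normal(size=n)          # component the market already removes
published_surprise = true_surprise + predictable_error

def slope_and_r2(x, y):
    # Simple OLS slope and R-squared.
    xd, yd = x - x.mean(), y - y.mean()
    b = (xd * yd).sum() / (xd ** 2).sum()
    return b, b * b * xd.var() / yd.var()

b_clean, r2_clean = slope_and_r2(true_surprise, ret)
b_noisy, r2_noisy = slope_and_r2(published_surprise, ret)
# The surprise containing the predictable error has a smaller slope and R-squared.
print(round(b_clean, 2), round(b_noisy, 2))
```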

Table 5 reports the results from estimating eqs. (4a) and (4b). Panel A reports descriptive statistics. Panel B reports the results for the long-window returns. In column (2), the coefficient on the earnings surprise adjusted for forecasts at other horizons is significantly greater than the coefficient on the earnings surprise computed using published forecasts in column (1). The difference is statistically significant at the 1% level. Further, the adjusted R-squared increases by roughly 21% from 0.061 to 0.074. A Vuong (1989) test, which compares R-squared values, indicates this difference is statistically significant at the 1% level.
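
A Vuong-style comparison of two non-nested OLS models can be sketched from their residuals, assuming normal errors (this simplified version omits the degrees-of-freedom correction, and the residuals are synthetic):

```python
import numpy as np

def vuong_z(resid1, resid2):
    """z-statistic comparing two non-nested models via their residuals;
    positive values favor model 1."""
    n = len(resid1)
    s1, s2 = resid1.var(), resid2.var()
    # Per-observation log-likelihood difference under normal errors.
    ll1 = -0.5 * np.log(2 * np.pi * s1) - resid1 ** 2 / (2 * s1)
    ll2 = -0.5 * np.log(2 * np.pi * s2) - resid2 ** 2 / (2 * s2)
    m = ll1 - ll2
    return np.sqrt(n) * m.mean() / m.std()

# Synthetic residuals where model 1 fits strictly better.
rng = np.random.default_rng(2)
e1 = rng.normal(scale=0.8, size=5_000)
e2 = e1 + rng.normal(scale=0.8, size=5_000)
print(vuong_z(e1, e2) > 0)  # True: model 1 is preferred
```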

Table 5 Do Adjusted Forecasts Better Approximate the Market’s Expectation of Earnings?

In column (3), we benchmark this improvement against that obtained by adjusting the earnings surprise using firm characteristics and lagged news variables (i.e., the regression model used to estimate column (3) in Panel B of Table 3). Consistent with prior research (e.g., Larocque 2013), we find these adjustments substantially improve explanatory power, relative to the unadjusted forecasts. However, the improvement is lower than that obtained by using the adjustment for other horizon forecasts in column (2).

In column (4), we adjust the earnings surprise using the combination of both firm characteristics and other horizon forecasts. As expected, the coefficient on the adjusted earnings surprise is significantly larger than the coefficient on the unadjusted earnings surprise. More importantly, the coefficient on this adjusted earnings surprise is significantly larger than the coefficient on the surprise adjusted for firm characteristics only (column (3)), but it is not significantly different from the coefficient on the surprise adjusted for other horizon forecasts only (column (2)). Thus adding the other horizon forecasts to the firm characteristic model improves its explanatory power, but adding the firm characteristics to the other horizon forecast model does not. Untabulated Vuong (1989) tests yield inferences similar to those from the F-tests.

In Panel C of Table 5, we obtain similar results estimating earnings response coefficients using returns over the three-day window centered on the current fiscal year’s earnings announcement. Earnings surprises adjusted using other horizon forecasts have larger earnings response coefficients than both unadjusted forecasts and forecasts adjusted using a series of firm characteristics.Footnote 15

Do analysts issue longer-horizon forecasts less frequently in response to bad news?

Two assumptions underlying our analyses are that (i) analysts’ accuracy incentives decrease with horizon and (ii) analysts’ incentives for optimism increase with horizon. Although prior findings are consistent with these assumptions, in this section, we provide evidence on both by examining the frequency of forecast issuances. While forecast accuracy reflects a complex process, the issuance of a forecast has an intuitive link to effort. Evidence that analysts exert less effort to forecast longer horizon earnings, and that this tendency strengthens with greater incentives for optimism, would support our assumptions. This analysis also contributes to prior research that finds analysts selectively respond to good news (McNichols and O’Brien 1997; Scherbina 2008) by showing the decision to forecast varies with horizon. That is, longer (shorter) horizon forecasts are more responsive to good (bad) news (Berger et al. 2019).Footnote 16

We first present descriptive statistics in Table 6, Panel A. Like the previous analysis, the sample is limited to forecasts issued between the prior fiscal year’s earnings announcement and the first fiscal quarter’s earnings announcement. We also require market-adjusted returns from CRSP. We see that analysts issue more forecasts for the current year horizon than the next year horizon (before the log transformation, 7.8 versus 6.6 forecasts, respectively). For the quarterly forecasts, analysts issue more forecasts for the first (i.e., current) quarter than each of the next three quarters (before the log transformation, 6.4 forecasts for the first quarter versus 5.4, 5.3, and 5.5 forecasts for the second, third, and fourth quarters, respectively). These findings suggest that analyst effort declines with horizon, which is consistent with our assumption that accuracy incentives decline with horizon.

Table 6 Analyst Forecast Frequency

Next, we regress the log of the number of forecasts during a firm-quarter on market-adjusted returns (calculated during the quarter and at the prior year earnings announcement) while controlling for other horizon forecasts and firm fixed effects. We measure the number of forecasts for both longer and shorter horizons. Our use of market-adjusted returns as well as firm fixed effects removes time-invariant and economy-wide shocks that affect both returns and the information analysts respond to. We use returns to measure news because they provide a comprehensive measure of firm news. We control for revisions to other forecasts because doing so controls for the average response to news and lets us examine into which horizon analysts most intensively map information. We estimate the following model.

$$ Log\left(\# Forecasts\right)={\beta}_0+{\beta}_1\ Returns+{\beta}_k\ Log\left(\# Other\ Horizon\ Forecasts\right)+\sum Firm\ Fixed\ Effects+\varepsilon . $$
(5)

If analysts respond more to negative (positive) news in their shorter (longer) horizon forecasts, we expect a smaller coefficient (β1) when the dependent variable is the number of shorter horizon forecasts and a larger coefficient when the dependent variable is the number of longer horizon forecasts. We present the results from estimating eq. (5) in Table 6, Panels B and C. Our main result is that analysts revise shorter horizon forecasts relatively more in response to negative news and longer horizon forecasts relatively more in response to positive news.
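
Eq. (5) can be sketched by applying the within transformation (demeaning every variable by firm, which absorbs the firm fixed effects) and then running OLS; the data and column names here are synthetic:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Synthetic firm-quarter panel.
n_firms, n_qtrs = 200, 8
firm = np.repeat(np.arange(n_firms), n_qtrs)
returns = rng.normal(size=firm.size)
log_other = rng.normal(size=firm.size)
firm_effect = rng.normal(size=n_firms)[firm]
# In this sketch, shorter horizon forecast counts respond negatively to returns.
log_n = firm_effect - 0.3 * returns + 0.5 * log_other + rng.normal(scale=0.5, size=firm.size)

df = pd.DataFrame({"firm": firm, "log_n": log_n,
                   "returns": returns, "log_other": log_other})

# Within transformation: demeaning by firm absorbs the firm fixed effects.
within = df.groupby("firm").transform(lambda s: s - s.mean())

X = np.column_stack([within["returns"], within["log_other"]])
beta, *_ = np.linalg.lstsq(X, within["log_n"].to_numpy(), rcond=None)
print(beta.round(2))  # first element near -0.3
```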

In Panel B, the returns are measured over the corresponding quarter. In column (1), the dependent variable is the log of the number of current fiscal year forecasts, and we control for the number of forecasts for the next fiscal year. We find that returns in this specification load with a significantly negative coefficient, suggesting that forecasts of current year earnings are more responsive to negative news. In column (2), the dependent variable is the log of the number of next fiscal year forecasts, and we find a significantly positive coefficient, suggesting that forecasts of next year earnings respond more to positive news.

In columns (3) through (6), we estimate a similar set of equations, except the dependent and control variables are the numbers of forecasts for the four quarterly horizons of the current year’s earnings. Specifically, in column (3), the dependent variable is the number of current quarter forecasts, and the control variables include the number of forecasts issued for the subsequent three quarters. We find a significantly negative coefficient on returns, consistent with the number of current quarter forecasts being more sensitive to negative news. In column (4), we replace the dependent variable with the number of next quarter forecasts and use the number of forecasts issued at all other quarterly horizons as control variables. We find the coefficient on returns remains significantly negative, although it is smaller in magnitude than in column (3). In columns (5) and (6), the coefficient on returns continues to increase monotonically as the forecast horizon lengthens. In column (6), we find a significantly positive coefficient on returns, suggesting that, relative to shorter horizon forecasts, longer horizon forecasts are more sensitive to positive news.

Finally, in Panel C, we re-estimate eq. (5), using as our independent variable of interest market-adjusted returns over the three-day window centered on the prior year earnings announcement. Because earnings announcement returns tend to be driven by public information, this allows clean identification of how the sign of news influences analysts’ responses to information (eliminating concerns of reverse causation, such as analysts’ forecasts causing returns during the quarter). Similar to Panel B, we find (i) significantly negative coefficients on the shortest horizon forecasts (in both annual and quarterly forecast regressions), (ii) significantly positive coefficients on the longest horizon forecasts, and (iii) monotonic increases in the coefficient on returns as the horizon lengthens. Collectively, these results support our contention that analysts’ incentives for optimism increase with the forecast horizon.

Conclusion

We propose a novel measure of forecast inefficiency: the extent to which analysts’ earnings forecasts predict error in their own earnings forecasts at other horizons. Because the analyst has by definition processed the information we use to predict forecast errors, our methodology isolates information that analysts use inefficiently (as opposed to information the analyst did not collect or was unaware of). We document that current year forecasts underweight information in shorter horizon forecasts (the first quarter of the current year) and overweight information in longer horizon forecasts (next year’s forecast). We document that information from other forecasts predicts errors similar to those predicted by an extensive set of firm characteristics, and a similar percentage of error, suggesting a failure to incorporate information in other forecasts explains much of the cross-section of predictable errors. Finally, we measure earnings response coefficients to demonstrate that forecasts adjusted for the inefficiency we document better represent the market’s earnings expectations, relative both to unadjusted analysts’ forecasts and to analysts’ forecasts adjusted for predictable errors from firm characteristics.

Because we construct our model using variation in analysts’ accuracy incentives, we argue our analysis identifies rational forecast inefficiency. While we highlight that this is potentially consistent with several specific interpretations (i.e., less effort or more bias), these findings suggest incentives explain a substantial portion of the cross-section of analyst forecast errors.

Notes

  1.

    Recent studies have also shown that earnings forecasts and stock recommendations generated using machine learning techniques can outperform analysts’ forecasts or forecasts based on time-series models (e.g., Anand et al. 2019; Hunt et al. 2019; Coleman et al. 2020).

  2.

    At longer forecast horizons, analyst forecasts are too extreme, less accurate, and underweight firm characteristics to a larger degree (DeBondt and Thaler 1990; Bradshaw et al. 2012; Raedy et al. 2006).

  3.

    We predict forecast error instead of earnings in the model because we can then interpret the R-squared values as the percentage decrease in bias, but our inferences would remain if we predicted earnings instead.

  4.

    In untabulated analyses, we find the difference between our predicted earnings and published earnings forecasts cannot be used to trade profitably, providing additional evidence our methodology helps identify expected earnings.

  5.

    We predict forecast errors (rather than earnings) so that we can interpret the R-Squared value as the percentage reduction in forecast error. In contrast, DeBondt and Thaler (1990) predict realized earnings as opposed to forecast errors. Our inferences hold if predicting earnings instead of forecast errors.

  6.

    We sum shorter horizon and longer horizon forecasts to make the model more tractable. In untabulated analyses, the inferences are similar if we (i) include all five forecasts as regressors or (ii) include only the first quarter forecast and next year’s forecast. That is, the coefficient on the first quarter forecast increases monotonically, the coefficient on the next year forecast decreases monotonically (becomes more negative), and the R-squared increases monotonically.

  7.

    Current quarter forecasts are also a common horizon to forecast future earnings, but we do not conduct further analysis, because other horizon forecasts do not predict error in the current quarter’s forecast (per Table 2).

  8.

    We divide the annual forecast into two variables (rather than all four quarters) to limit the number of forecast horizons modeled, and we group the second, third, and fourth quarters, because we are primarily interested in the shortest and the longest horizon forecast (not the intermediate forecasts).

  9.

    We also estimate the model in column (2), using median regression (Basu and Markov 2004) with robust standard errors and our inferences remain unchanged.

  10.

    The coefficient on the first quarter forecast remains significantly positive and the coefficient on next year’s forecast remains significantly negative if we include analyst and quarter fixed effects or analyst, firm, and quarter fixed effects.

  11.

    In Section 4.5, we use the frequency of forecasts to provide evidence consistent with this assertion. We show that analysts are more responsive to negative (positive) news in their shorter (longer) horizon forecasts, which is consistent with their having greater incentives to incorporate positive (negative) news in longer (shorter) horizon forecasts.

  12.

    We do not include an interaction with the second through fourth quarter forecast to reduce multi-collinearity and because we make no hypothesis related to this intermediate horizon forecast.

  13.

    For example, to compute the ratio for the NEWS model we take the incremental R-squared from adding firm characteristics (the difference in the column (1) and column (4) R-squared, 17.1% minus 13.6%) and then divide the incremental R-squared by the R-squared from just firm characteristics, 10.7% (i.e., 3.4%/10.7% = 32%). We then subtract the 32% from one, to obtain the percentage of the error predictability explained by our forecast inefficiency model including the NEWS interactions.

  14.

    In untabulated analyses, we interact the first quarter forecast and the next year forecast with an indicator variable denoting the presence of a management forecast—neither interaction is significant.

  15.

    In untabulated analyses, we find adjusting earnings forecasts using other horizon forecasts does not improve cost of capital estimates.

  16.

    Berger et al. (2019) show that, after the initial forecast of the quarter, analysts tend to respond to good news by revising share price targets and bad news by revising the current quarter’s earnings forecast. In this section, we document that longer horizon earnings forecast revisions respond more to good news than shorter horizon earnings forecasts.

References

  1. Abarbanell, J.S. (1991). Do analysts’ earnings forecasts incorporate information in prior stock price changes? Journal of Accounting and Economics 14 (2): 147–165.

  2. Abarbanell, J.S., and L. Bernard. (1992). Tests of analysts’ overreaction/underreaction to earnings information as an explanation for anomalous stock price behavior. The Journal of Finance 47 (3): 1181–1207.

  3. Anand, V., R. Brunner, K. Ikegwu, and T. Sougiannis. (2019). Predicting profitability using machine learning. Working paper, University of Illinois at Urbana-Champaign.

  4. Bagnoli, M., S.G. Watts, and Y. Zhang. (2008). Reg-FD and the competitiveness of all-star analysts. Journal of Accounting and Public Policy 27 (4): 295–316.

  5. Ball, R., and R. Watts. (1972). Some time series properties of accounting income. The Journal of Finance 27 (3): 663–681.

  6. Basu, S., and S. Markov. (2004). Loss function assumptions in rational expectations tests on financial analysts’ earnings forecasts. Journal of Accounting and Economics 38: 171–203.

  7. Berger, P.G., C.G. Ham, and Z.R. Kaplan. (2019). Do analysts say anything about earnings without revising their earnings forecasts? The Accounting Review 94 (2): 29–52.

  8. Bernhardt, D., M. Campello, and E. Kutsoati. (2006). Who herds? Journal of Financial Economics 80 (3): 657–675.

  9. Bradshaw, M.T., M.S. Drake, J.N. Myers, and L.A. Myers. (2012). A re-examination of analysts’ superiority over time-series forecasts of annual earnings. Review of Accounting Studies 17 (4): 944–968.

  10. Bradshaw, M.T., L.D. Brown, and K. Huang. (2013). Do sell-side analysts exhibit differential target price forecasting ability? Review of Accounting Studies 18 (4): 930–955.

  11. Bradshaw, M.T., L.F. Lee, and K. Peterson. (2016). The interactive role of difficulty and incentives in explaining the annual earnings forecast walkdown. The Accounting Review 91 (4): 995–1021.

  12. Brown, L.D., A.C. Call, M.B. Clement, and N.Y. Sharp. (2015). Inside the “black box” of sell-side financial analysts. Journal of Accounting Research 53 (1): 1–47.

  13. Brown, L.D., A.C. Call, M.B. Clement, and N.Y. Sharp. (2016). The activities of buy-side analysts and the determinants of their stock recommendations. Journal of Accounting and Economics 62 (1): 139–156.

  14. Brown, L.D., R.L. Hagerman, P.A. Griffin, and M. Zmijewski. (1987). Security analyst superiority relative to univariate time-series models in forecasting quarterly earnings. Journal of Accounting and Economics 9 (1): 61–87.

  15. Call, A.C., J. Donovan, and J.N. Jennings. (2021). Private lenders’ use of analyst earnings forecasts when establishing debt covenant thresholds. Working paper, Arizona State University and University of Notre Dame and Washington University in St. Louis.

  16. Chen, Q., and W. Jiang. (2006). Analysts’ weighting of private and public information. Review of Financial Studies 19 (1): 319–355.

  17. Coleman, B., K. Merkley, and J. Pacelli. (2020). Man versus machine: A comparison of Robo-analyst and traditional research analyst investment recommendations. Working paper, Indiana University.

  18. Collins, W.A., and W.S. Hopwood. (1980). A multivariate analysis of annual earnings forecasts generated from quarterly forecasts of financial analysts and univariate time-series models. Journal of Accounting Research 18 (2): 390–406.

  19. DeBondt, W.F.M., and R.H. Thaler. (1990). Do security analysts overreact? The American Economic Review 80 (2): 52–57.

  20. Easterwood, J.C., and S.H. Nutt. (1999). Inefficiency in analysts' earnings forecasts: Systematic misreaction or systematic optimism? The Journal of Finance 54 (5): 1777–1797.

  21. Fried, D., and D. Givoly. (1982). Financial analysts' forecasts of earnings: A better surrogate for market expectations. Journal of Accounting and Economics 4 (2): 85–107.

  22. Givoly, D. (1985). The formation of earnings expectations. The Accounting Review 60 (3): 372–386.

  23. Gu, Z., and J.S. Wu. (2003). Earnings skewness and analyst forecast bias. Journal of Accounting and Economics 35 (1): 5–29.

  24. Hou, K., M.A. Van Dijk, and Y. Zhang. (2012). The implied cost of capital: A new approach. Journal of Accounting and Economics 53 (3): 504–526.

  25. Hughes, J., J. Liu, and W. Su. (2008). On the relation between predictable market returns and predictable analyst forecast errors. Review of Accounting Studies 13 (2–3): 266–291.

  26. Hunt, J., J. Myers, and L. Myers. (2019). Improving earnings predictions with machine learning. Working paper, Mississippi State University and University of Tennessee.

  27. Jackson, A.R. (2005). Trade generation, reputation, and sell-side analysts. Journal of Finance 60: 673–717.

  28. Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus, and Giroux.

  29. Kang, S., J. O’Brien, and K. Sivaramakrishnan. (1994). Analysts' interim earnings forecasts: Evidence on the forecasting process. Journal of Accounting Research 32 (1): 103–112.

  30. Ke, B., and Y. Yu. (2006). The effect of issuing biased earnings forecasts on analysts' access to management and survival. Journal of Accounting Research 44 (5): 965–999.

  31. Keane, M.P., and D.E. Runkle. (1998). Are financial analysts' forecasts of corporate profits rational? Journal of Political Economy 106 (4): 768–805.

  32. Kothari, S.P., and R.G. Sloan. (1992). Information in prices about future earnings: Implications for earnings response coefficients. Journal of Accounting and Economics 15 (2): 143–171.

  33. Kross, W., B. Ro, and D. Schroeder. (1990). Earnings expectations: The analysts' information advantage. The Accounting Review 65 (2): 461–476.

  34. Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin 108 (3): 480–498.

  35. Larocque, S. (2013). Analysts’ earnings forecast errors and cost of equity capital estimates. Review of Accounting Studies 18 (1): 135–166.

  36. Li, K.K., and P. Mohanram. (2014). Evaluating cross-sectional forecasting models for implied cost of capital. Review of Accounting Studies 19 (3): 1152–1185.

  37. Lys, T., and S. Sohn. (1990). The association between revisions of financial analysts' earnings forecasts and security-price changes. Journal of Accounting and Economics 13 (4): 341–363.

  38. Lys, T., and L.G. Soo. (1995). Analysts' forecast precision as a response to competition. Journal of Accounting, Auditing & Finance 10 (4): 751–765.

  39. Martin, X., H. Seo, J. Yang, and D.S. Kim. (2018). Target performance goals in CEO compensation contracts and management earnings guidance. Working paper, Indiana University and National University of Singapore and Peking University and Washington University in St. Louis.

  40. McNichols, M., and P.C. O’Brien. (1997). Self-selection and analyst coverage. Journal of Accounting Research 35: 167–199.

  41. Mendenhall, R.R. (1991). Evidence on the possible underweighting of earnings-related information. Journal of Accounting Research 29 (1): 170–179.

  42. O’Brien, P.C. (1988). Analysts' forecasts as earnings expectations. Journal of Accounting and Economics 10 (1): 53–83.

  43. Payne, J.L., and W.B. Thomas. (2003). The implications of using stock-split adjusted I/B/E/S data in empirical research. The Accounting Review 78 (4): 1049–1067.

  44. Raedy, J.S., P. Shane, and Y. Yang. (2006). Horizon-dependent underreaction in financial analysts’ earnings forecasts. Contemporary Accounting Research 23 (1): 291–322.

  45. Richardson, S.A., S.H. Teoh, and P.D. Wysocki. (2004). The walk-down to beatable analyst forecasts: The role of equity issuance and insider trading incentives. Contemporary Accounting Research 21 (3): 885–924.

  46. Scherbina, A. (2008). Suppressed negative information and future underperformance. Review of Finance 12 (3): 533–565.

  47. Schipper, K. (1991). Commentary on analysts' forecasts. Accounting Horizons 5 (4): 105–121.

  48. So, E.C. (2013). A new approach to predicting analyst forecast errors: Do investors overweight analyst forecasts? Journal of Financial Economics 108 (3): 615–640.

  49. Stickel, S. (1990). Predicting individual analyst earnings forecasts. Journal of Accounting Research 28: 409–417.

  50. Vuong, Q.H. (1989). Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica 57 (2): 307–333.

Acknowledgments

The authors thank Peter Easton (editor), two anonymous reviewers, Phil Berger, Jonathan Black (discussant), Wei Chen, Jimmy Downes (discussant), Rich Frankel, Jeremiah Green, Jared Jennings, Stephannie Larocque, Xiumin Martin, Joe Pacelli, Oded Rozenbaum, Sunny Yang, as well as seminar participants at University of Connecticut, University of Notre Dame, Villanova University, Washington University in St. Louis, the AAA Annual Meeting, and the AAA Midwest Meeting for helpful discussions and comments.

Author information

Corresponding author

Correspondence to Charles G. Ham.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix A Variable Definitions
Appendix B Sample Selection

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Ham, C.G., Kaplan, Z.R. & Lemayian, Z.R. Rationalizing forecast inefficiency. Rev Account Stud (2021). https://doi.org/10.1007/s11142-021-09622-8

Keywords

  • Sell-side analysts
  • Forecast bias
  • Forecast horizon
  • Analyst forecasts

JEL classifications

  • G17
  • M41