
Determinants of analysts’ revenue forecast accuracy

  • Original Research
  • Published in: Review of Quantitative Finance and Accounting

Abstract

This study investigates financial analysts’ revenue forecasts and identifies determinants of the forecasts’ accuracy. We find that revenue forecast accuracy is determined by forecast and analyst characteristics similar to those of earnings forecast accuracy—namely, forecast horizon, days elapsed since the last forecast, analysts’ forecasting experience, forecast frequency, forecast portfolio, reputation, earnings forecast issuance, forecast boldness, and analysts’ prior performance in forecasting revenues and earnings. We develop a model that predicts the usefulness of revenue forecasts and thus helps identify more accurate revenue forecasts ex ante. Furthermore, we find that analysts care about their revenue forecasting performance: analysts with poor revenue forecasting performance are more likely to stop forecasting revenues than analysts with better performance. This decision is rational because revenue forecast accuracy affects analysts’ career prospects in terms of promotion and termination. Our study helps investors and academic researchers understand the determinants of revenue forecasts. This understanding is also valuable for evaluating earnings forecasts because revenue forecasts reveal whether changes in earnings forecasts are due to anticipated changes in revenues or expenses.


Notes

1. We measure experience and portfolio complexity as revenue specific; that is, we employ analysts’ firm-specific and general experience in forecasting revenues and the complexity of analysts’ revenue forecast portfolios (measured by the number of covered firms and industries).

2. In 1998, 13.92% (572/4110) of the analysts in I/B/E/S provided at least one revenue forecast; in 1997, only 1.57% (53/3371) did.

  3. If too few analysts cover a specific firm-year, then the range-adjusted variables might yield weak differentiation criteria. For example, if a firm is covered by only two analysts, they are assigned the values 0 and 1.
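The range adjustment behind this note can be sketched as a min-max scaling across the analysts covering one firm-year. This is an illustrative sketch only; the paper's exact formula and its tie-handling convention are not reproduced here, so both are assumptions:

```python
def range_adjust(values):
    """Min-max scale one firm-year's raw analyst characteristics so the
    smallest value maps to 0 and the largest to 1 (illustrative sketch;
    the tie-handling convention below is an assumption, not from the paper)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # All analysts identical: the spread is undefined; assign 0 by convention.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# With only two analysts, they necessarily land at the extremes 0 and 1,
# which is the weak-differentiation problem described in the note:
print(range_adjust([3, 7]))       # -> [0.0, 1.0]
print(range_adjust([5, 8, 11]))   # -> [0.0, 0.5, 1.0]
```

With more analysts per firm-year, the interior values spread out and the adjusted variable becomes a more informative ranking.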

  4. The number of covered firms (15.30) diverges from the reported average number of covered firms in Table 1, Panel B (9.69). We report the average over all forecasts in Table 2, Panel A, while we report the average over all analysts in Table 1, Panel B.

  5. We can directly compare the coefficient estimates because all variables are adjusted and range between 0 and 1.

  6. In contrast, we find the expected effect for the other portfolio complexity measure \( Industries \) (coefficient estimate − 0.0090, t value − 3.66); that is, revenue forecasts are less accurate if the analyst covers a larger number of different industries. This finding is reasonable because the practices of revenue recognition might differ between industries.

  7. Keung (2010) conducts a related analysis and finds that 79.20% of the forecasts are consistent. However, Keung (2010) aims to compare earnings forecasts that are not accompanied by revenue forecasts versus consistent pairs versus inconsistent pairs. He finds that earnings forecasts are most accurate when accompanied by consistent revenue forecasts and least accurate when not accompanied by revenue forecasts at all.

  8. We excluded 51,295 observations for the following reasons: the analyst’s prior forecast equals the current one; the consensus forecast equals the current one; or the consensus immediately before the forecast could not be determined.

9. To determine whether an analyst keeps issuing revenue forecasts, we employ all revenue forecast data available in I/B/E/S rather than restricting this determination to our sample.

10. Similar to Clement and Tse (2005, 325), we make this inference because \( Accuracy_{ijt - 1} \) is scaled to range from 0 (for the analyst with the worst revenue forecasting performance for firm \( j \) in fiscal year \( t - 1 \)) to 1 (for the analyst with the best performance for firm \( j \) in \( t - 1 \)). The odds ratio (0.292) is the multiplicative effect of a one-unit difference in \( Accuracy_{ijt - 1} \) on the odds of observing \( Firm\_Stop_{ijt} = 1 \). By construction, a one-unit difference in \( Accuracy_{ijt - 1} \) corresponds to the difference between the worst and the best revenue forecaster for firm \( j \) in fiscal year \( t - 1 \). Hence, the odds that the best forecaster for firm \( j \) in \( t - 1 \) stops forecasting in year \( t \) are only 0.292 times the odds that the worst forecaster stops, that is, 70.8% smaller.
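The odds-ratio arithmetic in this note can be made explicit with a short sketch. The 0.292 figure is taken from the note; the logit coefficient is back-derived here purely for illustration and is not reported in the paper:

```python
import math

odds_ratio = 0.292           # reported odds ratio for Accuracy_{ij,t-1}
coef = math.log(odds_ratio)  # implied logit coefficient (negative, back-derived)

# A one-unit difference in Accuracy (worst -> best forecaster) multiplies
# the odds of stopping coverage by the odds ratio, so the best forecaster's
# odds are (1 - 0.292) = 70.8% smaller than the worst forecaster's:
pct_smaller_odds = (1.0 - odds_ratio) * 100.0
print(round(coef, 3))              # -> -1.231
print(round(pct_smaller_odds, 1))  # -> 70.8
```

The same arithmetic applied to note 12's odds ratio of 0.065 yields the 93.5% figure reported there.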

11. We employ \( \overline{Accuracy}_{it - 1} \), instead of \( \frac{1}{J}\mathop \sum \limits_{j = 1}^{J} Lagged\_Accuracy_{ijt} \), because we attempt to measure the average revenue forecast accuracy over all firms that analyst \( i \) covers in year \( t - 1 \); \( \frac{1}{J}\mathop \sum \limits_{j = 1}^{J} Lagged\_Accuracy_{ijt} \) can only be calculated for firms that the analyst covers in two consecutive years.

  12. This inference is conducted from the odds ratio (0.065), which means that an analyst with \( \overline{Accuracy}_{it - 1} = 1 \) has only 0.065 times the odds to stop forecasting revenues compared to an analyst with \( \overline{Accuracy}_{it - 1} = 0 \). This corresponds to 93.5% smaller odds.

13. Carter and Manaster (1990) base their ranking on the hierarchy of an underwriter’s position in stock offering “tombstone” announcements. Each underwriter is assigned a rank between 0 (low prestige) and 9 (high prestige). Loughran and Ritter (2004) update the rankings at regular intervals and assign values between 1.1 and 9.1 to the brokerage houses (the added 0.1 distinguishes their ranking from Carter and Manaster’s (1990)). We use the data supplied on Jay Ritter’s homepage. For discussions of the ranking process, see Carter and Manaster (1990) or Carter et al. (1998).

14. Ertimur et al. (2011) conduct a related analysis. However, they employ a multivariate logit model instead of two separate logit models, measuring the impact of revenue forecasts on promotion and demotion simultaneously. Furthermore, Ertimur et al. (2011) use a notably smaller sample (8954 vs. 25,668 observations) and a shorter time span (1999–2006 vs. 1999–2014).


Author information

Correspondence to Tanja Lorenz.

Appendices

Appendix 1

See Table 10.

Table 10 Variables for the main analysis of determinants of analysts’ revenue forecast accuracy

Appendix 2

See Table 11.

Table 11 Variables for additional analyses on the analyst-firm-year level

Appendix 3

See Table 12.

Table 12 Variables for additional analyses on the analyst-year level


About this article


Cite this article

Lorenz, T., Homburg, C. Determinants of analysts’ revenue forecast accuracy. Rev Quant Finan Acc 51, 389–431 (2018). https://doi.org/10.1007/s11156-017-0675-4

