Conclusion

John D. Levendis

Chapter, part of the Springer Texts in Business and Economics book series (STBE)

Abstract

In this text, we have explored some of the more common time-series econometric techniques. The approach has centered on developing a practical knowledge of the field by replicating basic examples and seminal research. But there is a lot of bad research out there, and you would do well not to replicate the field's worst practices.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • John D. Levendis
  1. Department of Economics, Loyola University New Orleans, New Orleans, USA
