Abstract
Using data envelopment analysis (DEA) and statistical inference, this paper evaluates the citation performance of 229 economic journals. The journals are grouped into four main categories (A–D) according to their efficiency levels, and the results are compared with the 27 “core economic journals” introduced by Diamond (Curr Contents 21(1):4–11, 1989). The results reveal that, after more than 20 years, Diamond’s list of “core economic journals” is still valid. Finally, for the first time, the paper combines data from four well-known databases (SSCI, Scopus, RePEc, Econlit) and two quality ranking reports (the Kiel Institute internal ranking and the ABS quality ranking report) in a DEA setting in order to derive a ranking of the 229 economic journals. The ten economic journals with the highest citation performance are the Journal of Political Economy, Econometrica, the Quarterly Journal of Economics, the Journal of Financial Economics, the Journal of Economic Literature, the American Economic Review, the Review of Economic Studies, the Journal of Econometrics, the Journal of Finance, and Brookings Papers on Economic Activity.
Notes
KIEL internal rankings for 2009 can be downloaded from http://www.ifw-kiel.de/academy/Journal%20Ranking%203%20Jan%2009.pdf. Accessed 13 November 2010.
ABS Academic Journal Quality Guide can be found at http://www.the-abs.org.uk/?id=257. Accessed 13 November 2010.
RePEc data can be retrieved from http://ideas.repec.org/top/top.journals.simple.html.
Data from Social Science Citation Index can be retrieved from http://thomsonreuters.com/products_services/science/science_products/a-z/social_sciences_citation_index. Accessed 13 November 2010.
SCOPUS data can be retrieved from http://www.scopus.com/home.url. Accessed 13 November 2010.
Data from Econlit database can be retrieved from http://www.aeaweb.org/econlit/journal_list.php. Accessed 13 November 2010.
When a journal had been in the SSCI database for fewer than 5 years, the latest available impact factor (i.e. for 2009) has been used.
In the Kiel report the journals are graded from “A” (high quality journal) to “D” (lower quality journal); we assign the value 4 to “A”, 3 to “B”, 2 to “C” and 1 to “D”. Similarly, the ABS report assigns one of four quality values to each journal (1, 2, 3 and 4): the highest quality is signified by “4” and the lowest by “1”. In contrast with the Kiel quality assessment, the ABS guide “grasps” the quality of the journals within their subject area (i.e. Accounting and Auditing, Finance, Economics, etc.).
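The grade conversion described in this note can be sketched in a few lines; the helper name and dictionary below are hypothetical, introduced only for illustration:

```python
# Hypothetical helper: converts the Kiel letter grades ("A"-"D") to the
# same 1-4 numeric scale used by the ABS guide, as described in the note.
KIEL_TO_SCORE = {"A": 4, "B": 3, "C": 2, "D": 1}

def quality_score(grade: str) -> int:
    """Return the numeric quality score for a Kiel grade ('A'-'D')."""
    return KIEL_TO_SCORE[grade.upper()]
```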
The results of the BCC model are available upon request.
We categorized the journals in this way to make the results comparable with the two main quality ranking reports: that of the Kiel Institute (which separates journals into four categories, from “A” to “D”) and that of the ABS (which likewise assigns economics and other discipline journals to four quality categories, from 1 to 4).
As stated previously, only the economic journals that are registered and measured in all six main databases/reports (Econlit, SSCI, RePEc, Scopus, Kiel ranking, ABS quality ranking report) are considered for evaluation.
We assumed that journals’ self-citations have been used to promote and support ongoing research.
References
Bakkalbasi, N., Bauer, K., Glover, J., & Wang, L. (2006). Three options for citation tracking: Google Scholar, Scopus and Web of Science. Biomedical Digital Libraries, 3(7), 1–8.
Banker, R. D., Charnes, A., & Cooper, W. W. (1984). Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science, 30(9), 1078–1092.
Bar-Ilan, J. (2010). Citations to the “Introduction to Informetrics” indexed by WoS, Scopus and Google Scholar. Scientometrics, 82(3), 495–506.
Bauer, K., & Bakkalbasi, N. (2005). An examination of citation counts in a new scholarly communication environment. D-Lib Magazine, 11(9). http://www.dlib.org/dlib/september05/bauer/09bauer.html.
Boles, J. N. (1967). Efficiency squared—Efficient computation of efficiency indexes. In Western Farm Economic Association Proceedings 1966 (pp. 137–142).
Boles, J. N. (1971). The 1130 Farrell efficiency system—Multiple products, multiple factors. Berkeley: Giannini Foundation of Agricultural Economics, University of California.
Bollen, J., & Van de Sompel, H. (2008). Usage Impact Factor: The effects of sample characteristics on usage-based impact metrics. Journal of the American Society for Information Science and Technology, 59(1), 136–149.
Bonaccorsi, A., & Daraio, C. (2008). The differentiation of the strategic profile of higher education institutions. New positioning indicators based on microdata. Scientometrics, 74(1), 15–37.
Bonaccorsi, A., Daraio, C., & Simar, L. (2006). Advanced indicators of productivity of universities. An application of robust nonparametric methods to Italian data. Scientometrics, 66(2), 389–410.
Burton, M. P., & Phimister, E. (1995). Core journals: A reappraisal of the Diamond list. Economic Journal, 105(429), 361–373.
Charnes, A., Cooper, W. W., & Rhodes, E. L. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2(6), 429–444.
Coelli, T. J., & Perelman, S. (1999). A comparison of parametric and non-parametric distance functions: With applications to European railways. European Journal of Operational Research, 117(2), 326–339.
Coelli, T. J., Rao, D. S. P., O’Donnell, C. J., & Battese, G. E. (2005). An introduction to efficiency and productivity analysis (2nd ed.). New York: Springer Science.
Cook, W. D., Golany, B., Penn, M., & Raviv, T. (2007). Creating a consensus ranking of proposals from reviewers’ partial ordinal rankings. Computers & Operations Research, 34(4), 954–965.
Cook, W. D., Raviv, T., & Richardson, A. J. (2010). Aggregating incomplete lists of journal rankings: An application to academic accounting journals. Accounting Perspectives, 9(3), 217–235.
Debreu, G. (1951). The coefficient of resource utilization. Econometrica, 19(3), 273–292.
Diamond, A. M. (1989). The core journals of economics. Current Contents, 21(1), 4–11.
Efron, B. (1979). Bootstrap methods: Another look at the jackknife. Annals of Statistics, 7(1), 1–16.
Etxebarria, G., & Gomez-Uranga, M. (2010). Use of Scopus and Google Scholar to measure social sciences production in four major Spanish universities. Scientometrics, 82(2), 333–349.
Farrell, M. (1957). The measurement of productive efficiency. Journal of the Royal Statistical Society Series A, 120(3), 253–281.
Førsund, F. R., Kittelsen, S. A. C., & Krivonozhko, V. E. (2009). Farrell revisited—Visualizing properties of DEA production frontiers. Journal of the Operational Research Society, 60(11), 1535–1545.
Førsund, F. R., & Sarafoglou, N. (2002). On the origins of data envelopment analysis. Journal of Productivity Analysis, 17(1/2), 23–40.
Franceschet, M. (2010). A comparison of bibliometric indicators for computer science scholars and journals on Web of Science and Google Scholar. Scientometrics, 83(1), 243–258.
Garfield, E. (1955). Citation indexes to science: A new dimension in documentation through association of ideas. Science, 122(3159), 108–111.
Garfield, E. (1979). Citation indexing: Its theory and applications in science, technology and humanities. New York: Wiley Interscience.
Garfield, E. (2005). The agony and the ecstasy—The history and meaning of the journal Impact Factor. In International congress on peer review and biomedical publication, Chicago.
Glanzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193.
Halkos, G., & Tzeremes, N. (2007). International competitiveness in the ICT industry: Evaluating the performance of the top 50 companies. Global Economic Review, 36(2), 167–168.
Halkos, G., & Tzeremes, N. (2010). The effect of foreign ownership on SMEs performance: An efficiency analysis perspective. Journal of Productivity Analysis, 34(2), 167–180.
Harvey, C., Kelly, A., Morris, H., & Rowlinson, M. (2010). Academic journal quality guide, Version 4. The Association of Business Schools.
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences, 102, 16569–16572.
Hoffman, A. J. (1957). Discussion on Mr. Farrell’s Paper. Journal of the Royal Statistical Society Series A, 120(3), 284.
Kalaitzidakis, P., Mamuneas, T. P., & Stengos, T. (2003). Rankings of academic journals and institutions in economics. Journal of the European Economic Association, 1(6), 1346–1366.
Kalaitzidakis, P., Mamuneas T. P., & Stengos, T. (2010). An updated ranking of academic journals in economics, WP 10-15. The Rimini Centre for Economic Analysis.
Kiel (2010). Criteria for research publications. Kiel Institute for World Economy. http://www.ifw-kiel.de/academy/criteria-for-research-publications.
Klavans, R., & Boyack, K. (2009). Toward a consensus map of science. Journal of the American Society for Information Science and Technology, 60(3), 455–476.
Koczy, L. A., & Strobel, M. (2007). The ranking of economics journals by a tournament method. Mimeo.
Kodrzycki, Y. K., & Yu, P. (2006). New approaches to ranking economic journals. Contributions to Economic Analysis and Policy, 5(1), Art. 24.
Koopmans, T. C. (1951). An analysis of production as an efficient combination of activities. In T. C. Koopmans (Ed.), Activity analysis of production and allocation (pp. 33–97). New York: Wiley.
Kousha, K., & Thelwall, M. (2008). Sources of Google Scholar citations outside the Science Citation Index: A comparison between four science disciplines. Scientometrics, 74(2), 273–294.
Laband, D. N., & Piette, M. J. (1994). The relative impacts of economics journals: 1970–1990. Journal of Economic Literature, 32(2), 640–666.
Leydesdorff, L., de Moya-Anegon, F., & Guerrero-Bote, V. P. (2010). Journal maps on the basis of Scopus data: A comparison with Journal Citation Reports of the ISI. Journal of the American Society for Information Science and Technology, 61(2), 352–369.
Liebowitz, S. J., & Palmer, J. C. (1984). Assessing the relative impacts of economics journals. Journal of Economic Literature, 22, 77–88.
Liner, G. H., & Amin, M. (2004). Methods of ranking economic journals. Atlantic Economic Journal, 32(2), 140–149.
Lopez-Illescas, C., de Moya-Anegon, F., & Moed, H. F. (2008). Coverage and citation impact of oncological journals in the Web of Science and Scopus. Journal of Informetrics, 2, 304–316.
Lovell, C. A. L., & Schmidt, P. (1988). A comparison of alternative approaches to the measurement of productive efficiency. In A. Dogramaci & R. Färe (Eds.), Applications of modern production theory: Efficiency and productivity. Boston: Kluwer.
Meho, L. I., & Yang, K. (2007). Impact of data sources on citation counts and ranking of LIS faculty: Web of Science versus Scopus and Google Scholar. Journal of the American Society for Information Science and Technology, 58(13), 2105–2125.
Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics. doi:10.1016/j.joi.2010.01.002.
Norris, M., & Oppenheim, C. (2007). Comparing alternatives to the Web of Science for coverage of the social sciences’ literature. Journal of Informetrics, 1(2), 161–169.
Noruzi, A. (2005). Google Scholar: The new generation of citation indexes. Libri, 55(4), 170–180.
Palacios-Huerta, I., & Volij, O. (2004). The measurement of intellectual influence. Econometrica, 72(3), 963–977.
Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: Theory, with application to the literature of Physics. Information Processing & Management, 12(5), 297–312.
Pudovkin, A. I., & Garfield, E. (2004). Rank-normalized Impact Factor: A way to compare journal performance across subject categories. In Proceedings of the 67th ASIS&T annual meeting, 17, November, 2004. http://www.garfield.library.upenn.edu/papers/asistranknormalization2004.pdf.
Pujol, F. (2008). Ranking journals following a matching model approach: An application to public economic journals. Journal of Public Economic Theory, 10(1), 55–76.
Rainer, K. R., & Miller, M. D. (2005). Examining differences across journal rankings. Communications of the ACM, 48(2), 91–94.
Ritzberger, K. (2008). A ranking of journals in economics and related fields. German Economic Review, 9(4), 402–430.
Schneider, F., & Ursprung, H. W. (2008). The 2008 GEA journal-ranking for the economics profession. German Economic Review, 9(4), 532–538.
Shephard, R. W. (1970). Theory of cost and production functions. Princeton: Princeton University Press.
Simar, L., & Wilson, P. W. (1998). Sensitivity analysis of efficiency scores: How to bootstrap in non parametric frontier models. Management Science, 44(1), 49–61.
Simar, L., & Wilson, P. W. (2000). A general methodology for bootstrapping in non-parametric frontier models. Journal of Applied Statistics, 27(6), 779–802.
Simar, L., & Wilson, P. W. (2002). Non parametric tests of return to scale. European Journal of Operational Research, 139(1), 115–132.
Simar, L., & Wilson, P. (2008). Statistical inference in nonparametric frontier models: Recent developments and perspectives. In H. Fried, C. A. K. Lovell, & S. Schmidt (Eds.), The measurement of productive efficiency and productivity change (pp. 421–521). New York: Oxford University Press.
Theussl, S., & Hornik, K. (2009). Journal ratings and their consensus ranking. In Operations research proceedings 2008. Berlin: Springer-Verlag. doi:10.1007/978-3-642-00142-0_65.
Zitt, M., & Small, H. (2008). Modifying the journal Impact Factor by fractional citation weighting: The Audience Factor. Journal of the American Society for Information Science and Technology, 59(11), 1856–1860.
Acknowledgments
We would like to thank Professor Tibor Braun and the anonymous reviewers for their comments and suggestions on an earlier version of our paper. Finally, we would like to thank Panayiotis Tzeremes for his assistance in collecting the journals’ information. Any remaining errors are solely the authors’ responsibility.
Appendix: methodology and statistical techniques applied
Based on the work of Koopmans (1951) and Debreu (1951), the production set Ψ constrains the production process and is the set of physically attainable points (x, y):
\( \Uppsi = \left\{ {\left( {x,y} \right) \in \Re_{ + }^{N + M} :x{\text{ can produce }}y} \right\} \)
where \( x \in \Re_{ + }^{N} \) is the input vector and \( y \in \Re_{ + }^{M} \) is the output vector. The input-oriented efficiency score of a journal operating at the level (x, y) is then defined as:
\( \theta \left( {x,y} \right) = \inf \left\{ {\theta > 0:\left( {\theta x,y} \right) \in \Uppsi } \right\} \)
DEA became popular when Charnes et al. (1978) introduced it to estimate Ψ allowing for constant returns to scale (the CRS model). Later, Banker et al. (1984) introduced a DEA estimator allowing for variable returns to scale (the VRS model). In our case, when evaluating journals’ citation performance, input-oriented DEA models have been applied because input quantities appear to be the primary decision variables (Coelli and Perelman 1999; Coelli et al. 2005; Halkos and Tzeremes 2010). Both the quality of the papers appearing in a journal and the number of papers published (i.e. the number of issues and volumes) are subject to the editors’ decisions; the decision makers therefore have more control over the inputs than over the outputs used. The CRS estimator of Charnes et al. (1978) can be calculated as:
\( \hat{\Uppsi }_{\text{CRS}} = \left\{ {\left( {x,y} \right) \in \Re_{ + }^{N + M} :y \le \sum\nolimits_{i = 1}^{n} {\gamma_{i} y_{i} } ,\;x \ge \sum\nolimits_{i = 1}^{n} {\gamma_{i} x_{i} } ,\;\gamma_{i} \ge 0,\;i = 1, \ldots ,n} \right\} \)
The VRS estimator of Banker et al. (1984), allowing for variable returns to scale, can then be calculated as:
\( \hat{\Uppsi }_{\text{VRS}} = \left\{ {\left( {x,y} \right) \in \Re_{ + }^{N + M} :y \le \sum\nolimits_{i = 1}^{n} {\gamma_{i} y_{i} } ,\;x \ge \sum\nolimits_{i = 1}^{n} {\gamma_{i} x_{i} } ,\;\sum\nolimits_{i = 1}^{n} {\gamma_{i} } = 1,\;\gamma_{i} \ge 0,\;i = 1, \ldots ,n} \right\} \)
Then, in order to obtain the corresponding input-oriented DEA estimators of the efficiency scores, we plug \( \hat{\Uppsi }_{\text{CRS}} \) and \( \hat{\Uppsi }_{\text{VRS}} \), respectively, into Eq. 5 presented previously.
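For concreteness, the input-oriented CRS (CCR) efficiency score of a single decision making unit can be computed as a small linear program. The sketch below uses `scipy.optimize.linprog` as the solver; it is an illustration of the standard CCR model, not the authors' exact implementation:

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_crs(X, Y, i):
    """Input-oriented CRS (CCR) efficiency score of unit i.

    X: (n, N) array of inputs, Y: (n, M) array of outputs.
    Solves: min theta  s.t.  sum_j l_j * y_j >= y_i,
                             sum_j l_j * x_j <= theta * x_i,  l_j >= 0.
    Decision variables: [theta, l_1, ..., l_n].
    """
    n, N = X.shape
    M = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta
    A_out = np.hstack([np.zeros((M, 1)), -Y.T])    # -sum_j l_j y_j <= -y_i
    A_in = np.hstack([-X[[i]].T, X.T])             # sum_j l_j x_j - theta x_i <= 0
    res = linprog(c,
                  A_ub=np.vstack([A_out, A_in]),
                  b_ub=np.r_[-Y[i], np.zeros(N)],
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.fun
```

With one input and one output, the CRS score reduces to each unit's output/input ratio divided by the best observed ratio, which gives a quick sanity check for the solver.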
Simar and Wilson (1998, 2000, 2008) showed that DEA estimators are biased by construction. They introduced an approach based on bootstrap techniques (Efron 1979) to estimate and correct the bias of the DEA efficiency indicators. The bootstrap bias estimate for the original DEA estimator \( \hat{\theta }_{\text{DEA}} (x,y) \) can be calculated as:
\( \widehat{\text{BIAS}}_{B} \left( {\hat{\theta }_{\text{DEA}} \left( {x,y} \right)} \right) = B^{ - 1} \sum\nolimits_{b = 1}^{B} {\hat{\theta }_{{{\text{DEA}},b}}^{*} \left( {x,y} \right)} - \hat{\theta }_{\text{DEA}} \left( {x,y} \right) \)
Here \( \hat{\theta }_{{{\text{DEA}},b}}^{*} (x,y) \) are the bootstrap values and B is the number of bootstrap replications. A bias-corrected estimator of θ(x, y) can then be calculated as:
\( \hat{\hat{\theta }}_{\text{DEA}} \left( {x,y} \right) = \hat{\theta }_{\text{DEA}} \left( {x,y} \right) - \widehat{\text{BIAS}}_{B} \left( {\hat{\theta }_{\text{DEA}} \left( {x,y} \right)} \right) = 2\hat{\theta }_{\text{DEA}} \left( {x,y} \right) - B^{ - 1} \sum\nolimits_{b = 1}^{B} {\hat{\theta }_{{{\text{DEA}},b}}^{*} \left( {x,y} \right)} \)
However, according to Simar and Wilson (2008), this bias correction can introduce additional noise, so the sample variance of the bootstrap values \( \hat{\theta }_{{{\text{DEA}},b}}^{*} (x,y) \) needs to be calculated:
\( \hat{\sigma }^{2} = B^{ - 1} \sum\nolimits_{b = 1}^{B} {\left[ {\hat{\theta }_{{{\text{DEA}},b}}^{*} \left( {x,y} \right) - B^{ - 1} \sum\nolimits_{b = 1}^{B} {\hat{\theta }_{{{\text{DEA}},b}}^{*} \left( {x,y} \right)} } \right]^{2} } \)
The bias correction illustrated in (9) should be avoided unless:
\( \frac{1}{3}\left[ {\widehat{\text{BIAS}}_{B} \left( {\hat{\theta }_{\text{DEA}} \left( {x,y} \right)} \right)} \right]^{2} > \hat{\sigma }^{2} \)
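Given a DEA score and its bootstrap replications, the bias estimate, the bias-corrected score, the bootstrap variance, and the rule-of-thumb check can be computed directly. This is a minimal numpy sketch of the quantities just described, not the authors' code:

```python
import numpy as np

def bias_correct(theta_hat, theta_boot):
    """Bootstrap bias correction for a DEA efficiency score
    (in the spirit of Simar & Wilson 1998, 2008).

    theta_hat:  original DEA estimate for one unit.
    theta_boot: array of B bootstrap values for that unit.
    Returns (corrected score, bias estimate, bootstrap variance,
    whether the rule of thumb favours applying the correction).
    """
    theta_boot = np.asarray(theta_boot, dtype=float)
    bias = theta_boot.mean() - theta_hat       # bootstrap bias estimate
    corrected = theta_hat - bias               # = 2*theta_hat - bootstrap mean
    var = theta_boot.var()                     # sample variance of bootstrap values
    use_correction = bias ** 2 > 3.0 * var     # correct only if (1/3)*bias^2 > var
    return corrected, bias, var, use_correction
```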
Following Shephard (1970), the input distance function can be expressed as \( \hat{\delta }_{\text{DEA}} \left( {x,y} \right) \equiv \frac{1}{{\hat{\theta }_{\text{DEA}} \left( {x,y} \right)}} \). We can then construct bootstrap confidence intervals for \( \hat{\delta }_{\text{DEA}} \left( {x,y} \right) \) by finding values \( \hat{a}_{\alpha } ,\hat{b}_{\alpha } \) such that \( \Pr \left( { - \hat{b}_{\alpha } \le \hat{\delta }_{\text{DEA}}^{*} \left( {x,y} \right) - \hat{\delta }_{\text{DEA}} \left( {x,y} \right) \le - \hat{a}_{\alpha } } \right) \approx 1 - \alpha \) over the bootstrap values, which yields the interval \( \hat{\delta }_{\text{DEA}} \left( {x,y} \right) + \hat{a}_{\alpha } \le \delta \left( {x,y} \right) \le \hat{\delta }_{\text{DEA}} \left( {x,y} \right) + \hat{b}_{\alpha } \).
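The bootstrap interval construction for the distance function can be sketched with numpy quantiles; this is a simple percentile-of-differences illustration under the stated assumptions, not the authors' exact bootstrap algorithm:

```python
import numpy as np

def distance_ci(delta_hat, delta_boot, alpha=0.05):
    """Bootstrap confidence interval for the input distance function
    delta = 1/theta. Estimates a_alpha and b_alpha from the quantiles of
    (delta* - delta_hat) and returns [delta_hat + a, delta_hat + b]."""
    diffs = np.asarray(delta_boot, dtype=float) - delta_hat
    lo_q = np.quantile(diffs, alpha / 2)        # plays the role of -b_alpha
    hi_q = np.quantile(diffs, 1 - alpha / 2)    # plays the role of -a_alpha
    return delta_hat - hi_q, delta_hat - lo_q
```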
In order to choose between the results obtained from the CCR (Charnes et al. 1978) and BCC (Banker et al. 1984) models, and to check the consistency of our results, we adopt the method introduced by Simar and Wilson (2002). We compute the DEA efficiency scores under the CRS and VRS assumptions and, using the bootstrap algorithm, test the null hypothesis of constant returns to scale against the alternative of variable returns to scale:
\( H_{0} :\Uppsi {\text{ exhibits CRS}}\quad {\text{versus}}\quad H_{1} :\Uppsi {\text{ exhibits VRS}} \)
The test statistic is given as:
\( T = n^{ - 1} \sum\nolimits_{i = 1}^{n} {\frac{{\hat{\theta }_{\text{CRS,DEA}} \left( {x_{i} ,y_{i} } \right)}}{{\hat{\theta }_{\text{VRS,DEA}} \left( {x_{i} ,y_{i} } \right)}}} \)
The p-value of the null hypothesis can then be approximated by the proportion of bootstrap samples:
\( \hat{p} = B^{ - 1} \sum\nolimits_{b = 1}^{B} {I\left( {T^{*,b} \le T_{\text{obs}} } \right)} \)
where B = 2000 is the number of bootstrap replications, I(·) is the indicator function, \( T^{*,b} \) are the bootstrap values of the test statistic, and \( T_{\text{obs}} \) denotes the value computed from the original observations.
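Given the CRS and VRS scores for the original sample and for each bootstrap replication, the statistic and its p-value can be computed as follows. The sketch assumes the mean-of-ratios form of the statistic, and the array names are illustrative only:

```python
import numpy as np

def rts_test(theta_crs, theta_vrs, theta_crs_boot, theta_vrs_boot):
    """Bootstrap returns-to-scale test in the spirit of Simar & Wilson (2002).

    theta_crs, theta_vrs:           length-n arrays of original DEA scores.
    theta_crs_boot, theta_vrs_boot: (B, n) arrays of bootstrap scores.
    Returns the observed statistic and the bootstrap p-value; small
    values of the statistic speak against the CRS null hypothesis.
    """
    t_obs = np.mean(np.asarray(theta_crs, dtype=float) /
                    np.asarray(theta_vrs, dtype=float))
    t_boot = np.mean(np.asarray(theta_crs_boot, dtype=float) /
                     np.asarray(theta_vrs_boot, dtype=float), axis=1)
    p_value = np.mean(t_boot <= t_obs)   # proportion of bootstrap samples
    return t_obs, p_value
```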
Halkos, G.E., Tzeremes, N.G. Measuring economic journals’ citation efficiency: a data envelopment analysis approach. Scientometrics 88, 979–1001 (2011). https://doi.org/10.1007/s11192-011-0421-y