
Scientometrics, Volume 102, Issue 1, pp 727–749

Proof over promise: towards a more inclusive ranking of Dutch academics in Economics & Business

  • Anne-Wil Harzing
  • Wilfred Mijnhardt

Abstract

The Dutch Economics top-40, based on publications in ISI listed journals, is—to the best of our knowledge—the oldest ranking of individual academics in Economics and is well accepted in the Dutch academic community. However, this ranking is based on publication volume, rather than on the actual impact of the publications in question. This paper therefore uses two relatively new metrics, the citations per author per year (CAY) metric and the individual annual h-index (hIa) to provide two alternative, citation-based, rankings of Dutch academics in Economics & Business. As a data source, we use Google Scholar instead of ISI to provide a more comprehensive measure of impact, including citations to and from publications in non-ISI listed journals, books, working and conference papers. The resulting rankings are shown to be substantially different from the original ranking based on publications. Just like other research metrics, the CAY or hIa-index should never be used as the sole criterion to evaluate academics. However, we do argue that the hIa-index and the related CAY metric provide an important additional perspective over and above a ranking based on publications in high impact journals alone. Citation-based rankings are also shown to inject a higher level of diversity in terms of age, gender, discipline and academic affiliation and thus appear to be more inclusive of a wider range of scholarship.

Keywords

Rankings · Citations · ISI · Google Scholar · Economics

Introduction

Economists love to rank. Even a casual five-minute literature search reveals literally hundreds of publications on rankings of academic productivity and impact. There are numerous rankings of universities (e.g., Kalaitzidakis et al. 2003), departments (e.g. Scott and Mitias 1996; García et al. 2012), journals (e.g. Kodrzycki and Yu 2006; Harzing and van der Wal 2009) and individuals (e.g. Tol 2009; Prathap 2010). Dutch economists are no exception. In fact, they produced what is, to the best of our knowledge, the oldest ranking in the field: a nation-wide ranking of Economists (the Economics top-40) that has entered its fourth decade and is “broadly accepted and supported by the Dutch academic community in the field” (Nederhof 2008, p. 172).

However, Nederhof (2008, p. 172) cautions us that the Economics top-40 is not based on actual impact and that “overall the rankings induced economists to focus more on maximizing publication output than on optimalizing their citation impact”. Harzing (2005) reported a similar effect for Australia, where a government focus on output seemed to lead to a high volume/low impact publication profile for the field of Economics & Business as a whole. More recently, Franses (2014) published an analysis of three decades of the Dutch Economics ranking and suggested that looking at citations, instead of publications alone, would provide a useful alternative to the current ranking. We fully agree with this suggestion, as publication—even in so-called “high impact” journals—does not guarantee citation impact. We argue that rankings based on publications in high impact journals are merely reflecting the “promise” of academic impact, whereas only rankings based on citations in a wider range of academic outlets provide the actual “proof” for this.

Although this argument is not new, as far as we are aware there are no prior studies that explicitly compare publication-based with citation-based rankings. Given its long-established history, the Dutch Economics top-40 provides an ideal test case for such a comparison. The remainder of this paper is structured as follows. We first discuss the history of the Dutch Economics top-40 and introduce citation-based rankings as an alternative to publication-based rankings. Subsequently, we present our research methods and results, comparing three different rankings of Dutch economists. A discussion section puts the results in a broader perspective. We suggest that citation-based rankings are more democratic and are likely to produce rankings that are more inclusive of a wider range of scholarship.

The Dutch Economics top-40

A top-40 of Dutch Economists1 based on publications in ISI-listed journals has been published nearly every year since 1980. In the first year the list was published in Economisch Statistische Berichten (ESB), a biweekly Dutch magazine that publishes articles about the Dutch economy. Between 1981 and 2004 it was published by Intermediair, a Dutch weekly newspaper for professionals. Since 2005, its publication has returned to ESB and responsibility for the compilation of the ranking moved from CentER at the University of Tilburg to the Erasmus School of Economics.

After the Economics top-40 had moved to Intermediair in 1981, ESB started its own top-20 ranking based on citations. Its creator, Jaap van Duijn, expressed a strong preference for citation-based rankings over rankings based on publications, arguing that what matters is not simply output in terms of published papers, but whether an academic’s work is used by others, as typically measured by citations. As detailed below, we strongly agree with this premise. Unfortunately, the methodology for this citation-based ranking was rather unsystematic, with frequent changes in coverage and a rather ad hoc selection of academics included on a year-to-year basis. The ranking was not published in 2003–2004 because of funding problems, and disappeared entirely after 2009. ESB also experimented with a top-20 ranking based on the h-index, published in both 2006 and 2010. Again, the methodology appeared to be rather unsystematic and this ranking was not repeated after 2010.2 Neither of these citation-based rankings included corrections for the number of co-authors or the length of an academic’s career. It appears that although the importance of a citation-based ranking was recognised, its creators had difficulty finding a sustainable methodology.

The publication-based Economics top-40 is thus the only Dutch ranking of individual academics that has survived over time. In this paper, we will therefore use this publication-based ranking as the basis for our investigation. The methodology of this ranking has varied over the years, but since the late 1990s it has typically been based on the number of publications in ISI-listed journals over the 5-year period before the year of publication. For the 2013 list, for instance, publications between 2008 and 2012 were considered. In an attempt to operationalize quality, publications are multiplied by the ISI journal impact factor of the journal in which the article is published. In 2013 this journal impact factor weighting was replaced by the article influence score (AIS, see http://www.eigenfactor.org/faq.php), which represents the average influence of a journal’s articles over the first 5 years after publication and is roughly analogous to the 5-year journal impact factor (see http://admin-apps.webofknowledge.com/JCR/help/h_eigenfact.htm). As a further modification, the journal’s percentile score in the total list of AIS was used, rather than the raw AIS score.3 Co-authored papers were given fractional weight, 2/(1 + number of authors), in creating the ranking, such that a paper by two authors counts for 0.66, a paper by three authors for 0.5, a paper by four authors for 0.4, etc. No consideration was given to the length of the papers. Comments, letters and notes counted equally to full-length conceptual, review or empirical papers.
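
To make the fractional weighting concrete, the sketch below implements the 2/(1 + number of authors) formula described above; it is an illustration of the formula only, not the actual code used to compile the ESB ranking.

```python
def fractional_weight(n_authors: int) -> float:
    """Fractional author weight used in the Economics top-40: 2 / (1 + n_authors)."""
    return 2 / (1 + n_authors)

# A single-authored paper counts fully; weights shrink as co-authors are added.
for n in range(1, 5):
    print(f"{n} author(s): {fractional_weight(n):.2f}")
# 1 author(s): 1.00
# 2 author(s): 0.67
# 3 author(s): 0.50
# 4 author(s): 0.40
```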

Publication-based versus citation-based rankings

The inclusion of comments, letters and notes, combined with the multiplication of publications by their source journal’s impact factor or article influence score, can lead to serious distortions in the ranking. For instance, the 2012 ranking featured Groningen-based Robert Maseland at number 10. It is likely his 10th place was based mainly on a half-page single-authored letter to the editor of Nature commenting on a prior publication. Nature’s sky-high impact factor (38.597 in 2012, as against a median impact factor of 0.795/1.257/1.292 in Economics/Management/Business) ensured that Maseland was catapulted into the top-10. However, this particular letter did nothing to contribute to Nature’s impact factor, as—even after 6 years—it has not gathered a single citation in either ISI or Google Scholar. This anomaly alerted us to a wider issue. Although publication in a high impact journal generally means that the paper has met certain quality criteria, it does not guarantee that the paper in question will be widely read or will have a high level of academic impact in terms of citations. Citation counts tend to be highly skewed: typically only 15 % of the articles in a journal account for half of the total citations (Seglen 1992).

Over the years, several studies have established that many papers in so-called “low impact” journals are in fact cited more than papers in “high impact” journals. Starbuck (2005) found that although higher-prestige journals publish more highly cited articles, editorial selection involves considerable randomness. He concluded: “Evaluating articles based primarily on which journal published them is more likely than not to yield incorrect assessments of the articles’ value.” (2005, p. 196). Based on an analysis of seven years of citations to every article in 34 top management journals published in 1993 and 1996, Singh et al. (2007, p. 327) drew the same inescapable conclusion: “using journal ranking … can lead to substantial misclassification of individual articles and, by extension, the performance of the faculty members who authored them.” Singh et al. (2007, p. 319) warn “…both administrators and the management discipline will be well served by efforts to evaluate each article on its own merits rather than abdicate this responsibility by using journal ranking as a proxy for quality.” Most recently, Jin and Choi (2014) studied the factors influencing citation impact for the top-100 most cited economists and found that neither the journal impact factor, nor a dummy variable for the top-4 elite journals in the field, had a significant impact on citations. In addition, and very relevant to our preference for Google Scholar over ISI, they find that in their sample scholarly books are cited much more than publications in the top journals.

On average, publications in high-impact journals by definition get cited more frequently than publications in low-impact journals as the journal impact factor or article influence score is based on average citations. We call this principle “promise”, i.e. publishing in a high-impact journal carries the implicit promise that the article will also be highly cited. However, not all individual papers published in these high-impact journals will fulfil this promise. In this paper, we therefore set out to create a ranking based on “proof”, i.e. rather than looking at the promised number of citations implied by the journal impact factor or article influence score, we look at actual citations to an author’s work. We present two new citation-based rankings. The first is based on citations over a recent period of time to allow for a relatively close comparison with the original ranking, the second is based on whole-of-career citations.

Our focus on proof over promise has two distinct elements. First, by looking at academics’ citation records, we assess whether publications in “high-impact journals” (the criterion for the current Economics top-40) do in fact get more highly cited than publications in low-impact journals. Second, by using Google Scholar as a data source rather than ISI, we include citations to publications outside ISI-listed journals, thus providing a more inclusive assessment of impact. According to the “promise” criterion these non-ISI publications are not expected to gather a substantial number of citations, as it is publications in ISI-listed journals, rather than publications in other journals, conferences, or books, that are normally regarded as “high-impact” publications. If the publications of Dutch academics in Economics & Business in “high impact” ISI-listed journals indeed systematically outperform all other publications, our new ranking should be very similar to the current ranking. However, our use of Google Scholar instead of ISI allows us to test both the actual level of citations to publications in “high-impact” ISI-listed journals and the level of citations to non-ISI publications. If either of these citation counts diverges from the general expectations implied in the ISI AIS, our new rankings could be very different from the current ranking.

Methods

In our paper, we compare and contrast three different rankings: the Economics top-40 as published by ESB and two new citation-based rankings. The first alternative ranking is based on citations to papers published in the last 11 years only (2003–2013). Citations are corrected for the number of authors and divided by the number of years to arrive at the citations per author per year (CAY). The second is based on a whole-of-career individual annual h-index (hIa). This metric improves on the h-index by correcting citations for the number of authors as well as the length of an academic’s publishing career. Further details on the sample, data source, data collection procedures and metrics used can be found below.

Sample and data source

As a sample, we started with all academics who were nominated for the Economics top-40 between 2011 and 2013.4 Academics who, in 2013, were no longer affiliated with a Dutch university—such as David de Cremer—were removed from our sample. After deduplication of academics who were nominated in multiple years, we ended up with 267 names. Google Scholar was used as the source of citation data. The advantage of Google Scholar over ISI is that it includes all academic publications, i.e. not just publications in ISI-listed journals, but also books, book chapters, working papers, conference papers, and any other research outputs—such as software—cited in academic publications.5 This largely removes the disciplinary bias that is present in Thomson Reuters Web of Science, in that ISI journal coverage in Economics and Management Science is far more comprehensive than in Management, Marketing and Finance & Accounting (see Harzing and van der Wal 2009).

Another advantage of Google Scholar is that it provides a more timely assessment of research impact than ISI, because of the former’s inclusion of intermediate research outputs, such as working papers and conference papers, and its immediate inclusion of accepted journal articles appearing in “online first” and open access repositories. This means that citations in Google Scholar might be evident many years before they appear in ISI. Overall, we argue that Google Scholar provides a much better basis for citation analysis in Economics & Business than ISI, especially when looking at a recent time period.

Data collection procedures

As Google Scholar on its own is not very suitable for bibliometric analyses, Publish or Perish (PoP) (Harzing 2007) was used to collect citation data from Google Scholar. There are now more than 500 published articles referring to the PoP program. This provides further evidence that—in spite of its limitations—Google Scholar is perceived to be a useful source of bibliometric data. PoP is a software program that retrieves and analyses academic citations. It uses Google Scholar to obtain the raw citations, then analyzes these and presents a very wide range of citation metrics in a user-friendly format. The results can also be saved to a variety of output formats for further analysis. We used this option to export results to Excel in order to perform various calculations and create results tables.

Search queries were defined in the multi-query centre in an iterative fashion over the course of a week. Final searches were all conducted on the same day, 21 January 2014. Although no citation search can guarantee 100 % accuracy, we are confident that we have captured all important publications of the academics in our sample. Our search strategy was carefully designed (see below) and searches were conducted personally by the first author, who has a very good knowledge of the discipline as well as nearly seven years of extensive experience in using Publish or Perish to search Google Scholar data. When attempting to verify the metrics used in our study, please note that Google Scholar is updated every few days and hence any metrics found will be slightly different from those reported in this paper.

Most Dutch academics have several initials, but they might not always list all these initials when publishing. Hence, we first searched with family name and first initial only. For most academics this provided satisfactory results; a visual inspection of their list of publications immediately showed that all publications related to the same academic. In some cases, however, a search with only the first initial resulted in some homonyms in other disciplines. Hence, the full given name was used in conjunction with a search that listed all initials. Unlike Medicine and the Sciences, most publication outlets in Economics & Business list the academic’s full given name, so this search strategy is not likely to exclude publications. Before using these results, however, we did verify that it did not exclude major publications, especially in Management Science, where some journals publish with initials only.

For about a dozen academics, more complex search strategies were needed, as their names were very common and even searching with the full given name provided one or more homonyms. Unfortunately, in May 2012 Google Scholar removed the subject area selection in its search interface and hence it is no longer possible to exclude certain disciplines. In these cases, we therefore used topic exclusions that related to the disciplines we were excluding. For instance, for one academic the exclusion string ran as follows “ribozyme vinylic molecular biochemistry seminoma antibody meaenas pulsating dopants patellar “US Patent” antitumor surgically hazard cobalt trauma steel lattice kidney piano fertig”. For academics that were searched for with a full given name and those that needed many topic exclusions, we verified their publication output through Google Scholar Citations (where available) and university websites.

Many Dutch family names have prefixes such as (van) de, (van) der. This causes a problem in bibliometric analyses as not all referring authors use these prefixes correctly. For every academic with a prefix, we therefore also searched for the family name without the prefix, as well as for the family name with the prefix joined to it. So, for instance, for Jakob de Haan we not only searched for Jakob de Haan, but also for Jakob Haan and Jakob Dehaan. The latter option typically did not add many citations, but the search without the prefix often resulted in a fairly substantial number of additional citations. In many cases, this increased the individual h-index by one or more points. The “den” prefix in particular seemed to be problematic, as two of the three academics in our sample with a “den” prefix saw their citations increase by 30 % when a name variant without the prefix was included.
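
A minimal sketch of this variant-generation step is shown below. The prefix list and function are our own illustration; the actual searches were composed by hand in Publish or Perish.

```python
DUTCH_PREFIXES = ("van de", "van der", "van den", "van", "de", "der", "den")

def name_variants(full_name: str) -> set:
    """Generate the three search variants used for prefixed Dutch family names:
    the name as-is, the prefix dropped, and the prefix joined to the family name."""
    variants = {full_name}
    for prefix in DUTCH_PREFIXES:
        marker = f" {prefix} "
        if marker in full_name:
            given, family = full_name.split(marker, 1)
            variants.add(f"{given} {family}")  # prefix dropped: "Jakob Haan"
            joined = prefix.capitalize().replace(" ", "") + family.lower()
            variants.add(f"{given} {joined}")  # prefix joined: "Jakob Dehaan"
            break
    return variants

print(name_variants("Jakob de Haan"))
# {'Jakob de Haan', 'Jakob Haan', 'Jakob Dehaan'}
```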

As some of the metrics we use are influenced by academic age, defined as the number of years elapsed since the academic’s first publication, we paid special attention to the academic’s early publications. Many Dutch academics published semi-academic pieces that attracted very few citations in local Dutch journals, such as ESB, early in their career. Other early uncited or hardly cited publications might include Dutch book chapters, book reviews, working papers and dissertations. It would be unfair to “punish” academics for these early signs of research activity with low year-based metrics. We therefore only included early publications if they were either part of the individual h-index (and hence their exclusion would lower the hIa) or had more than 20 citations (and hence their exclusion would substantially lower citation counts).
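
Expressed as a filter, the inclusion rule for early output might look as follows; this is our paraphrase of the procedure, with the 20-citation threshold taken from the text.

```python
def keep_early_publication(citations: int, in_individual_h_core: bool) -> bool:
    """Keep an early, lightly cited publication only if dropping it would lower
    the hIa (it sits in the individual h-index core) or would substantially
    lower citation counts (more than 20 citations)."""
    return in_individual_h_core or citations > 20
```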

Metrics

We use two fairly new metrics in our study: CAY and the hIa (see Harzing et al. 2014 for details). These two metrics are the individualised and annualised equivalents of, respectively, total citations and the traditional h-index, the two most important bibliometric measures. We correct our metrics for the number of co-authors as this positively influences both the number of publications an academic is able to publish (see, amongst many others, Börner et al. 2005; Katz and Martin 1997) and the number of citations (see e.g. Glänzel and Thijs 2004). A correction for the number of years an academic has been active is important as citations continue to increase over an academic career. Hence it would be unfair to compare the citation records of academics with 40 years of publications to those with only 10 years of publications. This correction is particularly important for a whole-of-career ranking. However, it still carries relevance for a ranking based on a fixed period in cases where not all academics in the sample in question have been publishing for the entire period.

In order to provide results that are comparable to the current Economics top-40 in terms of its focus on recent performance, our first alternative ranking—named PoP CAY top-40—looked at publications and citations since 2003. We chose an 11-year period rather than the 5-year publication period (2008–2012) that we are comparing our results with, because citations take a long time to accumulate in the social sciences. Hence, looking only at citations to publications in the last 5 years would capture a very small part of the academic’s citation impact. Ten to eleven years is also the period that Thomson Reuters’ Essential Science Indicators (ESI) uses in its list of Highly Cited Scientists. CAY is the more appropriate metric in this case, as any metric derived from the h-index is likely to lead to many ties. This is because the h-index is by definition constrained by the number of publications and most academics in Economics & Business will not produce a very large number of publications per year. Total citations, corrected for co-authorship, therefore provide a more reliable measure of impact for a constrained time period. We still divide the number of citations by the number of years an academic has been active, as more than 10 % of the academics in our sample have been active for less than 11 years. Even so, one could still argue that this ranking disadvantages more junior academics, as other academics might be more likely to cite publications of academics that are already well known. Hence, a more senior academic might acquire more citations than a junior academic even if the relevant publications were identical in all other aspects.6
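
A sketch of the CAY computation for the 2003–2013 window follows. The Publication record, parameter names and the exact denominator convention are our own reading of the description above; Publish or Perish reports the underlying numbers rather than this exact code.

```python
from dataclasses import dataclass

@dataclass
class Publication:
    year: int
    citations: int
    n_authors: int

def cay(pubs: list, first_pub_year: int,
        window_start: int = 2003, current_year: int = 2013) -> float:
    """Citations per author per year (CAY): author-corrected citations to
    publications in the window, divided by the number of years the academic
    has been active within that window (at most 11 years here)."""
    corrected = sum(p.citations / p.n_authors for p in pubs if p.year >= window_start)
    active_years = current_year - max(first_pub_year, window_start) + 1
    return corrected / active_years
```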

Our second alternative ranking—named Publish or Perish individual annualised h-index (PoP hIa) top-40—therefore takes a whole-of-career perspective, which, when using the right metric, allows us to compare junior and senior academics on a more equitable basis. The metric used in this ranking is the hI,annual, or hIa for short. The hIa is calculated by dividing the individual h-index (an h-index corrected for the number of co-authors) by the number of years an academic has been active, i.e. the number of years that have elapsed since their first publication. The metric thus represents the average number of single-author-equivalent “impactful” articles that an academic has published per year and hence permits an intuitive interpretation. Based on an empirical example of 146 academics in five major disciplines at different career stages, Harzing et al. (2014) showed that the hIa-index attenuates h-index differences that are purely attributable to (disciplinary) co-authorship practices and career lengths. As such, this metric provides a more reliable comparison between academics in different disciplines and at different career stages than the h-index. For a whole-of-career comparison, it is also preferable to the CAY metric, even though the latter also corrects for academic age. This is because the oldest publications of academics with a longer career are older (and thus have more citations) than a younger academic’s oldest publications.
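
Continuing the sketch above (and reusing its Publication record), the hIa can be computed from author-corrected citation counts. The precise co-author correction in Publish or Perish may differ in detail, so treat this as an approximation of the published definition rather than the authors’ implementation.

```python
def individual_h_index(pubs: list) -> int:
    """h-index computed on citation counts divided by the number of authors
    (an hI,norm-style co-author correction)."""
    corrected = sorted((p.citations / p.n_authors for p in pubs), reverse=True)
    return sum(1 for rank, c in enumerate(corrected, start=1) if c >= rank)

def hIa(pubs: list, first_pub_year: int, current_year: int) -> float:
    """Annualised individual h-index: the individual h-index divided by academic
    age, i.e. the years elapsed since the first publication (inclusive count
    is our assumption)."""
    academic_age = current_year - first_pub_year + 1
    return individual_h_index(pubs) / academic_age
```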

Triangulation is the best way to establish errors and omissions in bibliometric research and to interpret findings. Hence, for all thirty academics that dropped one or more places between the Economics top-40 and the PoP CAY top-40, we verified whether their Google Scholar results were complete. We did so by checking whether all of their ISI publications with more than an incidental number of citations were covered in Google Scholar. Without exception, this turned out to be the case. This further validates the use of Google Scholar for bibliometric research in Economics & Business. We also used ISI data as an external source of validation for the old and new rankings. For each academic in the old and new rankings, we verified whether they were included in the ESI top 1 % most highly cited academics (February 2014 version). We included publications and citations in all fields, not just in journals classified under Economics & Business. Neglecting to do so would discourage the type of multi-disciplinary research that is necessary to address key societal challenges. It would also mean that we would be fully dependent on ISI’s journal classifications, which are not always intuitive. For instance, some Organization Behaviour journals are classified in Management, whereas others are listed in Psychology; many Economics journals are listed in the Social Sciences general category and journals in environmental economics are often classified under Environment/Ecology. Finally, we used ISI citation reports to compare publication and citation profiles for the top-10 academics in the Economics top-40 and the top-10 academics in the PoP CAY top-40 in terms of the number of publications and citations, as well as the distribution of citations over the academic’s body of publications.

Our study focuses on citations, not publications. This is primarily a philosophical choice; we focus on proof over promise, and therefore attach more importance to impact than to the pure volume of publications. It is also a pragmatic choice as, once one includes citations to non-traditional research output such as books, conference papers and non-ISI listed journal articles, there will be a large number of so-called stray citations, i.e. citations that refer to slightly different versions of the publication and/or miscite the author’s name, year or other bibliometric details. Although the presence of stray citations might create the impression that Google Scholar is unreliable, one should realise that these occur in any citation database. When searching with the “Cited reference” function instead of the General Search (now called Basic Search) in ISI, stray citations abound. Hofstede’s 1980 Culture’s Consequences book, for instance, has well over 150 instances in the ISI Cited reference search, featuring different years, page numbers and a multitude of different spellings of the title. In fact, Google Scholar does a better job at aggregating different instances, as the same book has only two dozen instances in Google Scholar. A focus on citations instead of publications ensures that publication counts are not inflated, but that citations are comprehensively covered.

Results

Table 1 reports on the demographic, discipline and affiliation characteristics of the current Economics top-40. It also includes each academic’s score on the metric used to create our first new ranking (citations per author per year), as well as the drop in rank between the current ranking and our ranking of all 267 academics based on CAY, i.e. the PoP CAY ranking. Academics are divided into four groups:
Table 1

Original Economics top-40 and comparison with new PoP CAY ranking

  1. Academics listed in green bold font also feature on both new citation-based rankings, i.e. they are ranked on all three rankings.
  2. Academics listed in orange bold font feature on the current Economics top-40 and on the PoP CAY ranking, i.e. they are ranked on two rankings.
  3. Academics listed in red bold font feature on the current Economics top-40 and on the PoP hIa ranking, i.e. they are ranked on two rankings.
  4. Academics listed in black regular font are unique to the current Economics top-40 and do not feature in either of the citation-based rankings.

Only 11 academics feature in all three rankings, whereas 24 academics feature in two of the three rankings. Forty academics feature on only one ranking, 18 only in the original Economics top-40, 10 only in the PoP CAY ranking and 13 only in the PoP hIa ranking.

The average age for academics in the Economics top-40 is 50 (range 39–67) and on average they have 111 CAY in Google Scholar. Many academics in the original Economics top-40 have well under 100 CAY, the approximate cut-off score for the new PoP CAY top-40 discussed below. All but one of the academics in the original ranking are male and the vast majority (34) is Dutch, with four Flemish academics, one Greek and one Australian. Half of the academics (20) in the top-40, and no less than three quarters of those in the top-20, work in the Economics discipline. Management Science (8) and Marketing (6) are the second and third most frequently listed disciplines. Management (4) and Finance & Accounting (2) close the ranks. In terms of institutions, Erasmus tops the rank with 11 occurrences, followed by Tilburg (9). The VU (VU University Amsterdam) (7) and Groningen (6) follow in third and fourth place, with Wageningen (3), Maastricht (2) and Eindhoven (1) closing the ranks. UvA (University of Amsterdam), Nijmegen and Utrecht do not have any academics featuring in the top-40.

Nearly half of the academics in the current Economics top-40 drop out of the top-40 when ranked by the number of CAY for 2003–2013 (PoP CAY top-40) or the whole-of-career hIa (PoP hIa top-40). However, only two of the current top-10 (Goos and Oude Lansink) drop out in our new citation-based rankings and only five of the current top-20 do not feature in our new citation-based rankings. Out of the lower ranks (21–40), only seven make it to the ranking based on CAY or the whole-of-career hIa ranking; three of those seven are only just out of the top-20. Hence, in general academics listed at the top of the current Economics top-40 do indeed appear to be high performing on citation-based ranking criteria as well. However, there are some notable exceptions.

Peter Goos is ranked 7th in the Economics top-40, but, as can be seen in Tables 2 and 4, he does not appear in the top-40 of either of the new rankings; he drops 152 places in the PoP CAY ranking. Goos published 42 articles between 2008 and 2012, nearly all in journals towards the top end of the distribution of article influence scores, such as Nature Communications, Biometrika, Marketing Science and Technometrics. However, neither these articles nor his other publications have high citation counts, and as most were published with several other authors, his author-corrected citation scores are low.
Table 2

PoP CAY top-40

The high rank of Alfons Oude Lansink (#8) is a little surprising, as the 28 articles he produced between 2008 and 2012 were mostly published in agricultural economics journals that do not have particularly high AIS. In any case, neither these articles nor any of his other publications achieved high citation counts, and as the articles were normally written with several co-authors, his CAY is fairly low.

The two academics that experience the largest drop in ranking are Einmahl and van Dijke. Einmahl is ranked 28th in the Economics top-40, based on a “mere” 17 articles. However, all these articles were published in journals at the top end of the distribution of article influence scores, such as Journal of Econometrics, Bernoulli, Annals of Statistics, and Journal of the American Statistical Association. Based on the raw AIS score he would even have been ranked 17th. However, his articles in these and other journals gathered very few citations; only eight of them attracted more than 10 Google Scholar citations overall.

Van Dijke has a similar profile in a very different field. He has published 25 articles in high-AIS journals in Psychology, such as Journal of Applied Psychology, Organizational Behavior and Human Decision Processes, Journal of Experimental Social Psychology and Journal of Vocational Behavior, but his publications in these and other journals have very modest citation levels and several co-authors. In van Dijke’s case, his citation levels might be further reduced by the fact that he has published more in the last five years than between 2003 and 2008, reflective of a relatively junior academic who only started publishing in 2003.

A citation-based ranking: PoP CAY top-40

Table 2 provides details of our first new ranking, based not on the expected impact inferred from the journals in which the academic publishes, but on the actual article-level citations that the academic attracts. The average age for academics in the top-40 of this ranking is 53 (range 37–73) and on average they have 174 CAY in Google Scholar. Five of the academics are female and the vast majority (32) is Dutch, with three US academics, two Flemish academics, one Greek, one German and one Australian. Seventeen of the academics in the CAY top-40 work in Economics, with Management (13) the second most frequently listed discipline. Finance & Accounting (4), Marketing (3) and Management Science (3) close the ranks. In terms of institutions, Erasmus tops the rank with 11 occurrences, followed by Tilburg (8) and the VU (VU University Amsterdam) (7). Maastricht (5), UvA (University of Amsterdam) (4), Groningen (3) and Eindhoven (2) close the ranks. Nijmegen, Utrecht and Wageningen do not have any academics featuring in the top-40.

In comparison with the old Economics top-40, average age and distribution of nationalities are similar. However, we do find a more diverse set of nationalities at the top. Three out of the top-4 are not Dutch, whereas first-ranked Richard Tol has had his main employment outside the Netherlands for many years. The proportion of female academics has also increased, although still low at 12.5 %. Reflecting its focus on citations over publications, the average number of CAY has increased by 57 %. In terms of disciplines, the proportion of academics in Management in particular has increased substantially, from 10 to 33 %, mainly to the detriment of Marketing and Management Science. In terms of institutions, Maastricht and Eindhoven have increased their representation, whereas the UvA (University of Amsterdam)—which had no academics in the old top-40—now has no less than 4 academics in the top-40. Groningen saw its listed academics halved, whereas Wageningen no longer has any academics listed.

The resulting PoP CAY 2003–2013 top-40 is substantially different from the publication-based ranking. The new top-10 includes only four of the same academics: Richard Tol, who ranks first in both rankings; Daan van Knippenberg, who ranks 4th on publications and 5th on citations; Peter Nijkamp, who ranks 6th on both publications and citations; and Michael McAleer, who ranks 2nd on publications and 10th on citations. Jan van Ours, ranked 11th in the original Economics top-40, is now ranked 7th. New in the top-5 are Thorsten Beck and Bronwyn Hall, who do not feature in the publication-based ranking at all, whereas Marno Verbeek and Roy Thurik are newcomers to the top-10. Eva Demerouti was listed in the old Economics top-40, but jumps from 17 to 3. We will discuss each of these academics in turn.

Thorsten Beck published “only” 17 articles between 2008 and 2012 and, with a couple of exceptions (e.g. Journal of Finance), they were not in journals with particularly high AIS. Hence, he did not even make it into the top-40 based on publications. However, citations to these publications, as well as to several books and book chapters (not covered in the original top-40), are very impressive. Citation levels for his 2003–2007 publications are no less than spectacular. Nine of his publications between 2003 and 2013 have more than 50 citations per year and 46 have more than 10 citations per year. Compare this with Goos, Oude Lansink, Einmahl and van Dijke, who all have 0–3 publications with more than 10 citations per year.

Eva Demerouti illustrates a second aspect of our proof over promise approach: the more comprehensive journal coverage of Google Scholar. She already ranks high in a publication-based ranking and her publications in high impact journals are highly cited. However, what propels her to the top-3 in a citation-based ranking are three publications in journals that have no AIS and hence do not count at all for the original Economics top-40. Her publications in the Journal of Managerial Psychology, Career Development International and the International Journal of Stress Management alone have gathered a total of more than 2,000 Google Scholar citations.

Bronwyn Hall illustrates a third aspect of our proof over promise approach: Google Scholar’s coverage of non-journal publications. Eight of her twenty most highly cited publications are NBER working papers, book chapters or conference proceedings papers, and many of these papers are single-authored. Moreover, Hall’s papers in ISI-listed journals are all more highly cited than would be expected from their AIS.

Marno Verbeek features in the citation-based top-10 mainly on the basis of a single-authored book, “A guide to modern econometrics”, which makes up two-thirds of his citation record. Some observers might argue that a top-10 ranking based largely on a publication that is not fully refereed is not justified. This is where an h-type indicator might provide a useful alternative viewpoint: based on a whole-of-career hIa ranking (our second alternative ranking), Verbeek drops out of the top-40.

Finally, Roy Thurik illustrates a number of the above-listed aspects of our “proof over promise” approach. Like Bronwyn Hall, he has a large number of highly cited books, book chapters and working papers. He has also published about a dozen papers in a single journal, Small Business Economics, which does not have a particularly high AIS; yet he authored three of that journal’s top-10 most highly cited papers.

Overall, our Google Scholar citation-based ranking clearly taps into a different aspect of scholarly performance than the ISI publication-based ranking. We would argue that a focus on impact over output and a more inclusive consideration of publication outlets produces a more relevant ranking of research excellence. The ranking is more relevant academically as it considers the actual impact of publications, i.e. the acceptance of the relevance of publications by academic peers. The ranking is also more relevant societally as it includes publication outlets beyond a narrow set of academic journals. In the next section we analyse the differences between the Economics top-40 and the PoP CAY top-40 in a little more detail.

Publications versus citations: volume versus impact?

We would expect that the academics listed in the Dutch Economics top-40 would also feature highly in other rankings of academic excellence. There are few publicly available rankings of individual academics. However, there is a large literature on how excellent (highly cited) papers are defined in bibliometrics. A review of more than 300 papers by Bornmann (2014) identified the top 1 % most cited articles as the most frequently used measure of excellence. The equivalent for individual academics is readily available in Thomson Reuters ESI, which ranks the top 1 % most cited papers, universities and academics by field.

Therefore, for every academic in the original Economics top-40, we verified whether they were included in Thomson Reuters ESI as one of the top 1 % most highly cited academics for the years 2003–2013 (February 2014 edition). The ESI ranking only includes citations to publications in ISI-listed journals. However, given that the Economics top-40 is based on publications in ISI-listed journals multiplied by the average citation impact of publications in these journals, we would expect that most of the academics listed in the Economics top-40 would also feature in the ESI list of highly cited scientists. This turns out to be only partially true; only 17 of the original top-40 academics are listed in ESI. All but two of these 17 are also listed in the PoP CAY top-40. There are only two academics in the original top-40 (Nijstad and Bleichrodt) that do not feature in the new top-40 but are listed in ESI. In both cases, the relatively low ranking in the PoP CAY top-40 is caused by the fact that this ranking discounts citations for the number of authors, something that ESI does not do. The average number of citations in the ESI for the original Economics top-40 is 392.

In our PoP CAY top-40, three quarters of the academics are listed in the ESI with an average of 587 citations. Those that are not listed are mainly those whose most cited publications are books, working papers or articles in non-ISI listed journals. Citations to these publications increase the citation score based on Google Scholar, but not ISI. However, it is remarkable that although the Economics top-40 and the ESI top 1 % are based on the same data source, there are more academics in the PoP CAY top-40 listed in the ESI top 1 % most cited academics than there are in the Economics top-40.

We also conducted some further analysis into the lifetime ISI publication profiles of academics ranked in the top-10 of either the Economics top-40 or the new PoP CAY top-40 (see Table 3). We used ISI as this type of analysis is currently not possible with Google Scholar. Given that Marno Verbeek’s listing in the PoP CAY top-10 appeared slightly anomalous and was mainly based on a textbook that would not feature in any ISI-based analysis, we included 11th-ranked Ans Kolk instead. Although there are certainly exceptions, academics that appeared only in the Economics top-40 and not in the two alternative citation-based rankings tend to be characterized by a “high volume/low impact” publication profile, even when focusing only on ISI-listed publications.
Table 3

Analysis of life-time ISI publication and citation profile for Economics top-10 and PoP CAY top-10 academics

| Name | # Articles | # Cites | % Self-cites | Cites/article | % Un-cited | % 0–5 Cites | # >100 Cites | # >50 Cites | # >10 Cites/year |
|------|-----------:|--------:|-------------:|--------------:|-----------:|------------:|-------------:|------------:|-----------------:|
| Richard Tol | 228 | 3,673 | 17.3 | 16.11 | 24 | 48 | 4 | 14 | 6 |
| Michael McAleer | 328 | 2,434 | 30.6 | 7.42 | 41 | 73 | 6 | 10 | 1 |
| Philip Hans Franses | 261 | 2,133 | 9.1 | 8.17 | 31 | 57 | 1 | 7 | 0 |
| Daan van Knippenberg | 107 | 3,609 | 15.7 | 33.73 | 8 | 31 | 9 | 20 | 11 |
| Werner Brouwer | 112 | 1,925 | 15.9 | 17.19 | 12 | 35 | 0 | 9 | 0 |
| Peter Nijkamp | 562 | 3,798 | 12.0 | 6.76 | 32 | 70 | 2 | 12 | 0 |
| Peter Goos | 69 | 480 | 37.7 | 6.96 | 19 | 64 | 0 | 1 | 0 |
| Alfons Oude Lansink | 80 | 622 | 12.7 | 7.78 | 20 | 59 | 0 | 0 | 0 |
| Piet Rietveld | 275 | 3,294 | 6.5 | 11.98 | 19 | 52 | 1 | 15 | 0 |
| Rick van der Ploeg | 129 | 974 | 10.5 | 7.55 | 33 | 65 | 0 | 3 | 1 |
| Thorsten Beck | 58 | 3,560 | 3.7 | 61.38 | 28 | 40 | 10 | 14 | 11 |
| Eva Demerouti | 80 | 3,178 | 8.4 | 39.72 | 18 | 46 | 9 | 18 | 8 |
| Bronwyn Hall | 54 | 4,449 | 1.5 | 82.39 | 26 | 41 | 11 | 13 | 7 |
| Jan van Ours | 133 | 1,204 | 12.0 | 9.05 | 26 | 58 | 0 | 2 | 0 |
| Roy Thurik | 137 | 1,888 | 16.4 | 13.78 | 34 | 62 | 4 | 9 | 2 |
| Ans Kolk | 45 | 448 | 9.4 | 9.96 | 27 | 59 | 0 | 2 | 0 |
| Average | 166 | 2,354 | 14 | 21 | 26 | 54 | 3.6 | 9.3 | 2.9 |

The six academics that disappeared from the top-10 had published on average 154 papers, but gathered only 9.93 citations per paper; only one of these academics had a paper with more than 10 citations per year. Their average proportion of self-citations is high at 15.4 %, ranging from 6.5 % for Rietveld to 37.7 % for Goos. The academics that took their place in the PoP CAY top-10 published on average “only” 85 papers, but these papers attracted an average of 36.05 citations per paper; four of the six academics had publications with more than 10 citations per year. The three new top-5 entrants (Beck, Demerouti, Hall) published an average of only 64 papers, but with an average of 61.16 citations per paper; on average nearly 9 of their papers had more than 10 citations per year. Their average proportion of self-citations is low at 4.5 %, ranging from 1.5 % for Hall to 8.4 % for Demerouti.
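
As a quick arithmetic check, the group averages quoted above can be reproduced from the Table 3 rows; values are copied from the table, and small rounding differences are expected.

```python
# (articles, cites/article, % self-cites) for the six academics that left the top-10
dropped = {
    "Franses": (261, 8.17, 9.1), "Brouwer": (112, 17.19, 15.9),
    "Goos": (69, 6.96, 37.7), "Oude Lansink": (80, 7.78, 12.7),
    "Rietveld": (275, 11.98, 6.5), "van der Ploeg": (129, 7.55, 10.5),
}
mean = lambda xs: sum(xs) / len(xs)
print(round(mean([v[0] for v in dropped.values()])))     # 154 papers on average
print(round(mean([v[1] for v in dropped.values()]), 1))  # ~9.9 citations per paper
print(round(mean([v[2] for v in dropped.values()]), 1))  # 15.4 % self-citations
```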

Nijkamp and McAleer, who remain listed in the citation-based top-10, also fit the high volume/low impact publication profile. In fact, they have published more than any of the other 14 academics and their average citations per paper are among the lowest. They also have the highest proportion of papers (nearly three quarters) that are very lightly cited. However, the sheer volume of their work keeps them in the top-10 even when the focus is on citations.

An alternative whole-of-career citations ranking

Both the Economics top-40 and the PoP CAY top-40 focus on recent performance only, the former on publications in the last five years, the latter on citations to publications from the last 11 years. We therefore also propose a third ranking (see Table 4), based on whole-of-career achievement, but corrected for academic age, defined as the number of years elapsed since the academic’s first publication. Although academics that are more senior will by definition have better citation records, the year correction ensures that junior and senior academics can be compared equitably.
Table 4

PoP hIa top-40

For this ranking we use a relatively new metric, the annualised individual h-index, hIa for short, calculated by dividing the individual h-index by the individual’s academic age. This metric measures the extent to which academics have produced a sustained level of impactful articles over the years. Senior academics nearing the end of their career will see their hIa decline with passing years as it becomes increasingly difficult to increase an already high h-index. That said, the hIa uses the individual h-index as its basis, and this h-index might still increase as it approximates the regular h-index with increasing citations to multi-authored papers. Some of the older academics that are listed in both the original Economics top-40 and the ranking based on CAY such as Peter Nijkamp (67), Michael McAleer (61), Piet Rietveld (61), Rik Pieters (58) and Rick van der Ploeg (57) drop out of the whole-of-career top-40, though they still make it to the top-100. That said, Jan van Ours is still highly ranked in the top-40, as are a range of academics in their early to mid-fifties.

The average age for academics in the PoP hIa top-40 is 47 (range 32–65) and on average they have an annual individual h-index (hIa) of 1.45. This clearly reflects the fact that we are dealing with a high-performing group of academics. A hIa of 1.45 means that, on average, academics in our PoP hIa top-40 consistently published nearly 1.5 articles per year that, when corrected for the number of co-authors, had accumulated enough citations to be included in the h-index. The average hIa for all 267 academics that were nominated between 2011 and 2013 is 0.86. Harzing et al. (2014), in the first study on the hIa, investigated a random sample of 146 full and associate professors at the University of Melbourne, one of the world’s top-30 universities in the Times Higher Education ranking. For these academics, working in the Life Sciences, Sciences, Engineering, Social Sciences and Arts & Humanities, the average hIa based on Google Scholar data was 0.50. For the top-40 academics in that study, the average was 0.74, ranging from 0.62 to 1.68. Based on these two studies, we suggest that a hIa above 1.0 should be considered to reflect excellent performance. A hIa above 1.5 might be considered to reflect outstanding performance, whereas a hIa above 2.0 can be seen as truly exceptional.

Three of the academics in this new top-40 are female and the vast majority (33) is Dutch, with four Flemish academics, one Greek, one German and one Italian. The most frequently represented discipline in this top-40 is Management (15), closely followed by Economics (12). Finance & Accounting (4), Marketing (4) and Management Science (5) have a similar representation. In terms of institutions, Erasmus tops the rank with 9 occurrences, followed by Maastricht (7) and Groningen (6). The VU (VU University Amsterdam) (5), Tilburg (4), Eindhoven (4) and UvA (University of Amsterdam) (3) form the next tranche, with Nijmegen and Wageningen closing the ranks with 1 academic each. Utrecht is the only university without an academic in the hIa top-40.

In comparison with the two other top-40s, the distribution of nationalities is similar and the proportion of women remains low. However, the hIa top-40 is quite distinct from the two other top-40s in terms of age, discipline and university distribution. At 47, the average age in this top-40 is lower than in the two other rankings, but most noticeable is the addition of five academics under 40 and another five aged between 40 and 45. In fact, all but three of the newly listed academics in this top-40 are aged 45 or under. This clearly reflects the fact that the hIa effectively corrects for career length and, if anything, tends to be higher for mid-career researchers than for more senior researchers. Especially for researchers under 45/50, the new hIa ranking could therefore be interpreted as a list of “researchers to watch”, i.e. those academics who have achieved sustained high performance and therefore might be the senior academic research leaders of the future.

In terms of disciplines, the hIa top-40 is more balanced than either the original top-40 or the top-40 based on CAY. The dominant position of Economics in particular (50 % in the original top-40 and 45 % in the CAY top-40) has diminished, with Economics (30 %) now outranked by Management (38 %). The smaller disciplines of Finance & Accounting, Marketing and Management Science are all similarly represented (10–13 %). The change in disciplinary composition is particularly striking in the top-20: in the publication-based Economics top-40, three quarters of the academics in the top-20 were economists; in the PoP hIa top-40 this is reduced to just over a third.

In terms of institutional composition, the hIa top-40 is also more balanced than the two other top-40s. In the original top-40 and the CAY top-40, academics affiliated with Erasmus and Tilburg made up just over or just under half of the list; in the hIa top-40 this is reduced to a third. Apart from the University of Utrecht, every university is represented in the hIa top-40. Maastricht, Eindhoven and the UvA (University of Amsterdam) in particular do much better in citation-based rankings than in the original publication-based top-40. In the publication-based top-40 these three institutions collectively had only 3 academics listed; in the CAY top-40 this increased to 11, and in the hIa top-40 they have no less than 14 representatives. Ten of these are in Management, Marketing or Finance & Accounting.

Discussion

This paper applied two fairly new metrics in a pilot study that created two new citation-based rankings for academics in Economics & Business in the Netherlands. Our new rankings were based on the “proof over promise” principle. Rather than simply inferring impact from publication in high impact journals, we measured actual impact through an investigation of article-level citations. We also used a more inclusive definition of research output, including not just articles in ISI-listed journals, but also books, book chapters, working and conference papers and articles in non-ISI listed journals. Although ISI listing is seen by many to imply a quality stamp, in our view it should not matter where research is published. If a particular research output is highly cited, it clearly influences the field and that should be more important than the journal in which it is published.

Table 5 integrates the three rankings discussed in this article. As before, academics ranked on all three rankings are listed in green, academics ranked on both the Economics top-40 and the PoP CAY top-40 are listed in orange, academics ranked on both the Economics top-40 and the PoP hIa top-40 are listed in red, whereas academics ranked on both citation-based rankings are listed in blue. Academics in black are ranked on one list only. Table 5 also provides a comparison between the three rankings for each individual academic. Although for some academics the resulting range of rankings is relatively narrow, for others the three rankings provide widely diverging results. Correlations between the actual scores for the various metrics indicate that there is certainly some commonality underlying the rankings. The scores used in the original Economics top-40 and the PoP CAY 2003–2013 metric have a correlation coefficient of 0.69. With a correlation coefficient of 0.44, the relationship between the Economics top-40 scores and the PoP whole-of-career hIa metric is much weaker. This is not surprising, as the hIa not only focuses on citations rather than publications, but also considers the entire career rather than just the most recent performance. The PoP CAY 2003–2013 and the whole-of-career hIa metrics show the strongest correlation at 0.79. Even so, the metrics are dissimilar enough to provide unique information, unlike most of the h-index variants, which show correlation coefficients above 0.90 with the original h-index (see Bornmann et al. 2011; Harzing et al. 2014).
Table 5

Comparison of Economics top-40, PoP CAY and PoP hIa

We argue that our “proof over promise” approach is more “democratic”/inclusive than the original Economics top-40. First, by expanding the type of research outputs considered beyond the narrow scope of publications in ISI-listed journals, we remove the disciplinary bias against Management, Marketing and Accounting & Finance, disciplines in which a smaller proportion of high-quality journals is ISI-listed than in Economics and Management Science (Harzing and van der Wal 2009). Whereas in the original publication-based Economics top-40 more than two thirds of the listed academics work in Economics or Management Science, in our citation-based rankings this proportion is reduced to just over half in the CAY ranking and to 43 % in the hIa ranking.

Second, citation-based performance metrics can be argued to be more democratic, as their “verdict” is based on the reception of the paper by the academic community as a whole, whereas acceptance in a high-impact journal depends on only a handful of gatekeepers (the editor and reviewers). This also increases the chance that a publication-based performance metric is influenced not just by the quality of the papers, but also by particularistic criteria such as the reputation or personal networks of the author, the reputation of the university the author is affiliated with, or its presence on the editorial boards of journals. Although citations are certainly not immune to this mechanism, they are less sensitive to particularistic criteria and the effect might be mediated by journal prestige (Judge et al. 2007). Although we cannot establish with any certainty that these mechanisms are in operation, the citation-based rankings feature more academics from universities not traditionally seen as the primary research universities in Economics & Business in the Netherlands, such as Maastricht, Eindhoven and Nijmegen. They also include a larger number of women and younger academics.

Third, our ranking was conducted with a free software program (Publish or Perish) and a publicly available database (Google Scholar). Hence, any reader can easily replicate the ranking without the need for subscription-based databases or complicated calculations based on percentile scores for the AIS. This also means that any academic can look up their own citation record and easily find out where they would score in the current ranking. For instance, the first author of this article would rank 10th in a CAY-based ranking and 6th in a hIa-based ranking. When attempting to replicate our ranking, please note that Google Scholar is updated every couple of days, and that although one can limit the publication years, it is currently not possible to conduct a citation analysis at a specific date.7 Although the whole-of-career hIa will be relatively stable, with only occasional increases when another publication enters the individual h-index, the CAY metric will continue to increase as the calendar year progresses and citations accrue. Both metrics will decline by definition after the start of a new calendar year, as the “academic age” of the academic in question increases by 1 year.

Fourth, citation-based rankings, and in particular the hIa ranking, are likely to provide more dynamic rankings in terms of changes over the years. The original Economics top-40 is fairly stable over time, especially in the higher ranks. In an analysis of 29 years of the Economics top-40, Franses (2014, p. 1268) indicates that: “The main conclusion from this paper is that a small subset of all Dutch economists dominates the charts for years”. Once an academic supervises a large number of PhD students and is involved in a number of collaborative projects, publication output is likely to remain high, even if the resulting papers do not have a large citation impact. As in the Netherlands only full professors can be primary supervisors, this provides a built-in disadvantage for younger academics and for those working in disciplines or universities that attract fewer PhD students. A ranking based on CAY allows academics who produce fewer, but more impactful, papers and work in smaller teams to enter the ranking. However, unless one limits the comparison to a relatively short period of time, a similar set of—mainly older—academics is still likely to dominate the rankings over the years, as their papers have had a longer time to gather citations. In comparison to the two other rankings, the hIa ranking is therefore likely to be the most dynamic over time. Younger academics can more easily enter this ranking if they perform well relative to their career length in terms of single-author-equivalent impactful papers. It is also more dynamic in that currently ranked academics need to sustain their level of single-author-equivalent impactful publications to remain listed; their hIa will decline over the years if no additional high-impact papers are published.

Any ranking has its limitations, as does any metric on which the ranking in question is based. The nature of rankings means that even small differences in the metrics used can have an important impact on the position of individual academics (or departments, or universities). This is clearly evident in the original Economics top-40, where the rank of individuals varied substantially depending on whether publications were multiplied by the journal impact factor, the raw article influence score or the percentile article influence score (variations that were applied in the methodology in recent years). In our own rankings we made a conscious choice for author- and age-corrected metrics, in order to provide more equitable comparisons between sub-disciplines with different authorship traditions and between academics at different career stages.

However, rankings—especially rankings of individuals—are also vulnerable to minor variations in data input: even one additional publication or citation can make a difference in rank. This is especially true in the lower regions of any ranking; for all three rankings discussed in this paper, the variance in the top-20 is substantially higher than the variance in the bottom-20. In fact, the range of scores in the top-10 for any of the three rankings makes up 73–81 % of the total range of scores in the top-40, suggesting that beyond the top-10 differences are marginal and that it might be better to use bands of scores beyond that point. Beyond identification of the absolute top in a particular discipline, any differences in rank are due as much to the specific ranking criteria, the choice of metrics and the source of data as to genuine differences in performance. Rankings should therefore be considered for what they are: crude instruments to identify top performers that should never be used as the only input for decision-making.
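
As a toy illustration of this range calculation (the scores below are a skewed synthetic series, not the study’s data; citation-derived scores are typically this skewed, cf. Seglen 1992):

    # Share of the total top-40 score range accounted for by the top-10.
    scores = [10 * 0.85 ** i for i in range(40)]  # ranks 1..40, descending
    share = (scores[0] - scores[9]) / (scores[0] - scores[39])
    print(f"top-10 range as share of top-40 range: {share:.0%}")  # ~77 %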

Conclusion

This paper used two relatively new metrics, the CAY metric and the hIa. Which metric one prefers depends on the purpose of the investigation. The CAY is the age- and author-corrected equivalent of the total number of citations, whereas the hIa is the age- and author-corrected equivalent of the h-index. If one is most interested in the cumulative citation impact of an academic, the CAY is more appropriate. If one is most interested in the average number of high-impact publications an academic produces on a yearly basis, the hIa is more appropriate. Just like other research metrics, the CAY or hIa-index should never be used as the sole criterion to evaluate academics. Another crucial question that should always be asked is: “Has the scholar asked an important question and investigated it in such a way that it has the potential to advance societal understanding and well-being?” (see e.g. Adler and Harzing 2009). However, we argue that the hIa-index and the related CAY metric provide an important additional perspective over and above a ranking based purely on publications in high-impact journals. Citation-based rankings were also shown to inject a higher level of diversity in terms of age, gender, discipline and academic affiliation and thus appear to be more inclusive of a wider range of scholarship.
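
In formula form, under our reading of these definitions (with a the academic age in years since the first publication, c_i the citations to paper i and n_i its number of authors):

    \mathrm{CAY} = \frac{1}{a}\sum_{i}\frac{c_i}{n_i},
    \qquad
    \mathrm{hIa} = \frac{h_{I,\mathrm{norm}}}{a},

where h_{I,norm} is the largest h for which h papers each attract at least h author-fractionalised citations c_i/n_i.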

Footnotes

  1.

    The term economist is interpreted more broadly in the Netherlands than in Anglophone countries. In Anglophone countries there is generally a clear separation between Economics and Business and these disciplines might be located in different Faculties or Schools. In the Netherlands, Economics is generally sub-divided into General Economics (Economics), Business Economics (Business, i.e. Management, Marketing, Finance & Accounting) and Quantitative Economics (roughly equivalent to Econometrics and Management Science). Hence the Economics top-40 includes both academics in Economics/Econometrics and Business.

  2.

    ESB also publishes another citation-based ranking, the Polderparade, which is based purely on citations in Dutch magazines and as such is not relevant for our discussion.

  3.

    A recent publication (Abbring et al. 2014) shows that this choice alone dramatically influences the resulting ranking. They propose an alternative publication-based ranking using the raw AIS. Only half of the academics in the original top-40 are present in this new ranking. This clearly shows how vulnerable rankings are to the choice of criteria, something we will return to in our discussion section.

  4.

    Every participating university in the Netherlands (11 in total) can nominate up to 20 (for large universities) or up to 10 (for small universities) economists to be included in the Dutch Economists top-40. Criteria for nomination include at least a 0.2 FTE appointment and at least one publication in a recognised journal in Economics & Business, to ensure the nominee has a link to this field. We received the list of nominees from the team coordinating the ranking, in order to enable an impact analysis beyond the Web of Science (WoS) and to explore opportunities to make the ranking’s methodology more inclusive.

  5.

    Google Scholar is not without its critics (see e.g. Jacso 2010). However, recent large-scale investigations of Google Scholar accuracy (e.g. the LSE project on impact in the social sciences, London School of Economics and Political Science 2011; Harzing 2013) suggest that the level of accuracy, stability and comprehensiveness displayed by Google Scholar is sufficient for bibliometric analyses. In the LSE project, the publications listed and their citing sources were verified manually for duplicate entries, unacknowledged citations, publishers’ publicity materials, etc. These were removed to produce a completely ‘cleaned’ score. The correlation between the original scores and the cleaned scores was 0.95.

  6.

    A similar argument could be made for the original Economics ranking, which is based on recent publications in high-impact journals. Senior academics might have a better chance of getting their papers accepted in these journals, especially if they have published in them before, even if the paper itself isn’t necessarily of higher quality.

  7.

    Please note that this is a limitation that also applies to the ISI database. Although one can limit the year range for articles, it is not possible to do so for citations.

References

  1. Abbring, J. H., Bronnenberg, B. J., Gautier, P. A., & van Ours, J. C. (2014). Dutch Economists top 40. De Economist, 162, 107–114.
  2. Adler, N., & Harzing, A. W. (2009). When knowledge wins: Transcending the sense and nonsense of academic rankings. Academy of Management Learning & Education, 8(1), 72–95.
  3. Börner, K., Dall’Asta, L., Ke, W., & Vespignani, A. (2005). Studying the emerging global brain: Analyzing and visualizing the impact of co-authorship teams. Complexity, 10(4), 57–67.
  4. Bornmann, L., Mutz, R., Hug, S. E., & Daniel, H. D. (2011). A multilevel meta-analysis of studies reporting correlations between the h-index and 37 different h-index variants. Journal of Informetrics, 5(3), 346–359.
  5. Franses, P. H. (2014). Trends in three decades of rankings of Dutch economists. Scientometrics, 98(2), 1257–1268.
  6. García, J. A., Rodriguez-Sánchez, R., & Fdez-Valdivia, J. (2012). A comparison of top economics departments in the US and EU on the basis of the multidimensional prestige of influential articles in 2010. Scientometrics, 93(3), 681–698.
  7. Glänzel, W., & Thijs, B. (2004). Does co-authorship inflate the share of self-citations? Scientometrics, 61(3), 395–404.
  8. Harzing, A. W. (2005). Australian research output in Economics & Business: High volume, low impact? Australian Journal of Management, 30(2), 183–200.
  9. Harzing, A. W. (2007). Publish or Perish. Retrieved February 3, 2014, from http://www.harzing.com/pop.htm
  10. Harzing, A. W. (2013). A preliminary test of Google Scholar as a source for citation data: A longitudinal study of Nobel Prize winners. Scientometrics, 93(3), 1057–1075.
  11. Harzing, A. W., Alakangas, S., & Adams, D. (2014). hIa: An individual annual h-index to accommodate disciplinary and career length differences. Scientometrics, 99(3), 811–821.
  12. Harzing, A. W., & van der Wal, R. (2009). A Google Scholar h-index for journals: An alternative metric to measure journal impact in Economics & Business? Journal of the American Society for Information Science and Technology, 60(1), 41–46.
  13. Jacso, P. (2010). Metadata mega mess in Google Scholar. Online Information Review, 34(1), 175–191.
  14. Jin, J. C., & Choi, E. K. (2014). Citations of most often cited economists: Do scholarly books matter more than quality journals? Pacific Economic Review, 19(1), 8–24.
  15. Judge, T. A., Cable, D. M., Colbert, A. E., & Rynes, S. L. (2007). What causes a management article to be cited—Article, author, or journal? Academy of Management Journal, 50(3), 491–506.
  16. Kalaitzidakis, P., Mamuneas, T. P., & Stengos, T. (2003). Rankings of academic journals and institutions in economics. Journal of the European Economic Association, 1(6), 1346–1366.
  17. Katz, J. S., & Martin, B. R. (1997). What is research collaboration? Research Policy, 26(1), 1–18.
  18. Kodrzycki, Y. K., & Yu, P. (2006). New approaches to ranking economics journals. Contributions in Economic Analysis & Policy, 5(1), 1–40.
  19. London School of Economics and Political Science. (2011). Impact of the social sciences: Maximizing the impact of academic research. Retrieved February 3, 2014, from http://blogs.lse.ac.uk/impactofsocialsciences/
  20. Nederhof, A. J. (2008). Policy impact of bibliometric rankings of research performance of departments and individuals in economics. Scientometrics, 74(1), 163–174.
  21. Prathap, G. (2010). The 100 most prolific economists using the p-index. Scientometrics, 84(1), 167–172.
  22. Scott, L. C., & Mitias, P. M. (1996). Trends in rankings of economics departments in the US: An update. Economic Inquiry, 34(2), 378–400.
  23. Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43(9), 628–638.
  24. Singh, G., Haddad, K. M., & Chow, C. W. (2007). Are articles in “top” management journals necessarily of higher quality? Journal of Management Inquiry, 16(4), 319–331.
  25. Starbuck, W. H. (2005). How much better are the most-prestigious journals? The statistics of academic publication. Organization Science, 16(2), 180–200.
  26. Tol, R. S. (2009). The h-index and its alternatives: An application to the 100 most prolific economists. Scientometrics, 80(2), 317–324.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2014

Authors and Affiliations

  1. ESCP Europe, London, UK
  2. Erasmus University Rotterdam, Rotterdam, The Netherlands
