Introduction

Measuring the academic quality of schools and scholars is now routine practice. Scholars are ranked according to their output or the citations to their work, typically with some correction for quality. Rankings apart, scholars are evaluated without context, while in reality scholars are affiliated with schools and are part of communities. Schools are ranked according to some aggregation of the scores of their members (Kalaitzidakis et al. 2003; Nederhof 2008; Nederhof and Noyons 1992; Zhu et al. 1991).Footnote 1 However, some institutions are carried by a single, exceptional individual, while others have a large number of good researchers (Crewe 1988). This has implications for the power relations within an institution and for the robustness of its ranking to job mobility. The contextualised performance indicators for individual scholars proposed in this article, when aggregated to schools, measure the fragility and robustness of the performance of schools. The proposed indices thus complement standard measures of academic quality.

The proposed measures are variations of standard ones in economics. The starting point is the Shapley value (Shapley 1953), which measures the average contribution of an agent to any coalition. Instead of any coalition, I use the existing schools. Value is measured cardinally (i.e., the score of the school) and ordinally (i.e., the rank of the school). As far as I know, I am the first to propose this.Footnote 2

I use the normalised Herfindahl–Hirschman Index (HHI) to measure concentration (Herfindahl 1951; Hirschman 1964).Footnote 3 While not new (Cox and Chung 1991; Rubin and Chang 2003), this is not common in the scientometric literature.Footnote 4

While the new indicators constitute the main contribution of this article, the application is interesting too. I illustrate the new measures using data on business schools and business scholars in Ireland. Business research is rarely evaluated, at least in the academic literature (Harzing 2005; Hodder and Hodder 2010; Hogan 1990; Kao and Pao 2009; Vieira and Teixeira 2010).

This article continues as follows. Section 2 specifies the indicators and presents the data. Section 3 applies this to business schools and business scholars in Ireland and discusses the results. Section 4 concludes.

Methods and data

Methods

Let us consider the number of publications P as an indicator for the production of a scholar, and the number of citations C as an indicator for the quality or impact of the research. These are standard indicators, but they are devoid of any context: the performance of a scholar depends only on her own publications and citations.Footnote 5 When individual researchers are ranked according to these indicators, a context emerges—but individuals are treated as independent of one another. In reality, however, scholars group together in schools.

Shapley (1953) introduced a measure for the contribution of an agent (scholar) to a coalition (school). The Shapley value of an agent equals the average contribution of that agent to any coalition. I here define the value of a coalition as the average number of publications or citations. The contribution of a researcher is then the change in the coalition’s value should this researcher leave or join the coalition. I only consider existing institutions, rather than any coalition, and therefore refer to this indicator as a pseudo-Shapley value. Following the convention in the analysis of cartels (d’Aspremont et al. 1983), I further assume that scholars make decisions independently of other scholars about whether to change affiliation or retire.

The pseudo-Shapley value S for the number of publications \( p_{\text{r}} \) of a researcher r in institution 1 is defined as:

$$ S_{\text{r}} := \underbrace{\frac{1}{I}\left[\bar{p}_{1} - \left(\frac{n_{1}}{n_{1}-1}\bar{p}_{1} - \frac{p_{\text{r}}}{n_{1}-1}\right)\right]}_{\text{contribution to own school}} + \underbrace{\frac{1}{I}\sum\limits_{i=2}^{I}\left[\left(\frac{n_{i}}{n_{i}+1}\bar{p}_{i} + \frac{p_{\text{r}}}{n_{i}+1}\right) - \bar{p}_{i}\right]}_{\text{contribution to other schools}} $$
(1)

where \( n_{i} \) is the number of members of institution i, I is the number of institutions, and \( \bar{p}_{i} \) is the average number of publications per researcher at institution i.Footnote 6 Institutions can be ordered at will, so that (1) defines \( S_{\text{r}} \) for a researcher in any school. The pseudo-Shapley value \( S_{\text{r}} \) is the contribution of an individual scholar to the research performance of a school, averaged over all schools. It is increasing in the number of publications and decreasing in the number of scholars.

Equation (1) splits the pseudo-Shapley value into two components. The first component is the actual indicator value for the school minus the indicator value in case scholar r departs.Footnote 7 The second component is the indicator value in case scholar r joins another school minus the actual indicator value, averaged over the other schools.

Note that, in both components, we use the indicator with the scholar minus the indicator without the scholar. Therefore, the pseudo-Shapley value is independent of the current affiliation of the researcher.

The two components are more insightful than the pseudo-Shapley value itself. The first component is a measure of the power of an individual scholar over her school. The second component is a measure of the contribution a scholar would make to the competition, that is, a measure of the market value.

Let us therefore define two new indicators, power P and market value M, as follows:

$$ P_{\text{r}} := \frac{1}{I}\,\frac{p_{\text{r}} - \bar{p}_{1}}{n_{1} - 1} $$
(2)
$$ M_{\text{r}} := \frac{1}{I}\sum\limits_{i=2}^{I}\frac{p_{\text{r}} - \bar{p}_{i}}{n_{i} + 1} $$
(3)

Note that \( P_{\text{r}} + M_{\text{r}} = S_{\text{r}} \). Equation (2) equals the first component of (1), and (3) its second component; in both cases, the expressions have been simplified. Equations (1)–(3) are defined for publication numbers, but the same definitions hold for citation numbers, quality-weighted publication numbers, or indeed any individual performance indicator that would be averaged to assess school performance.
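
To make the definitions concrete, a minimal computational sketch follows. This is not the author's code: the function names, the toy input, and the per-school input format (a list of publication counts per scholar) are all illustrative assumptions.

```python
# A minimal sketch of the cardinal indicators in Eqs. (1)-(3), assuming each
# school is given as a list of per-scholar publication (or citation) counts.
# All names and the input format are illustrative, not from the paper.

def power(p_r, own_school, n_schools):
    """Eq. (2): a scholar's contribution to her own school's average."""
    n = len(own_school)
    p_bar = sum(own_school) / n
    return (p_r - p_bar) / (n_schools * (n - 1))

def market_value(p_r, other_schools, n_schools):
    """Eq. (3): the average contribution on joining each other school."""
    total = 0.0
    for school in other_schools:
        p_bar = sum(school) / len(school)
        total += (p_r - p_bar) / (len(school) + 1)
    return total / n_schools

def pseudo_shapley(p_r, own_school, other_schools):
    """Eq. (1): S_r = P_r + M_r."""
    n_schools = 1 + len(other_schools)
    return (power(p_r, own_school, n_schools)
            + market_value(p_r, other_schools, n_schools))

# Toy example: a scholar with 10 papers in a three-person school,
# facing two other schools.
print(pseudo_shapley(10, [10, 2, 0], [[5, 5], [1, 0, 0, 0]]))  # ~2.21
```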

In (1)–(3), the contribution of a scholar is defined as her contribution to an indicator score of a school. One may also consider the contribution of a researcher to the rank of any existing institution.Footnote 8 This could be referred to as an ordinal pseudo-Shapley value,Footnote 9 and it splits into an ordinal power indicator and an ordinal market indicator.

In fact, the ordinal power and market indicators contain more information than the ordinal pseudo-Shapley value itself. The ordinal pseudo-Shapley value measures the change in rankings should a scholar leave a school to join another. This would push one school up in the rankings and another one down, while the rankings of third schools may be affected too. Indeed, the net change in the rankings of all schools is zero, so gains and losses cancel in the aggregate; the separate power and market components reveal what their sum conceals.
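
The paper does not spell out the ordinal computation in formulas; the sketch below is one plausible reading that mirrors (2) and (3): re-rank the schools after a hypothetical departure or arrival and record the rank changes. The sign conventions follow the discussion of the results below (positive power when departure would hurt the school's rank, positive market value when arrival would improve another school's rank), and the division by the number of schools in the market indicator mirrors (3); both are assumptions.

```python
# A sketch of the ordinal power and market indicators, assuming schools are
# ranked by average score (rank 1 = best). A plausible reconstruction that
# mirrors Eqs. (2)-(3), not the paper's verbatim procedure.

def rank_of(target, averages):
    """Rank of one school when schools are ordered by average score."""
    ordered = sorted(averages, key=averages.get, reverse=True)
    return ordered.index(target) + 1

def ordinal_power(score_r, own, schools):
    """Change in the own school's rank should scholar r depart
    (positive = the school would fall in the rankings)."""
    averages = {s: sum(v) / len(v) for s, v in schools.items()}
    before = rank_of(own, averages)
    members = schools[own]
    averages[own] = (sum(members) - score_r) / (len(members) - 1)
    return rank_of(own, averages) - before

def ordinal_market(score_r, own, schools):
    """Average rank improvement of the other schools should scholar r
    join them (positive = the receiving school would rise)."""
    averages = {s: sum(v) / len(v) for s, v in schools.items()}
    changes = 0
    for s, members in schools.items():
        if s == own:
            continue
        trial = dict(averages)
        trial[s] = (sum(members) + score_r) / (len(members) + 1)
        changes += rank_of(s, averages) - rank_of(s, trial)
    return changes / len(schools)  # division by I mirrors Eq. (3)
```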

In the application below, for individual scholars, I report the number of publications, the number of citations, the cardinal power and market indicators (based on both publications and citations), and the ordinal power and market indicators (based on both publications and citations).

For schools, I report the average number of publications and citations, and the average cardinal and ordinal power and market indicators. Note that the average cardinal power indicators are included for completeness only: the cardinal power index is an individual's contribution to the school average, so its average within a school is zero. A high value of the average ordinal power measure indicates that a few scholars contribute most of the publications or citations of a school, because individual scholars would have little power over their school if all their colleagues performed equally well.
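
That the average cardinal power is zero follows directly from (2): summing over the \( n_{1} \) members of a school,

$$ \sum\limits_{r=1}^{n_{1}} P_{\text{r}} = \frac{1}{I(n_{1}-1)}\sum\limits_{r=1}^{n_{1}}\left(p_{\text{r}} - \bar{p}_{1}\right) = 0 $$

since the publication counts sum to \( n_{1}\bar{p}_{1} \).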

I compare the school’s power indicator to the HHI, which is a standard measure for the concentration of market share. The HHI is defined as the sum of the squared shares:

$$ \text{HHI} = \sum\limits_{i=1}^{n}\left(\frac{p_{i}}{p}\right)^{2}; \quad p = \sum\limits_{i=1}^{n} p_{i} $$
(4)

where \( p_{i} \) is the number of publications (say) of scholar i and n is the number of scholars (including those who did not publish). The HHI lies between 1/n and one. It equals one if a single scholar authored all publications of a school (monopoly), and 1/n if all scholars in a school published the same number of papers. Because institutions differ in size, I use the normalised HHI:

$$ \text{HHI}^{*} = \frac{\text{HHI} - \text{HHI}_{\min}}{1 - \text{HHI}_{\min}}; \quad \text{HHI}_{\min} = \sum\limits_{i=1}^{n}\left(\frac{1}{n}\right)^{2} = \frac{1}{n} $$
(5)

HHI* lies between zero and one, regardless of n.
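
A minimal sketch of (4) and (5) follows, again with illustrative names; it assumes the school has at least two scholars and at least one publication, since HHI* is undefined otherwise.

```python
# A minimal sketch of Eqs. (4)-(5): the HHI and its normalised variant,
# computed from per-scholar publication counts (zeros included).
# Assumes n >= 2 and a positive total; all names are illustrative.

def hhi(pubs):
    total = sum(pubs)
    return sum((p / total) ** 2 for p in pubs)

def hhi_normalised(pubs):
    n = len(pubs)
    return (hhi(pubs) - 1 / n) / (1 - 1 / n)

print(hhi([4, 1, 1, 0]))             # 0.5
print(hhi_normalised([4, 1, 1, 0]))  # 1/3: moderately concentrated
```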

There is no reason to assume that the HHI and the power indicator will yield similar results. Each scholar's term in the HHI is quadratic in her number of publications and inversely proportional to the square of the school's total publications. The power indicator also increases with a scholar's publications and decreases with her school's average, but linearly in both cases.

Data

I illustrate the above indicators with business scholars and business schools on the island of Ireland. Business schools are hard to define. Some universities have an entity called a “school of business”, but other institutions mix business studies with other disciplines, or spread business studies over a number of schools. Table 1 shows, for each of the 11 institutions,Footnote 10 the schools (colleges, faculties) and their departments (schools, groups). There are two contentious issues. First, it is difficult to draw a line between business studies and economics: the topics are closely connected and often taught together. This study includes those economists who teach in business schools, but excludes other economists. The second issue is tourism. In two institutions, tourism studies are part of the business school, while elsewhere they are placed in other departments or in a separate department of their own. Here, tourism scholars are included if they teach in a business school.

Table 1 Business schools assessed in this study

For this study, business scholars are scholars who are employed in the business schools as defined in Table 1. People were identified as listed on the institutions' websites in early September 2010. There is no guarantee that these lists are accurate; indeed, several errors were uncovered (and corrected) during the data vetting process (see below). However, the websites are the only source of information available.

There are a total of 748 business scholars in Ireland. In addition, business schools employ administrative staff, teaching and research assistants, and PhD students, all of whom were excluded (if so identified). Business schools also have a large number of adjunct faculty (typically, senior business people who teach a few classes a year), while some business schools also host research staff from companies. These people were excluded too.

748 is a substantial number of scholars, each of whom has to be assessed individually.Footnote 11 For that reason, a simple method is used. Data were collected from ScopusFootnote 12 only. Scopus has a much broader coverage than the Web of ScienceFootnote 13 for recent years (but limited coverage before 1996). As Irish business scholars tend to be relatively young and tend to publish outside the core journals, Scopus is the more appropriate source of data; see also Vieira and Gomes (2009). Nonetheless, some journals are not covered in Scopus, including a number of particular importance to business scholars in Ireland (e.g., Administration, Irish Journal of Management, Irish Marketing Journal, Irish Marketing Review). Google ScholarFootnote 14 (and thus Publish or Perish (Harzing 2010)Footnote 15 and ScholarometerFootnote 16) has a wider coverage than Scopus and would thus be more appropriate for business scholars (Mingers and Lipitakis 2010). However, Google Scholar suffers from a lack of quality control on publications and citations.Footnote 17

Four statistics were gathered from Scopus: year of first publication, number of publications, number of citations, and h-index (Hirsch 2005). Each scholar's name, affiliation, specialisation, degree, rank, and sex were also recorded. Here, I only use publications, citations, and affiliations.

The data are available at: http://hdl.handle.net/1902.1/15802.

The data have been cross-checked against CVs where these were available online. Three preliminary versions of the data were published at IrishEconomy,Footnote 18 with an explicit invitation to correct the data where needed. All heads of department were notified of the exercise and invited to comment. This vetting process led to substantial changes in the data: people and indeed departments were added; administrative, adjunct, and trainee staff were removed; and publication and citation records were corrected.

The actual performance of business schools and scholars is discussed in detail in a companion paper (Tol 2010). I here focus on the information contained in the new indicators.

Results

Scholars

Equation (2) shows that the power indicator is linear in the number of publications or citations, with slope \( 1/[I(n_{1}-1)] \), the inverse of the number of schools times the number of scholars in the school less one. The negative intercept equals the average number of publications or citations in the school, divided by the number of schools and by the number of scholars in the school less one. That is, both the intercept and the slope differ between schools but are identical within a school. Because of this, the power ranking within a school is the same as the publication or citation ranking within that school.
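
To see why, rewrite (2) as an affine function of \( p_{\text{r}} \):

$$ P_{\text{r}} = \frac{1}{I(n_{1}-1)}\,p_{\text{r}} - \frac{\bar{p}_{1}}{I(n_{1}-1)} $$

Within a school, both coefficients are constants and the slope is positive, so ordering scholars by \( P_{\text{r}} \) is the same as ordering them by \( p_{\text{r}} \).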

Figure 1 shows the cardinal power indicators for the 748 business scholars in Ireland, plotted against the number of publications and citations. Power over one's own school increases with production and influence, but the rate differs between schools. TCD is the smallest school and therefore has the steepest slope. UU and DIT are the largest schools, and their slopes are thus the shallowest. Very productive or influential TCD scholars therefore have greater power than equally productive or influential UU scholars.

Fig. 1 Cardinal power indicators for publications (top panel) and citations (bottom panel) as a function of the number of publications and citations, respectively

Figure 2 shows the ordinal power indices. The pattern is very different from that in Fig. 1. Rank changes are integers, so average rank changes are discontinuous. Like the cardinal power indicators, the ordinal power indicators increase in the number of publications and citations. However, context matters much more. While some TCD scholars have a great impact on the score of their small school, TCD is so far ahead of the other business schools that only one scholar could, by departing, affect its ranking on publications, and none could affect TCD's ranking on citations. DCU's performance, on the other hand, is similar to that of some of its competitors, and a number of DCU scholars could affect its ranking by departing.

Fig. 2 Ordinal power indicators for publications (top panel) and citations (bottom panel) as a function of the number of publications and citations, respectively

Equation (3) shows that market value is linear in the number of publications or citations. The intercept depends on the average number of publications or citations in all schools but one's own, and on the number of scholars in those schools. The slope depends only on the number of scholars in the other schools. One would therefore expect all scholars to lie on roughly the same line. Figure 3 confirms this: market value rises linearly with the number of either publications or citations, and there is little difference between schools. The slope varies between 0.0165 for TCD (the smallest school) and 0.0202 for DIT (the largest school). This is intuitive: for a scholar at a large school, the other schools are smaller on average, so her arrival would have a greater impact. Differences are small, however.
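
Differentiating (3) with respect to \( p_{\text{r}} \) makes the school-specific slope explicit:

$$ \frac{\partial M_{\text{r}}}{\partial p_{\text{r}}} = \frac{1}{I}\sum\limits_{i=2}^{I}\frac{1}{n_{i}+1} $$

The sum excludes the scholar's own school. A large school would contribute only a small term \( 1/(n_{i}+1) \), so excluding it (as for DIT scholars) leaves a slightly steeper slope than excluding a small school (as for TCD scholars); this accounts for the gap between the slopes reported above.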

Fig. 3 Cardinal market value indicators for publications (top panel) and citations (bottom panel) as a function of the number of publications and citations, respectively

Figure 4 shows the ordinal market value indicators. The indicator is discontinuous. Market value increases with the number of publications and citations. However, scholars from some schools need to publish more, or be cited more, to command the same market value as scholars of other schools. If a scholar moves from one school to another, the impact on the rankings depends on her productivity and influence, on the initial score and size of both schools, and on their positions relative to other schools. With so many variables, the pattern is highly non-linear and hard to interpret or predict.

Fig. 4 Ordinal market value indicators for publications (top panel) and citations (bottom panel) as a function of the number of publications and citations, respectively

Consider, for instance, the four scholars (one at NUIG, two at QUB, and one at UU) with an ordinal market value, based on publications, of 0.64. Their publication numbers range from 31 to 49; the discretisation of the ranking groups them together at a single market value. There is another QUB scholar with an ordinal market value of 0.73 and 46 publications. He scores better than his QUB fellows with 35 and 37 papers, and also better than the UU scholar with 49 publications. The reason is that ranks change if the QUB scholar moves to UU, but not if the UU scholar moves to QUB. The market value indicator thus captures part of the context in a way that simple publication or citation numbers cannot.

Schools

Table 2 shows the performance indicators for the 11 business schools in Ireland. The smallest school employs only 20 scholars, the largest 147. Average output ranges from 0.2 to 10.9 published papers per scholar, and average impact from 0.7 to 63.3 citations per scholar.

Table 2 Indicators for production (publications) and impact (citations) for eleven research-oriented business schools in Ireland

The average cardinal power indicator is zero by definition. The average ordinal power indicator ranges from −2.28 to +2.22 for publications and from −4.81 to 0.07 for citations. A negative value indicates that the school's rank would improve if the average scholar departed. A positive value indicates that the departure of a scholar would, on average, lead to a lower rank; a scholar could use this threat to exert power. There are also schools with an average power index of zero, that is, schools whose ranking the departure of a scholar would not, on average, affect. Zero values are more prevalent near the top and bottom of the publication and citation ranks, because there ranks can change in one direction only.

Table 2 also shows the Herfindahl–Hirschman Indices, which range from 0.03 to 0.24 for publications and from 0.05 to 0.90 for citations. Citations are more concentrated than publications, except at UCD.

Figure 5 plots the concentration indices against the power indices. Although there is a suggestion of a relationship (greater concentration and greater power seem to go together), the two indicators clearly measure different things. NCI, for instance, has an extremely high concentration of citations. However, as NCI is ranked last on citations, highly cited researchers at NCI cannot exert any influence over NCI's rank. Publications are less concentrated at NCI, but as NCI is ranked 10th, the (threatened) departure of a highly productive scholar would (potentially) affect NCI's publication rank. Thus, while the HHI measures concentration, the power index contextualises this and measures whether exceptional scholars can exert influence.

Fig. 5 The HHI of concentration of publications (left axis) and citations (right axis) as a function of the average power indicator

DCU scores highest on the ordinal power indicator for publications. The departure of either of its top two scholars would see DCU drop 1.8 places on average (from 7th to 9th in most cases, and from 7th to 8th in some). There are another ten scholars whose departure would cause DCU's publication rank to fall. However, the departure of any of the other 51 scholars would not change the rank. The large size of the department also means that the HHI is relatively small: although a few individuals stand out for their number of publications, their share in the total output of the department is nonetheless small.

UCC scores lowest on the ordinal power indicator for publications. It would rise in the rankings if any of its 25 worst performers were to leave, but only the departure of one of its 5 best performers would lead to a drop in the rankings. NUIM scores lowest on the ordinal power indicator for citations. The departure of any of its 18 worst performers would improve its ranking, while the departure of only two scholars would lead to a drop in rank.

The ordinal power indicator thus identifies schools with potential problems. In one school, two scholars can sway the rankings; in two other schools, a ranking-conscious head of department may ask certain people to leave.

Table 2 further shows the average market value indicators, both cardinal and ordinal. Figure 6 plots the market value indices against the number of publications and citations. Cardinal market value is linear in these numbers. This is true for the individual scores (cf. Fig. 3), and therefore also for the average scores. Ordinal market value tends to increase with the numbers, but a richer pattern emerges because ordinal values take context into account. Figure 4 shows that the association between publications or citations and market value differs between schools; Fig. 6 confirms this.

Fig. 6 The average market value (cardinal, left axis; ordinal, right axis) as a function of the number of publications (top panel) and citations (bottom panel)

Discussion and conclusion

Current performance measures of individual scholars ignore the context of scholarship. I introduce pseudo-Shapley values that measure a scholar's contribution to (or power over) her own school, and her value to other schools should they hire her. I illustrate the proposed measures with business scholars and business schools in Ireland. Although conceptually superior, the power indicators rank scholars within a school identically to conventional performance measures, while the market value indicators yield an identical ranking within schools and a very similar ranking between schools. I introduce both cardinal and ordinal indicators. The ordinal indices further contextualise performance measures and thus deviate more from conventional indicators. Furthermore, as the ordinal measures are discontinuous by construction, a natural classification of scholars emerges. Averaged over schools, the ordinal and particularly the cardinal market values offer little extra information over the average publication and citation numbers. The ordinal power measure gives, for the first time, an indication of the robustness or fragility of an institution's place in the rank order. It is only weakly correlated with the Herfindahl–Hirschman concentration index of publications and citations.

The proposed measures open up new avenues for research. Do scholars prefer to work in schools in which they are powerful, or would they rather work in a place where their average colleague outperforms them (so that they maximise learning)? This cannot be answered with a cross-section (as used in this article); panel data are required to separate cause and effect. Is the ranking of fragile schools (according to the measures proposed here) indeed more volatile? This would again require data for multiple years.

The measures themselves can be further refined too. In particular, I used publication and citation numbers for scholars, and their averages for schools; the mathematics is therefore rather straightforward. The power and market value indicators could also be defined over more complex performance measures, such as the h-index (Hirsch 2005). Furthermore, I assume that scholars are individuals and negotiate as such with their schools. It is not uncommon, however, for a team of scholars to move from one school to another. Pseudo-Shapley values naturally generalise to this case (indeed, the actual Shapley value is defined for any coalition), but this was not considered here. All this is deferred to future research.