Sample of business schools
The sample of universities for the USA comprises the 115 universities with the highest levels of research activity in the Carnegie Foundation classification covering the closest (2015) academic year. This ensures that “elites” are not compared with universities that are clearly inferior. Eight universities that lack business schools (such as Brown, Princeton, and Cal Tech) were dropped, as were schools with research master's programs but no MBA (e.g., LSE and UCL). For the UK, the Quacquarelli Symonds (QS) and Times Higher Education (THE) rankings were used to determine how many universities are equivalent in global ranking to the 115 US universities. Nine schools with fewer than five publications in the sample were dropped, leaving 102 US schools and 38 UK schools. Only main campuses are counted, and only the graduate (MBA) school where it is distinct from the undergraduate school (as with Virginia).
Delineating the elite MBA institutions
To differentiate between elite and less elite, or “research,” business schools, MBA rankings for 2016 were used, allowing for the lead time from research conception to publication. The overall measure used is the 2016 MBA ranking in Poets & Quants, a weighted average of rankings from Business Week, Financial Times, Forbes, The Economist, and U.S. News for the USA, and of all but the latter for the UK. Rankings are contestable; none are definitive. However, studies in cognate fields (public administration and sociology) suggest that rankings are quite stable, especially towards the top (Fowles et al., 2016), and that the delineation between “elite” and non-elite departments is enduring (Weakliem et al., 2012). Because rankings of MBA programs are the basis of delineation, the term “elite” refers to that basis only.
Rankings do not provide unequivocal boundaries between elite and research schools. Exclusivity is a requirement for elite status (Leibenstein, 1950), yet a larger set of elite universities provides a conservative way to test elite effects. Thus, the sample was split in two ways. Based on relatively large discrepancies in rankings, 14 US business schools were classed as the “stringent” elite: Berkeley, Chicago, Columbia, Cornell, Dartmouth, Duke, Harvard, Michigan, MIT, Northwestern, Pennsylvania, Stanford, Virginia, and Yale. Seven UK business schools were classed as the “stringent” elite: Cambridge, Cranfield, Lancaster, LBS, Manchester, Oxford, and Warwick. Next, the samples were split into quartiles. This created a US “quartile elite” that adds Carnegie Mellon, Emory, Indiana, North Carolina, Notre Dame, N.Y.U., Rice, Texas, UCLA, and Washington. By the same token, Cass, Durham, Imperial, and Strathclyde were added to form the UK quartile elite.
Sample of journals
Table 1 shows the sample of journals. It was designed to compare the allocation of publications by elite and non-elite business schools across analytical fields and those fields that (in the critics’ view) should be more, not less, developed. Reviewing the critical literature and searching in ProQuest yielded six scholarly fields that are widely promoted as needing more attention: entrepreneurship and innovation (Spender, 2014), ethics and social responsibility (Dyllick, 2015), information systems (Thomas et al., 2013), international management (Jain & Stopford, 2011), operations (Lambert & Enz, 2015), and “soft skills” and organizational behavior (Pfeffer & Fong, 2002). The six fields were broadly defined.
To control for one influence on the allocation of publications across journals, I sampled only journals included in the Financial Times 50 (FT50) list. This is only one ranking list, but an influential one. I selected 22 journals for the managerial fields and nine that cover analytical fields. Based on SJR scores, I added the three most prestigious non-FT50 entrepreneurship journals, International Small Business Journal, Journal of Small Business Management, and Small Business Economics, for comparison with the three FT50 entrepreneurship journals. Three important analytical journals were excluded. Because the aim is to see the effects of factors influencing business schools, and the rankings used to delineate elites are based on business schools, not universities, I counted only authors with at least a joint affiliation in a business school. A consequence of this decision was omitting three FT50 economics journals: The Quarterly Journal of Economics, Journal of Political Economy, and Review of Economic Studies seldom note affiliations below the university level.
Measures and analysis
Many disciplines have recently discouraged the use of p values in reporting findings (e.g., Gardner & Altman, 1986, in medicine). I follow the advice of Gardner and Altman (1986), Meyer et al. (2017), and Schwab (2015), presenting the results as 95% confidence intervals for estimates of differences between classes of schools, together with an estimate of effect size, in this case Glass’ Δ. Probabilities are interpreted as follows: disjoint confidence intervals are taken to mean that the samples differ (P < 0.05); overlapping confidence intervals are taken to mean that there is no difference between the samples (P > 0.05). Effect sizes are interpreted with the “conventional” heuristic that 0.2 standard deviations is “small,” 0.5 is “medium,” and 0.8 is “large” (see Schäfer & Schwarz, 2019, on the limitations of such heuristics absent the research context). Confidence intervals are presented in tables rather than graphs. The results are complex as it is; graphs would be overwhelming. Therefore, bolding and underlining in the tables draw attention to significant results. The tables provide full details for interested readers. Fortunately, the results, reported below, are straightforward.
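For concreteness, the sketch below shows one way these quantities could be computed; it is not the author's code, and the group labels and authorship counts are hypothetical. Glass’ Δ scales the difference in group means by the standard deviation of the comparison group (here assumed to be the research, i.e., non-elite, group), and the interval is a standard Welch 95% confidence interval for a difference in means.

```python
# A minimal sketch (not the author's code) of the reported statistics:
# Glass' delta and a 95% confidence interval for a difference in means.
# Group labels and counts below are hypothetical.
import numpy as np
from scipy import stats

def glass_delta(group, control):
    """Glass' delta: mean difference scaled by the control group's SD."""
    return (np.mean(group) - np.mean(control)) / np.std(control, ddof=1)

def diff_ci_95(a, b):
    """95% CI for the difference in means (Welch, unequal variances)."""
    va, vb = np.var(a, ddof=1) / len(a), np.var(b, ddof=1) / len(b)
    se = np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    t = stats.t.ppf(0.975, df)
    diff = np.mean(a) - np.mean(b)
    return diff - t * se, diff + t * se

elite = np.array([12, 9, 15, 11, 8])   # authorships per elite school (toy data)
research = np.array([5, 7, 4, 6, 3])   # authorships per research school (toy data)
print(glass_delta(elite, research), diff_ci_95(elite, research))
```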
The SJR was used as the measure of journal prestige because it is size-independent and draws from a larger sample of publications than the Eigenfactor score (González-Pereira et al., 2010). One other judgment was needed: should authorships be discounted by the number of co-authors? Studies of scholarly output do so (Abramo et al., 2013). This is appropriate for such a purpose, even though “universities generally do not fully discount… in promotion, tenure, and salary decisions” (Hollis, 2001, p. 526). However, discounting is not appropriate when the research question is the relative association of types of universities with scholarly fields. Unlike most bibliometric studies, this study does not compare authors or journals; it compares business schools. Thus, the unit of observation is the undiscounted authorship, which reflects the allocation of scholarly work regardless of the number of authors.
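To make the distinction concrete, the sketch below contrasts discounted (fractional) counting with the undiscounted authorship counting used here; the school names and article lists are hypothetical.

```python
# A minimal sketch (hypothetical schools and articles) contrasting discounted
# (fractional) counting with the undiscounted authorship counting used here.
from collections import Counter

# Each article lists the business-school affiliation of each author.
articles = [
    ["Elite A", "Elite A", "Research B"],  # three co-authors
    ["Research B"],                        # sole-authored
]

full = Counter()        # undiscounted: one authorship per listed author
fractional = Counter()  # discounted: each article sums to 1 across its authors

for authors in articles:
    for school in authors:
        full[school] += 1
        fractional[school] += 1 / len(authors)

print(full)        # Counter({'Elite A': 2, 'Research B': 2})
print(fractional)  # roughly {'Elite A': 0.67, 'Research B': 1.33}
```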
Two robustness checks
Two robustness checks examine my categorization of the 22 journals into analytical and managerial categories. The first correlates journals based on shared school authorships. This required an adjacency matrix representing the number of school authorships shared by each pair of journals. It was created with the sum-of-cross-minimums method in UCINET6, transforming a two-mode matrix of schools-by-journals into a one-mode matrix of journals mapped onto journals (Borgatti et al., 2002). This method is suited to valued, non-negative data, as in this study (Hanneman & Riddle, 2005, Chap. 6). I used Johnson’s hierarchical method to cluster a correlation matrix of the adjacency matrix. The clustering, shown in Fig. 1, is similar to the categorization of journals used in this study, but there are differences. As expected, the major disjunction, at r = 0.485, is between all nine of the analytical journals and all six of the entrepreneurship journals. The entrepreneurship journals cluster together at a minimum correlation of 0.793. The strongest cluster among them, at 0.881, is ET&P, JBV, and JSBM. However, only four other managerial journals cluster with the entrepreneurship journals: Journal of Business Ethics, Journal of International Business Studies, Journal of Operations Management, and Research Policy. An egregious misclassification is OBHDP, with its 0.956 correlation with finance and OR journals.
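The sketch below illustrates the two-mode to one-mode conversion and the clustering step; it is not the UCINET workflow itself, and the counts and journal labels are hypothetical.

```python
# A minimal sketch (not the UCINET workflow) of converting a two-mode
# schools-by-journals matrix to a one-mode journals-by-journals matrix via
# the sum of cross-minimums, then hierarchically clustering its correlations.
# All counts and labels are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Rows = schools, columns = journals; cells = authorship counts (toy data).
schools_by_journals = np.array([
    [3, 2, 0, 0],
    [1, 4, 0, 1],
    [0, 0, 5, 2],
    [0, 1, 3, 4],
])

n_journals = schools_by_journals.shape[1]
adjacency = np.zeros((n_journals, n_journals))
for i in range(n_journals):
    for j in range(n_journals):
        # Sum of cross-minimums: authorships shared by journals i and j
        adjacency[i, j] = np.minimum(schools_by_journals[:, i],
                                     schools_by_journals[:, j]).sum()

# Correlate the journals' rows of the adjacency matrix, turn the correlations
# into distances, and cluster (complete linkage, one of Johnson's methods).
corr = np.corrcoef(adjacency)
tree = linkage(squareform(1 - corr, checks=False), method="complete")
print(fcluster(tree, t=2, criterion="maxclust"))  # e.g. [1 1 2 2]
```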
A second check counts the number of articles containing Greek mathematical letters. This required a list of such letters and an iterative process to determine which letters are in fact used in the sample. I did not use α, β, μ, or σ, because they are widely used in statistics. I used 17 letters: Γ, γ, Δ, δ, ε, ζ, Θ, θ, Λ, λ, Σ, Φ, φ, Ψ, ψ, Ω, and ω. As a more discriminating set, I also used Θ, θ, Λ, and λ (see footnote 2). Although these sets have only face validity, they served to discriminate amongst the journals, as shown in Table 9. Journals classed as “analytical” have much higher percentages of articles with these letters. The entrepreneurship journals, except SBE, have especially low percentages. These results help explain the relatively low clustering of JFQA with the other finance journals, but leave unexplained the results for OBHDP.
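As an illustration, a sketch of this counting step follows; the article texts are hypothetical, and the two letter sets match those listed above.

```python
# A minimal sketch (hypothetical article texts) of the second check: the share
# of articles whose text contains at least one letter from a given Greek set.
GREEK_17 = set("ΓγΔδεζΘθΛλΣΦφΨψΩω")  # the 17-letter set listed above
GREEK_4 = set("ΘθΛλ")                 # the more discriminating subset

def share_with_greek(texts, letters):
    """Share of articles containing at least one letter from the set."""
    hits = sum(1 for text in texts if any(ch in letters for ch in text))
    return hits / len(texts)

articles = [
    "We estimate θ via maximum likelihood ...",
    "Interviews with founders suggest ...",
]
print(share_with_greek(articles, GREEK_17))  # 0.5
print(share_with_greek(articles, GREEK_4))   # 0.5
```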