Abstract
This paper presents rankings of U.S. public policy schools based on their research publication output. In 2016 we collected the names of about 5000 faculty members at 44 such schools. We use bibliographic databases to gather measures of the quality and quantity of these individuals’ academic publications. These measures include the number of articles and books written, the quality of the journals the articles have appeared in, and the number of citations all have garnered. We aggregate these data to the school level to produce a set of rankings. The results differ significantly from existing rankings, and in addition display substantial across-field variation.
Introduction
Recent years have seen an increase in the number and variety of university rankings. This growth is likely due, at least partially, to demand for information from participants in educational markets. For instance, prospective students might wish to see measures of university reputation, given that attending different schools has been shown to have a causal impact on individuals’ career outcomes.Footnote 1 Moreover, universities pursue a diversity of goals: While some individuals may be interested in which schools offer the most financial aid or the smallest classes, others may be interested in which generate the greatest gains for low-income students.Footnote 2 In this environment—and especially given expanding data availability—the best outcome might be for a large amount of information to be available on each school. Using these data, market participants can generate rankings focused on the inputs or outputs of their interest.
Consider the case of undergraduate college rankings as exemplifying such a high-information outcome. There are a multitude of college rankings available, and one of the more popular, that produced by U.S. News and World Report (USNWR), is based on quantitative indicators updated annually. USNWR’s online interface, further, allows users to generate rankings based on subsets of these measures (e.g. selectivity). In addition, research-based rankings, such as the so-called Shanghai and Leiden rankings,Footnote 3 cover the universities that house many of these colleges. Besides undergraduate colleges, schools of business, law, and medicine also see numerous rankings, including some based on multiple indicators.
Public policy schools are towards the opposite end in terms of the availability of such information. Few rankings exist, and the one produced by USNWR is based on a single input: a survey (of about 500 people) asking only one question.Footnote 4 Similarly, the ranking of international relations schools produced by Foreign Policy is based on a single survey question.Footnote 5 An issue with these survey-based rankings is that respondents, although experts, might have little knowledge about some schools (272 of them in USNWR). The respondents might base their answers on those of other experts—or even on the previous iteration of rankings—and the resulting scores may eventually contain little information.
Our goal here is to make a simple contribution towards addressing the relative dearth of data on public policy schools. We present rankings of these schools based on the quantity and quality of research output their faculty members produce. Our research metrics are constructed only from the output of policy school faculty, illustrating the use of bibliographic data in assessing the research output of multidisciplinary sub-university academic units. This is in contrast with most research-based rankings, which consider either entire universities, or unidisciplinary departments.
Specifically, we collected the names of about 5000 faculty members at 44 public policy schools. We use bibliographic databases to gather measures of the quantity and quality of these individuals’ publications. These measures include the number of articles and books written, the quality of journals the articles have appeared in, and the number of citations all have garnered. We then aggregate these measures to produce school rankings. We report multiple rankings that may be of interest to different sets of agents: for instance, to administrators interested in research output in different disciplines, and to students who regard research output as one measure of faculty quality.
Our approach differs from those that yield existing rankings. Williams et al. (2014) rank universities in terms of their public administration research output. Their approach is close to ours in that they use bibliographic information, but it differs in two respects. First, they consider the performance of professors in entire universities, while we focus on faculty affiliated with specific public policy schools. Second, they consider only publications in journals of public administration, while we consider all types of research output. These choices are related. We opt to consider all types of research since faculty at public policy schools are active in diverse research areas—e.g., political science, climate science, and economics. Taking this inclusive perspective makes it important to focus only on faculty actually affiliated with the schools in question. Otherwise the procedure risks generating a university (rather than a policy school) ranking along the lines of the Shanghai ranking. All this said, we also report some discipline-specific results.
Our approach differs more fundamentally from that in the most publicized rankings, Foreign Policy and USNWR. As stated, those originate in single questions asking respondents for a broad assessment of each school. For instance, in the 2016 iteration, USNWR asked two individuals at each of 272 schools to rate the quality of master’s programs on a scale from 1 to 5.Footnote 6 Such an approach has advantages and disadvantages. On the one hand, it involves relatively little data collection, saving effort and reducing schools’ incentives to report data so as to “game” the rankings.Footnote 7 In addition, the survey respondents may condense a large amount of information; their view may be comprehensive and nuanced relative to what quantitative data can provide. On the other hand, the 1–5 grading results in a lack of granularity. In the 2016 ranking, for example, six schools tied for 13th place, while seven tied for 34th. It also contributes to volatility; in the most recent iteration three schools moved by ten ranks. This could be due to real developments, but also to fairly minor changes in a small number of respondents’ scores.
Further, USNWR’s procedure may result in what economists call an “informational cascade”.Footnote 8 Namely, when making choices it can make sense to rely on the opinions of other market participants, particularly experts. The problem is that if these experts themselves rely on other experts, the ranking may eventually contain little information. For instance, some survey respondents may just look at a previous ranking and provide similar scores. This is particularly a consideration given that USNWR asks respondents to evaluate 272 schools, and it seems highly likely that some respondents do not know all these institutions well. Related to this, the 2016 iteration of the USNWR ranking does not report a result for more than 100 schools which have scores of two or lower (out of five). While we do not know the exact reason for this omission, one possibility is that only a fraction of respondents rank such schools, limiting the reliability of the overall evaluation.Footnote 9 Relatedly, USNWR reports that the response rate on its survey is 43 percent.
Perhaps because of these (and other) factors, our rankings differ significantly from those produced by USNWR. We describe our method in detail in Sect. 2. Section 3 reports and discusses our rankings. Section 4 concludes.
Methods
This section describes the five components of the approach we used to generate our rankings.
Sample of schools
Our procedure begins by establishing a sample of schools to consider. We include the public policy schools that USNWR ranked 41 or better in its “Best Graduate Public Affairs Programs” in 2016. The universe of schools USNWR considers, we understand, is determined in coordination with NASPAA, the Network of Schools of Public Policy, Affairs, and Administration, and APPAM, the Association for Public Policy Analysis and Management. As stated, that ranking features 272 schools, but due to cost considerations (as discussed below, the procedure involved downloading and cleaning faculty listings) we include only about forty schools. Ranking ties make it impossible to select exactly forty, so we consider forty-four. The resulting sample is not representative of U.S. policy schools, and is especially tilted toward higher-ranked schools. Thus, we wish to be clear that our results may not be externally valid to schools beyond those in our sample.
Table 1 (page 6) lists the schools in our sample in alphabetical order. The first column contains the designation used by USNWR. In most cases this includes the name of the school in parentheses, leaving no ambiguity as to the institution in question. For example, “University of Michigan-Ann Arbor (Ford)” refers to the Gerald R. Ford School of Public Policy at the University of Michigan at Ann Arbor. For cases where a name was not stated in parentheses, column 2 lists the school or program which we considered.Footnote 10 Finally, column 3 lists the names we use to designate schools in subsequent tables, in some cases with abbreviations for the sake of space. Even though some institutions listed are not technically schools—for example in a few cases they are programs or institutes within another academic unit—we will henceforth refer to them as schools. In the text below we will use the designation in column 3 the first time we refer to a specific school; subsequently we will use further common abbreviations or designations.
Faculty listings
The next task is to construct lists of faculty names for each school. We downloaded the faculty listings for each of the 44 schools from their official websites during June 2016, completing the collection by June 30 of that year. The results therefore do not reflect faculty membership changes after that date. This procedure yielded 4927 unique faculty members, an average of about 112 per institution.
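To make this step concrete, the sketch below shows how one school’s directory page could be fetched and its names extracted. This is a minimal illustration rather than the code we ran: the URL and the CSS selector are hypothetical, and in practice each school’s site required its own parsing rules.

```python
# Minimal sketch of the faculty-listing step: fetch one school's directory page
# and pull out faculty names. The URL and the ".faculty-name" CSS selector are
# hypothetical placeholders; each of the 44 sites would need its own rules.
import requests
from bs4 import BeautifulSoup

def scrape_faculty_names(directory_url: str) -> list:
    html = requests.get(directory_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    names = [tag.get_text(strip=True) for tag in soup.select(".faculty-name")]
    # De-duplicate while preserving order (some pages list a person twice).
    seen, unique = set(), []
    for name in names:
        if name not in seen:
            seen.add(name)
            unique.append(name)
    return unique

# Example (hypothetical URL):
# names = scrape_faculty_names("https://policyschool.example.edu/faculty")
```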
This number might seem large, and part of the reason is that we include all faculty members listed (e.g. adjunct professors, lecturers, visitors, professors emeriti, affiliated professors, etc.). One could certainly make reasonable arguments for a narrower focus. For example, it might be better to consider only individuals that schools describe as their “core” faculty, or to consider only faculty members who have a primary or sole (as opposed to affiliated or joint) appointment in the policy school.Footnote 11 Arriving at such a focus, however, would have required multiple subjective decisions and/or substantial additional data collection.
To elaborate, inspection of the faculty listings (in some cases combined with specific knowledge that authors have of colleagues’ affiliations) reveals that different schools follow different conventions in terms of how they list and classify faculty. For example, some schools list “core” faculty members, while others do not. A few schools list visitors as part of their core faculty; most do not. Some schools list adjunct professors even if they are not actively teaching that specific year; some provide no adjunct listing at all. Some list affiliated faculty, while others either do not have them or do not provide their names. Arriving at a uniform method to classify faculty would thus have required contacting each school, developing a common set of conventions, and working to arrive at a new listing, an exercise beyond the scope of this paper. As a result, we simply include in a school’s roster any individual that it listed as a faculty member, making no distinction between categories.
The fact that we make no adjustments for faculty composition is a major determinant of our choice to focus on results that rank schools by total research output, as opposed to adjusting output by the number of faculty to calculate productivity-type measures. Specifically, while we do present a few results that adjust for schools’ faculty sizes, we note that constructing these requires building ratios for each school, and we cannot be confident that the denominators are strictly comparable across institutions. We discuss this issue further below, and note in closing that the difficulty in arriving at comparable faculty counts may be one reason why the most widely-publicized research-based rankings typically focus only on universities’ aggregate output.
Matching faculty to bibliographic databases
We then obtained information on the research output of each of the 4927 individuals identified, with the final data collection happening in April of 2017. Thus, the results below reflect the cumulative output of individuals as measured in April 2017, based on their June 2016 affiliations.
We attempted to get measures of output using two bibliographic databases: the Web of Science citation index produced by Thomson Reuters,Footnote 12 and the Scopus index produced by Elsevier.Footnote 13 In our extracts from both databases the basic unit of observation is the publication, and these observations are in turn associated with information like the network of citations the publication has received. Each database uses algorithms to attribute publications to individuals, who can be identified by their name and affiliation.
For example, consider “Jane Doe,” who is currently affiliated with School X at University Y (obviously hypothetical names, although the numbers we discuss below refer to a real individual for whom we engaged in additional investigation to check the matching results). The bibliographic databases do not mention specific schools (e.g. the Ford School at the University of Michigan-Ann Arbor), and so we search for the combination of “Jane Doe” and “University Y.” We find Professor Doe in the Scopus database, for instance, and observe that she has publications listed since 2002. In this particular case, all these data points are confirmed by manual inspection of Professor Doe’s C.V., which we obtained online.Footnote 14 Both Web of Science and Scopus use algorithms to attribute publications to authors, and in this particular case these result in all these papers being attributed to Professor Doe at University Y. We note that this is the case even though Professor Doe only moved to University Y in 2006, having worked at multiple universities before.
While the outlines of this exercise are possible with both bibliographic databases, in the end we only used the Scopus data. This reflects the fact that Web of Science identifies authors only by first initial and last name. While some faculty members have fairly distinctive surnames, a name like “J. Smith” can produce a large number of spurious matches even within a single university. This is likely an additional reason why other research-focused rankings consider universities rather than schools.
Even though Scopus uses both first and last names, our procedure can still result in spurious matches, for example when a university has a “John Smith” in both its school of public policy and its chemistry department. The Scopus platform does try to account for these issues by using other contextual information to arrive at unique matches. While there are still errors and our data almost surely include both false positives and false negatives, this noise is greatly reduced relative to what we would see with Web of Science.
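The matching step can be illustrated with a stylized query against Elsevier’s Scopus Author Search API. The sketch below is an assumption-laden illustration rather than our exact procedure: it presumes API access (an `api_key`), and the query syntax and response fields follow our reading of Elsevier’s documented REST interface, so details should be checked against that documentation.

```python
# Stylized sketch of matching a faculty name plus university to Scopus author
# profiles. Endpoint, query syntax, and response fields follow Elsevier's
# documented Author Search API as we understand it; treat the details as
# illustrative, and note that a valid API key is assumed.
import requests

SCOPUS_AUTHOR_SEARCH = "https://api.elsevier.com/content/search/author"

def match_author(first: str, last: str, university: str, api_key: str) -> list:
    query = f"AUTHLAST({last}) AND AUTHFIRST({first}) AND AFFIL({university})"
    resp = requests.get(
        SCOPUS_AUTHOR_SEARCH,
        params={"query": query, "count": 5},
        headers={"X-ELS-APIKey": api_key, "Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    entries = resp.json().get("search-results", {}).get("entry", [])
    # Return candidate author identifiers; ambiguous cases (several candidates,
    # or none) are better flagged for manual review than guessed.
    return [e.get("dc:identifier", "") for e in entries]

# Example (hypothetical names, as in the text):
# candidates = match_author("Jane", "Doe", "University Y", api_key="...")
```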
The results of the match, by institution, are in Table 2 (page 9). Despite the concerns about “overmatching” (e.g. matching to faculty members in more than one department who share the same name), it at first appears that the greater issue is having no match at all—only 51 percent of all our names produce a match in Scopus. What we conjecture, based on manual inspection and internet searches, is that in many cases the non-matches are due to listed faculty having no publications in these databases.
For example, young assistant professors who have not yet published will not show up. But we venture that the larger share of these non-matches is due to the inclusion of non-research faculty in our listings. Many adjunct faculty members’ main line of work is not at a university, and hence they may not have published in the venues covered by databases like Scopus.
We did discover that some non-matches were due to procedural error. For example, professors with three words in their last name did not always match, an issue we were able to correct. On the other hand, faculty members who change their name after marriage, or who abbreviate parts of their first names inconsistently, raise issues that are harder to address. In addition, through some manual exploration we identified what seem like isolated cases of faculty members who have multiple publications and yet are not matched to the university that our lists (and online confirmation) indicate they are affiliated with.
Where we identified this issue, it appears to stem from the way the Scopus algorithms operate. To cite one example, we found isolated cases of economists who are not matched to a university. Where this happened, it was because they have an additional affiliation that they cite very frequently, in a way that (for reasons we do not fully understand, since we do not have access to the algorithm) results in their being matched to this other institution. For example, some economists have many papers listed in the National Bureau of Economic Research (NBER) working paper series. For a small subset of these individuals, Scopus lists the NBER as the primary affiliation, even though in all the cases we saw they work at a university (which presumably they would say is their main affiliation). We did not correct this problem since it had little if any effect on our rankings. We also decided that trying to correct discipline-specific issues could introduce bias, since we have better knowledge of some disciplines than others. The bottom line is that there are many potential sources of noise in information from these sources, and some such noise remains in our data.
It is also worth noting that the match rate varies considerably across schools. Specifically, in Table 2 (page 9) column 1 lists the number of faculty members listed at each school, and column 2 the proportion found in the Scopus database. Column 3 lists the final number of faculty members (those found) used in all the results below. While Florida State University (Askew) has a match rate of one hundred percent, several schools have rates below thirty percent. This variation changes schools’ relative faculty counts between column 1 and column 3. For example, the top four schools by faculty size (from largest to smallest) in column 1 are Columbia University (SIPA), Harvard University (Kennedy), Carnegie Mellon University (Heinz), and New York University (Wagner). Once one considers faculty actually matched (column 3), the top four are Harvard, Syracuse University (Maxwell), Columbia, and Princeton University (Wilson). This suggests, for instance, that Columbia, Carnegie Mellon, and NYU likely have a higher prevalence of adjunct instructors than Harvard, Syracuse, or Princeton.
Note also that, while less prevalent than in the natural sciences, co-authorship is relatively common (and growing) in the social sciences and the other areas in which schools of public policy are active. For measuring research productivity, the question arises of how to assign credit among multiple authors. One could, for example, give half credit to each author of a two-author work. We decided not to adjust for co-authorship, and all matched authors receive full credit. This is the convention for the standard academic h-index and the norm for promotion decisions in some fields. The rankings could be distorted if authors at some institutions systematically co-author more than others, but we found no indication that this would have a qualitative impact on our results.
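To make the co-authorship convention concrete, the toy sketch below contrasts the full-credit rule we adopt with a fractional (1/n) alternative; the data frame and column names are hypothetical, with one row per (publication, matched author) pair.

```python
# Contrast of the full-credit rule used here (every matched author gets credit 1
# for each publication) with a fractional 1/n alternative. Column names are
# hypothetical; `pubs` has one row per (publication, matched author) pair.
import pandas as pd

pubs = pd.DataFrame({
    "pub_id":    ["p1", "p1", "p2"],
    "author_id": ["a1", "a2", "a1"],
    "n_authors": [2, 2, 1],          # total authors on the publication
    "school":    ["X", "Y", "X"],
})

# Full credit (our convention): each matched author counts the paper once.
full_credit = pubs.groupby("school")["pub_id"].count()

# Fractional credit (alternative): each author gets 1 / n_authors.
pubs["frac"] = 1.0 / pubs["n_authors"]
fractional_credit = pubs.groupby("school")["frac"].sum()

print(full_credit)        # X: 2, Y: 1
print(fractional_credit)  # X: 1.5, Y: 0.5
```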
Research-based rankings
Once faculty are matched, the Scopus database allows their individual publications to be extracted directly. The aggregate output of the 2496 individuals matched consists of 65,896 publications. Of these we focus on 52,369 items, comprising articles (46,820), books (1140), and book chapters (4409).Footnote 15 The matched publications go back to the 1970s, but over 90 percent are from 1982 or later, and more than 50 percent are from 2007 or later. This partly reflects the increasing coverage of Scopus in later years.
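As an illustration of this filtering step, the sketch below keeps the publication types we count and flags articles separately from books and chapters. The column name and the exact type labels are assumptions about how a Scopus extract might be stored, not a description of the Scopus schema.

```python
# Sketch of the publication-type filter (cf. footnote 15): keep journal
# articles and notes, plus books and book chapters; drop editorials, reviews,
# conference papers, errata, and so on. The "doc_type" column and the label
# strings below are assumptions about how the extract is stored.
import pandas as pd

ARTICLE_TYPES = {"Article", "Note"}
BOOK_TYPES = {"Book", "Book Chapter"}

def filter_publications(pubs: pd.DataFrame) -> pd.DataFrame:
    keep = pubs["doc_type"].isin(ARTICLE_TYPES | BOOK_TYPES)
    out = pubs.loc[keep].copy()
    out["is_article"] = out["doc_type"].isin(ARTICLE_TYPES)
    return out
```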
Table 3 (page 12) lists the 120 journals that appear most frequently in our publications data.Footnote 16 For each journal, the second column indicates the total number of articles matched to faculty in our data. The third column provides the SCImago Journal Rank (SJR), an indicator of journal quality based on the Scopus database. This metric “accounts for both the number of citations received by a journal and the importance or prestige of the journals the citations come from,” according to Elsevier.Footnote 17
To construct our school-level rankings, we aggregate the researcher-level results by school. As a basic quantity measure, we sum across all the faculty at a school to get the total number of publications. We do the same for the number of articles, and for the number of books or book chapters.
To incorporate quality, we take two approaches. First, we use total citation counts for a school’s research output, under the assumption that higher-quality work is cited more often. Second, we use the SJR metric to produce counts of quality-thresholded publications; in particular, we count the number of articles from each school in journals with SJRs above the 99th, 90th, and 50th percentiles.Footnote 18
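The aggregation just described can be summarized in a short sketch. It assumes an article-level table in which each row is a matched (article, faculty member) observation with the journal’s SJR and the article’s citation count already merged in; all column names are hypothetical.

```python
# Sketch of the school-level aggregation: total article counts, total
# citations, and counts of articles in journals above given SJR percentiles.
# Column names ("school", "article_id", "journal", "sjr", "cites") are
# hypothetical; percentile cutoffs are computed over the journals in the sample.
import pandas as pd

def school_totals(arts: pd.DataFrame) -> pd.DataFrame:
    journal_sjr = arts.drop_duplicates("journal")["sjr"]
    cut99, cut90, cut50 = journal_sjr.quantile([0.99, 0.90, 0.50])

    out = arts.groupby("school").agg(
        n_articles=("article_id", "count"),
        citations=("cites", "sum"),
    )
    for label, cut in [("sjr99", cut99), ("sjr90", cut90), ("sjr50", cut50)]:
        above = arts[arts["sjr"] > cut]
        out[label] = above.groupby("school")["article_id"].count()
    return out.fillna(0)

# A ranking on any measure is then, for example:
# ranks = school_totals(arts).rank(ascending=False, method="min").astype(int)
```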
As discussed in Sect. 2.3, faculty counts are not necessarily comparable across schools; for instance, they may include adjunct or visiting faculty members in some cases and not in others. Still, one may worry that variation in these quantity and quality measures could be driven wholly by differences in faculty sizes and have nothing to do with the distribution of quality or productivity of researchers at a school. To address this issue, we complement the aggregate measures with per-faculty performance. As the denominator in this ratio, we use the faculty count from column 3 of Table 2 (page 9). This column contains, for each school, the number of faculty members that were found in Scopus. Arguably, this number provides a rough approximation of the number of “research-oriented” faculty members who work at each school.
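A corresponding sketch of the per-faculty adjustment divides each school-level measure by the count of Scopus-matched faculty from column 3 of Table 2; again the variable names are hypothetical.

```python
# Sketch of the per-faculty adjustment: divide each school-level total by the
# number of faculty members found in Scopus (column 3 of Table 2). `totals` is
# a DataFrame of raw counts indexed by school; `matched_faculty` is a Series
# indexed the same way. Names are hypothetical.
import pandas as pd

def per_faculty_ranks(totals: pd.DataFrame, matched_faculty: pd.Series) -> pd.DataFrame:
    per = totals.div(matched_faculty, axis=0)   # align on the school index
    return per.rank(ascending=False, method="min").astype(int)
```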
It is important to note that our measure of “research” is limited to academic publications and therefore misses important components of research activity. These include the production of datasets, code repositories, partnerships with nonprofits or government agencies, consulting contracts, unpublished working papers, journalistic publications, and the support of other researchers and graduate students. This omission reflects data availability constraints.
Field-specific rankings
Table 3 (page 12) shows that there is significant diversity in the research undertaken by policy-school faculty, at least if one judges by the venues in which it appears. Journals associated with public administration loom large; for example, Public Administration Review is the periodical most frequently seen (573 observations) in our data, and Journal of Policy Analysis and Management is second, with 397 entries. Economics journals are also well-represented, with American Economic Review, Journal of Public Economics, and Quarterly Journal of Economics seeing more than 250 articles each. Political Science publications are prevalent too, with American Political Science Review and Journal of Politics each accounting for more than 150 articles in the sample. Finally, it is interesting that journals with a natural science emphasis also make an appearance, as this is an area of increasing visibility in public policy schools: Proceedings of the National Academy of Sciences of the United States of America accounts for more than 250 articles, while Science and Nature each account for more than 100 papers.
While our main rankings aggregate all journals together, we also provide a few field-specific rankings. This is partly because the within-field SJR journal rankings—notwithstanding the clear advantages of external generation and cross-field applicability—often do not comport with the opinions of researchers in each field. To address this issue we generated four additional sets of rankings that cover journals with a focus on: economics, natural sciences, political science, and public administration.
For each of these fields we consulted a small number of colleagues active in the area and produced between two and five groups of journals. We then rank schools according to the number of articles published in each group of journals. We emphasize that there is no unique way of doing this, nor one that would gain the consensus of every observer. An advantage is that we draw on expert judgment; a disadvantage is the subjectivity of the journal selection.
Based on our aggregation of colleagues’ input, we proceeded somewhat differently in each field. The fields, groups, and journals covered are listed in Table 4 (page 15), and the procedure for each field merits brief discussion.
In economics, Group A contains the five journals usually considered to publish the work with, on average, the highest quality. Group B includes all the journals in Group A plus an additional few considered somewhat less selective. Both groups A and B consist of “general interest” journals that appeal to audiences across different subfields, such as trade or labor economics. Group C includes all the journals in groups A and B, and adds highly rated subfield-specific journals such as the Journal of Econometrics or the Journal of Monetary Economics.
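For concreteness, the sketch below shows how the nested economics groups translate into per-group counts. Only a handful of journal titles (those named in the text) are included, so the sets are placeholders for the full lists in Table 4, and the column names are hypothetical.

```python
# Illustrative sketch of the nested economics journal groups and the per-group
# article counts. The journal sets below are placeholders built from titles
# named in the text; the actual group memberships are those of Table 4.
import pandas as pd

GROUP_A = {"American Economic Review", "Quarterly Journal of Economics"}          # top general-interest (subset)
GROUP_B = GROUP_A | {"Journal of Public Economics"}                                # A plus less selective (placeholder)
GROUP_C = GROUP_B | {"Journal of Econometrics", "Journal of Monetary Economics"}   # B plus field journals

def economics_counts(arts: pd.DataFrame) -> pd.DataFrame:
    # `arts` has one row per matched article, with hypothetical columns
    # "school" and "journal".
    out = pd.DataFrame(index=sorted(arts["school"].unique()))
    for label, journals in [("Group A", GROUP_A), ("Group B", GROUP_B), ("Group C", GROUP_C)]:
        counts = arts[arts["journal"].isin(journals)].groupby("school")["journal"].count()
        out[label] = counts
    return out.fillna(0).astype(int)
```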
For natural science, we use similar hierarchical groups; that is, every group contains the journals in the previous groups, with Group A containing the two most prestigious outlets. In this case, however, there are no subfield-specific periodicals. We omit these because there are many more of them and each appears relatively infrequently in our data.
In the case of political science, conversations with researchers in the field suggested that a hierarchical grouping like that constructed for the previous two fields is harder to achieve, reflecting the fact that different subfields tend to publish in different sets of periodicals. We therefore proceed by creating five groups of journals. First, there is a group we label General, containing two journals that most observers indicated appeal to researchers in essentially all subfields. Then, there are four groups that pertain to specific subfields: American politics, comparative politics, international relations, and political theory. To each of these we added three to seven further journal titles that cater more specifically to these subfields (although they do not always fit neatly in only one, and so we allowed repetition).
Finally, for public administration there are only two groups, organized in hierarchical order of quality as for economics and natural science. Again, we note that this set of journals is subjective and based on conversations with colleagues.
In the case of field-specific rankings we report only aggregate rankings; we do not attempt to construct rankings of output per faculty member. We cannot perform an adjustment for faculty size as done for the general rankings because we do not have information on faculty members’ stated areas of focus. For example, total publications in economics would ideally be adjusted by the total number of professors active in economics. This could perhaps be inferred from publication in field-specific journals, but Scopus does not provide authoritative allocations of journals to fields. One might use newly created field journal lists, like those in Table 4 (page 15), but we choose not to do so here because the resulting denominators swing widely depending on the specification. In addition, at least some faculty members have publication lists spanning a number of fields. For these reasons, we leave the calculation of per-faculty-member field output for future research.
Results
This section presents our results, beginning with the overall research rankings and then proceeding to field-specific variants.
Overall performance
Table 5 (Page 17) presents our base results, which refer to all areas of research. Each column reports a ranking of the schools based on the Scopus data, with a ranking of 1 corresponding to the highest ranked school among those in our sample. We first discuss what the different columns contain and then make some remarks on the results.
First, columns (A)–(C) refer to research quality as measured by the number of articles that faculty at each school have published in journals with SJRs above the 99th, 90th, and 50th percentiles, respectively. The schools are sorted by column (A), although that choice does not imply a preference for that particular ranking. Next, columns (D) and (E) count the total number of citations to school publications, with column (D) covering articles and column (E) covering books and chapters. Finally, columns (F) and (G) provide quantity (rather than quality) measures by simply counting the number of articles (F) and books/chapters (G), respectively.
The most immediate feature of Table 5 (Page 17) is that Princeton and Harvard come in first and second, respectively, in all rankings. But beyond the top two spots (and Columbia’s consistent placement of third or fourth), there is meaningful heterogeneity in schools’ performance across the different columns. The University of Chicago (Harris), the fourth-ranked school by column (A), places as low as 26th in column (G). In short, Chicago tends to do much better when the measure focuses on publication quality rather than quantity. Cornell University (Cornell Institute for Public Affairs) provides a case in the opposite direction. While it ranks 13th in terms of papers in journals with SJRs above the 99th percentile, it ranks third on the quantity measures (columns F and G).
In total, 17 schools place in the top ten on at least one criterion. In addition to those mentioned, we see Carnegie Mellon, Duke University (Sanford), George Mason University (School of Policy, Government, and International Affairs), Georgetown University (McCourt), Indiana University-Bloomington (School of Public and Environmental Affairs), the University of Michigan-Ann Arbor (Ford), the University of Minnesota-Twin Cities (Humphrey), NYU, Stanford University (Public Policy Program), Syracuse, the University of Southern California (Price), and the University of Wisconsin-Madison (La Follette).
Next we look at the set of rankings where research output is adjusted for faculty size. Each column of Table 6 (page 19) reports a ranking using the same performance measure as the correspondingly lettered column from Table 5 (Page 17). The only difference is that each school’s measure (number of articles written by a school’s faculty, for example) is divided by the number of faculty in that policy school. As discussed in Sect. 2.3, our faculty count is “research-oriented” in the sense that it excludes faculty for which we found no authored publications in Scopus.
For the top spots of column A, the faculty-adjusted rankings in Table 6 (page 19) are fairly similar to those in Table 5 (page 17). Princeton is still at the top of the list (although it moves down for some of the other columns). Next, Harvard is replaced by Chicago in the second-place spot, not just for column A (papers in journals with SJR above the 99th percentile, per faculty member found in Scopus), but also for several other measures. After Harvard, column A continues with Columbia, Minnesota–Twin Cities, the University of California-Berkeley (Goldman), Georgetown, Michigan, Stanford, and Duke. This top-ten list is overall quite similar to that from the previous table, consistent with the interpretation that variation in quality is not driven solely, or even mostly, by faculty size.
Looking at the other columns, we see that adjustment by faculty size also renders schools’ ranks more volatile. For example, while Harvard always ranked second in Table 5 (page 17), it now displays a ranking as low as 17th (for the quantity-of-articles measure). Similarly, while Columbia was consistently in the top four in Table 5, it now places as low as 14th. Arizona State now ranks first for the total number of articles and for the number of articles above the 50th percentile in SJR. Cornell gains the number-one spot for books and book chapters written. In this table, a total of 21 schools make the top ten on some measure.
Performance by area
The next set of tables looks at scholarship in four areas that are central to the work of many faculty members at public policy schools: economics, natural sciences, political science, and public administration. The remaining tables have a similar structure: each column shows a ranking that is a function of the total number of articles published in the journal groups given by Table 4 (page 15). The column labels refer to these groups, and in each case the first column is used to sort the list of schools.
Table 7 (page 20) presents the rankings for economics, where each column gives rankings corresponding to the number of articles published in each group, with A being the most prestigious set of journals.
Economics is a field with relative stability in rankings, particularly for the top-performing schools. For example, the four schools ranked highest (Harvard, Princeton, Columbia, and Chicago) rank exactly the same in every column. This consistency suggests that the precise allocation of specific journals to the three groups does not have a large impact on the rankings. In other words, faculty who publish successfully in the top general-interest journals (Group A) are also prolific with respect to field journals (Group C). The other schools that appear in the top ten by at least one of the economics rankings are: Stanford, Michigan, Duke, Georgetown, Cornell, Berkeley, and Syracuse.
Table 8 (Page 22) presents analogous results for natural sciences, with the ordering again done according to the schools’ rank in Group A. The top three schools in this case are Princeton, Harvard, and Columbia. The fourth-ranked school, Minnesota, makes an especially big leap in this category relative to the other fields (22nd in economics and 39th in political science). The remaining schools that appear in the top ten in at least one column are Chicago, Cornell, Duke, George Mason, Indiana-Bloomington, Michigan, NYU, and Stanford.
Table 9 (Page 23) moves to analyzing schools’ performance in political science. Recall that in this case the columns are somewhat different from those that concerned economics or natural science. The first column ranks schools by the number of articles that have appeared in four general interest journals. The remaining columns refer to journals that are more (although in some cases not exclusively) focused on the subfields of American politics, comparative politics, international politics, and political philosophy. The schools are ordered according to their performance in the first column.
Princeton ranks first, followed by Stanford and the University of Georgia (School of Public and International Affairs), which rises considerably here relative to its standing in other fields (26th in economics, 22nd in natural sciences). Outside the top two schools there is also variation in subfield-specific performance. For example, Georgia ranks third in American politics, whereas Harvard does so in political philosophy, and Columbia in comparative and international politics. The remaining schools that rank in the top ten in at least one subfield are American University (School of Public Affairs), the University of Arizona (School of Government and Public Policy), Chicago, Duke, Michigan, NYU, the University of Virginia (Batten), and Syracuse.
Finally, Table 10 (Page 25) presents results for public administration journals. In this case there are only two groups of journals, and schools’ performance is relatively stable across the columns. The salient point is that this ranking of schools is rather different from that prevailing in essentially all of the above tables. Here the top five schools are Indiana-Bloomington, American, Georgia, Wisconsin-Madison, and NYU. The only one of these schools to have appeared in a top five before is Georgia (for political science). The stark difference between this and other areas is also seen in the fact that Princeton, often ranked first or second in the previous tables, now appears among the bottom five. Chicago and Columbia likewise score low relative to their earlier performance. In short, by our measures there is little overlap in schools’ research performance between public administration and several other fields, and hence between the public administration and the aggregate rankings (Table 5, page 17).
Finally, the list of the remaining schools to appear in the top ten in at least one of the Public Administration groups also includes several not seen in this category before. The list is: Florida State, George Washington University (Trachtenberg), the University of Kansas (School of Public Affairs & Administration), Ohio State University (Glenn), Rutgers-Newark (School of Public Affairs and Administration), USC, and Syracuse.
Conclusion
This paper has presented research-based rankings of public policy schools. Two main conclusions emerge, one relevant to all the rankings and one relevant to the field-specific ones. First, on average our results differ considerably from previously available rankings, such as those published by U.S. News & World Report. To the extent that some market participants are interested in research publication performance, our simple exercise adds information. For example, some deans may be interested in research output per se; advanced students interested in economics or political science instruction might desire some measure of faculty expertise in these areas.
Second, the rankings across the four specific fields we explored display notable differences, with the one relative to public administration an outlier. This again suggests that different rankings will likely better satisfy the demands of different participants. For example, a student or journalist looking for expertise in public administration might seek out different schools than one concerned with the intersection of public policy and natural science. Further, academic leaders and faculty hiring committees might use these rankings to guide recruitment as they address areas of need.
With respect to either set of results, our hope is that the policy-school market will benefit from greater data availability. Among higher education sectors in the U.S., public policy is one for which comparatively little information has been available. Part of this benefit comes from the fact that, as mentioned, universities produce different outputs: to some buyers they deliver instruction and peer quality; to others, extension services. We add more information on one specific (but important) product: published academic research.
Finally, further refinements of research-based rankings seem desirable. For example, robust per-faculty or per-student research output figures would be useful to at least some market participants. Generating these might require further guidelines to ensure uniformity of the criteria going into faculty or student counts. Such efforts might be coordinated via groups like NASPAA (the Network of Schools of Public Policy, Affairs, and Administration) or APPAM (the Association for Public Policy Analysis and Management), which could establish uniform faculty-reporting criteria.
Notes
For example, Chetty et al. (2017) rank colleges according to different measures of their ability to produce income mobility. Diversity of rankings may also reflect that educational institutions use many inputs to produce multiple outputs.
Namely, these are the Academic Ranking of World Universities (ARWU), which is found at: http://www.shanghairanking.com/ARWU2016.html, and the CWTS (Centre for Science and Technology Studies) ranking, found at: http://www.leidenranking.com/ranking/2017/list.
The USNWR public affairs ranking is at: https://www.usnews.com/best-graduate-schools/top-public-affairs-schools/public-affairs-rankings. Perhaps reflecting its increasing popularity, the ranking has recently moved from being updated every few years to being updated annually.
This is produced by Foreign Policy and the Teaching, Research, and International Policy project, based on about 1600 survey responses, http://foreignpolicy.com/2015/02/03/top-twenty-five-schools-international-relations/.
The method used to produce the USNWR ranking is described at https://www.usnews.com/education/best-graduate-schools/articles/public-affairs-schools-methodology. Technically, USNWR ranks “public affairs” rather than “public policy” schools, but our sense is that most scholars in the field treat these terms interchangeably. We emphasize that USNWR technically ranks an educational degree program, rather than the school, so our school-focused research ranking takes a different approach.
There have been several reported instances of universities misreporting the data used to produce rankings, suggesting that at least some institutions consider their ranking to be a high-stakes outcome. The Washington Post, for instance, reviews five such episodes at: https://www.washingtonpost.com/local/education/five-colleges-misreported-data-to-us-news-raising-concerns-about-rankings-reputation/2013/02/06/cb437876-6b17-11e2-af53-7b2b2a7510a8_story.html?utm_term=.f6223370e2df.
Bikhchandani et al. (1992).
More specifically, USNWR reports a result of “Rank not Published” for 104 schools. The methodology provided simply states that: “Rank Not Published means that U.S. News calculated a numerical rank for that program but decided for editorial reasons not to publish it. U.S. News will supply programs listed as Rank Not Published with their numerical ranks if they submit a request following the procedures listed in the Information for School Officials.” Further, USNWR (at least in the output presented online) does not report the number of responses received for each school. It is possible that the result for even some schools with a score above two may be based on few responses. This may also contribute to the volatility in the observed rankings.
In many cases the schools in Table 1 (page 6) cover both public and international affairs. In a few cases, however, such schools are separate within a university, and only one is referred to in the table. For example, in the case of Georgetown, column 1 indicates the McCourt School of Public Policy but not the Walsh School of Foreign Service. Similar issues arise at the universities of Kentucky and Washington. We are thankful to Richard Betts for this observation. Where these instances arise in Table 1, they originate in the list of schools provided by the USNWR ranking. Since that is our starting sample, we opted not to correct the issue. We do note, however, that in our rankings this will tend to penalize the institutions concerned. Relatedly, a few schools (such as UNC) have both a public affairs and a public policy department; our list of faculty comes from both departments, although USNWR excludes the latter.
The faculty count reported by NASPAA (the Network of Schools of Public Policy, Affairs, and Administration) is narrower and yields about 1500 fewer faculty members.
We did not systematically download or check C.V.s. We did that only in a few cases while developing the matching procedure.
Specifically, under articles we consider publications in journals that Scopus classifies as either articles, articles in press, or notes. The types of publications we leave out of consideration include abstract reports (2), articles in press (545), conference papers (3706), conference reviews (2), editorials (1960), errata (281), letters (913), reports (4), reviews (5097), and short surveys (439).
This is a small subset of the 6054 journals with at least one matched entry. There are 471 journals with 20 or more entries.
Specifically, “Citations are weighted depending on whether they come from a journal with a high or low SJR.” See https://www.elsevier.com/connect/with-2013-journal-rankings-no-one-metric-rules-them-all for more information. As an additional journal ranking, we computed importance in the realm of policy schools by multiplying the SJR by the number of hits among our list of faculty. The top ten journals for policy researchers on this metric are Quarterly Journal of Economics, Nature, American Economic Review, Science, Proceedings of the National Academy of Sciences of the United States of America, Journal of Public Administration Research and Theory, Public Administration Review, American Political Science Review, Health Affairs, and New England Journal of Medicine.
In addition, it is possible to match the journals to the Web of Science “Impact factor,” the citation rate over a given time period for an average article in a given journal. In our sample the correlation between the Impact factor and SJR metric is 0.86. We note that an important factor determining variation in quantity and quality is the number of available pages for papers—which varies significantly across journals. We follow the previous literature in not using this information.
References
Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100(5), 992–1026.
Chetty, R., Friedman, J. N., Saez, E., Turner, N., & Yagan, D. (2017). Mobility report cards: the role of colleges in intergenerational mobility. Mimeo: Stanford University.
Hoekstra, M. (2009). The effect of attending the flagship state university on earnings: a discontinuity-based approach. Review of Economics and Statistics, 91(4), 717–724.
MacLeod, W. B., & Urquiola, M. (2015). Reputation and school competition. American Economic Review, 105(11), 3471–3488.
MacLeod, W. B., Riehl, E., Saavedra, J. E., & Urquiola, M. (2017). The big sort: college reputation and labor market outcomes. American Economic Journal: Applied Economics, 9(3), 223–261.
Saavedra, J. (2009). The learning and early labor market effects of college quality: a regression discontinuity analysis. Mimeo: Harvard University.
Williams, A. M., Slagle, D. R., & Wilson, D. (2014). Ranking universities for scholarship in public administration research. Journal of Public Affairs Education, 20(3), 393–412.
Zimmerman, S. (2016). Making the one percent: The role of elite universities and elite peers. National Bureau of Economic Research Working Paper No. 22900.
Acknowledgements
Open access funding provided by Swiss Federal Institute of Technology Zurich.
For useful comments we are grateful to Scott Barrett, Richard Betts, Steven Cohen, Page Fortna, Merit Janow, Wojciech Kopczuk, Bentley MacLeod, Dan McIntyre, Victoria Murillo, Justin Phillips, Cristian Pop-Eleches, Wolfram Schlenker, Joshua Simon, and Eric Verhoogen. For excellent research assistance we thank Kaatje Greenberg, Sanat Kapur, and Vu-Anh Phung. All remaining errors are our own.