Ranking university innovation: a critical history

  • Adam R. Nelson
Original Paper

Abstract

Noting that university rankings are not new, this article reviews the effects of more than a century of higher education ranking schemes. It notes that a pursuit of high rankings has led not to “innovation” but rather to “imitation” and institutional conformity. It uses three US education thinkers—Thorstein Veblen, John Dewey, and Joseph Jastrow—to show how rankings drove American universities to pursue international publicity and profit rather than local service to the public good. Significantly, the result of a competition for external rankings was a loss of internal academic creativity and autonomy. The article concludes with two proposals, or thought experiments, concerning alternative rankings strategies.

Keywords

University ranking · Marketization of higher education · Profit motive

Introduction

In recent years, university rankings and league tables have become a global phenomenon, with literally dozens of rankings promising to identify the best universities and the characteristics that make them “world-class” research and teaching institutions.

Some of the latest rankings have been designed (at least ostensibly) to measure university “innovation,” or universities’ contributions to national economic development, often in terms of the commercialization of ideas through patents and technology transfer.

In this essay, I’d like to offer an historical perspective on the rise of university rankings in the USA over the last hundred years or so. University rankings are not new, and their function has been remarkably consistent over time.

Anyone wishing to create a new ranking system—particularly a ranking of university innovation—might want to know how such rankings have worked in the past. What have they sought to measure? Have they succeeded or failed in promoting university innovation?

To answer these questions, I’d like to start with the American historian Laurence Veysey, whose famous book, The Emergence of the American University (1965), asked how the pursuit of rankings—or the publicity associated with rankings—affected institutions of higher education.

In his words: “The highly competitive struggle for reputation in which the new American universities indulged had important consequences for the style of their development” (Veysey, p. 330). More often than not, he argued, the pursuit of high rankings led not to “innovation” but rather to “imitation.”

Veysey noted that, after Johns Hopkins was established in 1876, Harvard copied its new form of postgraduate education, and when Stanford opened in 1891, the University of California at Berkeley almost immediately copied its new approach to applied science.

Some would call this innovation, yet, before long, many other major universities such as Illinois, Michigan, and Wisconsin fell into line. The general result was not university innovation, but replication—not institutional diversity but conformity.

Veysey was critical of this outcome. He believed that, if possible, universities should be unique and should foster intellectual flexibility. Instead, the rise of comparative (and increasingly competitive) rankings fostered what he called intellectual “timidity.”

“[I]n contrast to the argument which would see competition as a basic cause of academic creativity,” he wrote, “as the American universities became more intensely competitive—in the 1890s and after—they became more standardized [and] less original” (Veysey, p. 330).

Before long, he noted, many universities were afraid to implement truly new or unusual programs. “Bidding constantly against one’s neighbors for prestige,” he explained, universities “soon found limits placed upon their freedom to implement unusual or experimental ideas” (Veysey, p. 332).

In the 1890s, the rise of rankings to facilitate comparisons between universities had undermined institutional diversity and led to “blind imitation” (Veysey, p. 331). Veysey felt this effect was detrimental to American higher education over the long run.

“Before 1890,” he noted, “there had been academic programs which differed markedly from one another…. Johns Hopkins, Cornell, and, in their way, Yale and Princeton, had stood for distinct educational alternatives” (Veysey, p. 339).

But the rise of rankings had ended this period of creative invention. “During the [1890s,]… the American academic establishment lost its freedom,” Veysey concluded. “To succeed in building a major university, one now had to conform to the standard structural pattern” (Veysey, p. 340).

Here was the negative effect of rankings. Increasingly, universities competed with each other for prestige as measured by rankings, and this competition led to conformity. As Veysey put it, “All contenders for high institutional honor had to follow the prescribed mode” (Veysey, p. 340).

This comment was revealing, for it was during the era Veysey described that American universities rose to prominence on the world stage. It was as if the pursuit of global recognition had prompted American universities to replace their original creativity with a new conformity.

Veysey was not, of course, the first to note the negative effect of rankings. A half-century earlier, the American sociologist Thorstein Veblen took up the same topic in his book, The Higher Learning in America: A Memorandum on the Conduct of Universities by Businessmen (1918).

Veblen, like Veysey, considered the influence of the University of Chicago and Stanford, the former originally known for its unique emphasis on advanced scholarship in the humanities, the latter for its “pursuit of scientific knowledge and serviceability” (Veblen, p. 265).

Unfortunately, as Veysey noted, the establishment of these new institutions had coincided with (perhaps even spurred) the rise of university rankings, and before long the pressures of institutional comparison led both Chicago and Stanford to abandon creativity in favor of conformity.

Veblen blamed the pressures of conformity on universities’ pursuit of “publicity.” While the 1890s had no formal league tables based on specific ranking methodologies, several national magazines published university comparisons that functioned very much like modern rankings.

According to Veblen, the real purpose of university rankings was not to foster innovative research or service to society but rather to facilitate publicity for its own sake. He felt both Chicago and Stanford had sacrificed their original commitment to public service for raw publicity.

“[I]n both cases” he wrote, “the policy of forceful initiative and innovation—with which both alike entered on the enterprise—presently yielded to a ubiquitous craving for… publicity; until… it became the rule of policy to stick at nothing but appearances” (Veblen, p. 265).

Veblen went on to note that, as early as the 1890s, when American universities first sought to claim a place on the world stage, they became preoccupied—some might say obsessed—with publicizing their achievements to others around the world.

They wanted recognition, so they embraced rankings. It was a fateful bargain, because, before long, universities like Chicago and Stanford focused more on narrow rankings than on the general innovations that had made them famous in the first place.

For example, it was during this era that Stanford “began publishing a list of its faculty’s [publication] output for the year in its annual reports. Clearly, it had become a necessity, from the administrator’s point of view, to foster the prestige-ful evidences of original inquiry” (Veblen, p. 177).

According to Veblen, the pursuit of rankings quickly became its own end. To gain recognition on the global stage, universities directed funds toward research projects that were likely to win publicity, even if such research produced only brief flashes of recognition.

Of course, most research garnered publicity only until a new ranking methodology came along to shift the focus of popular attention—at which point university leaders simply redirected their resources to meet the expectations of the new ranking system.

Veblen’s point was that a pursuit of rankings led universities to shift their focus away from internally (or locally) determined priorities toward externally (or globally) determined criteria of success. Rankings did not foster internal creativity; they fostered external control.

Slowly but surely, this shift toward external control undermined universities’ autonomy. Rather than deciding for themselves what to research or teach, universities allowed ranking systems—often foreign ranking systems and “statistics”—to make these decisions for them.

In this way, American universities became more internationalized even as they became more disconnected from local concerns and constituents. They became more recognized for their statistical achievements… but they also became less autonomous.

Veblen, writing in 1918, held that a pursuit of rankings posed a grave risk to universities, because the rankings’ focus on institutional comparability and conformity transformed scholarly work into standardized industrial labor. “The result,” he argued, was “inefficiency” and “waste” (Veblen, p. 117).

According to Veblen, scholars are most efficient when they are free to pursue their own ideas and direct their own work. If they are required to submit to standardized metrics imposed by external rankings, they become less creative, less efficient, and less innovative.

Put simply, rankings made faculty more like factory workers. “[T]he system of scholastic accountancy,” Veblen wrote, “has already reduced the relevant items to such standard units and thorough equivalence as should make a system of piece-wages almost a matter of course” (Veblen, pp. 117–118).

His industrial metaphor extended beyond the standardization of faculty labor to include the commodification of the research ideas that faculty produced. One way to get better rankings (and thus more publicity), he noted, was to seek patents for scientific research.

Veblen noted that some of the best publicity came to universities whose faculty patented research in the medical sciences. Patent medicines had the double virtue of making profit for the university and giving the impression that a university served the public good.

Even in 1918, universities rarely conducted research on medicines that were unlikely to be profitable, even when such medicines would have improved public health. The real aim was not to improve health but rather to increase profits… and publicity.

Veblen, for his part, was disgusted by this pursuit of profits and publicity. These aims, he said, were “useful for purposes of competitive gain to the businessman,… not for serviceability to the community at large” (Veblen, p. 136).

While some asserted that patents spurred innovation by securing monopoly rights for the university and thus protecting its research investments, Veblen said patents did more to enrich universities than society in general. They led universities to hide knowledge rather than share it.

So, were patents a good indicator of university “innovation”? Veblen thought not. “These things… rather hinder than help the cause of learning,” he noted, “in that they divert attention and effort from scholarly workmanship to statistics and salesmanship” (Veblen, p. 137).

Veblen said that publicizing university patents was a form of commercial advertising, just like university rankings themselves. They may have fostered academic competition, but did they foster the common good?

Veblen was skeptical. In his words, any gain that accrued to a university by securing a patent was merely “a differential gain in competition with rival seats of learning, not a gain to the republic of learning or to the academic community at large” (Veblen, p. 137).

A century ago, Veblen saw the role that rankings played in university management, and he confronted anyone who sought to create a new ranking system with a fundamental question: should universities allow rankings to set their agenda for research and teaching?

To answer this question, I’d like to look back to American philosopher John Dewey, who took up this problem in his book, A Theory of Valuation, in 1939. Dewey had long worried about the ways in which rankings led universities to seek publicity and profit more than social service.

As early as 1902, he wrote that American universities were “ranked[, first and foremost,] by their obvious material prosperity, until the atmosphere of money-getting and money-spending hides from view the [real academic] interests for the sake of which money alone has a place” (Dewey, “Academic Freedom,” p. 63).

Dewey wanted universities to reclaim their autonomy—to assert their own values. His Theory of Valuation began with the idea that any theory of valuation should be a theory of social action, or a theory about how to make decisions to guide behavior.

His question—a philosophical question that applied not only to individuals but also to institutions—was not simply What is valuable? but, rather, What should our judgments about what is valuable lead us to do?

In other words, for Dewey, any process of university evaluation or ranking necessarily entailed prior judgments about what universities should accomplish in the world—that is, what their members believed would improve the overall quality of social life (Dewey, Theory of Valuation).

So, we might ask: what do today’s university rankings hope to accomplish in the world? What do they believe will improve the overall quality of social life? Here is a list of some of the criteria that guide current global rankings of university innovation:
  • SCI/SSCI publications and citations (QS World University Rankings and Shanghai Jiaotong ARWU)

  • Nature/Science publications and citations, and recruitment of Nobel Prize/Fields Medal winners (Shanghai Jiaotong ARWU)

  • Patent counts (Times Higher Education World University Rankings, Thomson Reuters University Rankings, SCIMAGO rankings)

  • Research grants (Times Higher Education World University Rankings)

  • “Industry income” (Times Higher Education World University Rankings)

  • “Cross-over faculty” co-employed by corporate industries (Thomson Reuters University Rankings, SCIMAGO rankings)

  • International student/faculty recruitment (QS World University Rankings, Times Higher Education World University Rankings)

  • Internet/web publicity (SCIMAGO rankings)

We might ask: will a larger number of university websites improve the overall quality of social life, or will it mostly represent a new form of university publicity? Will the recruitment of international students or faculty improve local welfare in both sending and receiving countries, or will it simply benefit a global elite?

Interestingly, over a century ago, the journal Science took a very different approach to evaluating American higher education. Publishing a series of papers from a conference in 1906, the journal was very critical of contemporary university rankings.

Consider this article by the famous University of Wisconsin professor Joseph Jastrow. In his essay, “The Academic Career as Affected by Administration,” Jastrow lamented the ways in which university rankings had distracted scholars from truly innovative teaching and research to serve local citizens.

He pointed to professors whose devotion to “statistically measured success” caused them to “degenerate” into “a decidedly ‘business’ frame of mind,” one that led not to genuinely valuable activities but simply to a drive “to advertise his wares and advance his trade, eager for new markets…” (Jastrow, p. 568).

Jastrow called the university’s narrow quest for recognition through rankings a form of academic “drift.” As he put it, a “crippling sense of accountability” to rankings was “likely to crowd out… almost every other serious purpose” (Jastrow, p. 568).

In other words, rankings actually hindered innovation, because they led to standardization and commercialization. “There is at work among American universities a… desire for each to measure its own work by [external] standards of tangible material success,” he wrote (Jastrow, p. 570).

Jastrow told Science readers that a pursuit of rankings led scholars to seek institutional profit more than social benefit or intellectual progress. Often, departments sought to “outrank… other departments” in the same university just to get better rankings and, thus, a bigger share of institutional resources (Jastrow, p. 571).

The result was not institutional development but rather dysfunction. “Worst of all,” Jastrow wrote, the single-minded pursuit of rankings “contaminates the academic atmosphere so that all life and inspiration go out of it—or would, if the professor’s ideals did not serve as a protecting aegis” (Jastrow, p. 571).

Jastrow, like Dewey, said the real standard of success in a university should not be profit but, rather, social well-being. For this reason, he said, “It is against this false standard [of money] that the warfare [against university rankings] must be actively directed” (Jastrow, p. 571).

Of course, Jastrow lost this battle. In 2018—even more than in 1906—universities have shifted their criteria of evaluation from internal to external actors… from the judgments of the faculty to the latest ranking methodologies.

The result is that universities have external metrics but no internal mission; they perform evaluations but lack values; they are preoccupied with measures of global prestige even as they are perceived to be increasingly estranged from the public good.

Jastrow anticipated this drift—this loss of autonomy—more than a century ago, when the University of Wisconsin achieved high global rankings but was accused of being disconnected from local concerns and constituents.

Now, as then, Wisconsin politicians often criticize the university’s high ranking, because they fear that a global university will neglect local students or local development. And today, as in Jastrow’s day, the university has trouble answering this criticism.

Jastrow, like Dewey, Veblen, and Veysey, worried that American universities had lost their capacity to articulate their own local or internal values and merely accepted direction from external “others”—in most cases, from national or international ranking agencies.

The result was not innovation but imitation, not creativity but conformity, not service to society but mere publicity and profit. As the American university became more preoccupied with its global rankings, it became more standardized—more stagnant. It lost its autonomy.

Today’s university administrators, like those a century ago, ask how we should evaluate university innovation. I imagine that, if Joseph Jastrow were here today, he might suggest that universities seeking to promote genuine social welfare should resist evaluating innovation at all.

Such resistance seemed impossible to Jastrow (1906), just as it seems impossible today. But the consequences of conformity are clear. When universities leave the process of evaluation to standardized rankings, they lose their intellectual vigor and social vitality.

If universities are satisfied with the pursuit of publicity and profit, they should continue to pursue standardized global rankings. But if they are not satisfied with these criteria, they should look for new metrics of success and insist that universities cannot be evaluated by money alone.

In the meantime, I would like to conclude with a provocative suggestion for contemporary university ranking schemes. If, as I argue, the underlying purpose of these schemes is to generate publicity and profit for ranking organizations themselves, then a new approach may be in order.

In the USA, the publicity and profit motive behind today’s rankings is evident in the annual U.S. News & World Report college and graduate school rankings. The presumption behind these rankings is that universities will pursue policies and practices to compete for high marks.

Another presumption behind these rankings is that competition between universities will benefit the higher education market overall by lowering prices and/or improving quality, because student (or parent) “consumers” with more complete “product” information can be expected to choose the “best” universities.

Efficient product choices require free consumer access to fully disclosed information, but U.S. News & World Report does not provide full access to its rankings; instead, the magazine requires consumers to pay for complete university data. It seeks profit for itself.

Since most ranking publications have similar profit models, I propose two different ways to distribute these profits in ways that make clear to student (or parent) consumers the underlying commercial (rather than social or intellectual) motivations behind modern university rankings.

One proposal is to require universities to pay ranking organizations for the privilege of participating in a ranking from which they expect material benefit. The other is to require every ranking organization to pay universities for the risks (and/or rewards) of participating.

Under my first proposal, universities making a rational choice will pay to participate in a ranking scheme only if they believe its methodology will favor them. Put simply, universities will elect to pay ranking organizations for the chance of a favorable score relative to other universities.

Then, in turn, ranking organizations will share the profits they derive from their schemes, distributing money to universities in the order of their rank. Thus, for a participating university, a higher rank would yield a higher return (or larger share of the profits).

With this system, consumers (students and parents) would readily deduce that any given university’s motive for participating in a given ranking scheme would be not only to receive a high ranking but also to gain a larger share of the profit from that scheme.

Some might worry that such a scheme would be susceptible to pay-to-play manipulation, with universities paying larger amounts to receive higher rankings, but the capacity of wealthier universities to “buy” higher rankings is already endemic.

This system would expose the correlation between resources and rankings. If students and parents use rankings simply to purchase the prestige associated with rank (as considerable evidence suggests they do), then more transparency on this point would be welcome.

Moreover, under this plan, any rational university would join only the ranking scheme whose methodology was likely to give it a high score. The consequence would be clear: over time, each ranking would receive fewer and fewer participants as lower-scoring universities opted out.

Slowly but surely, consumers would come to understand that universities participate in a given ranking only if they believe its methodology is favorable. Eventually, the public’s trust in the rankings would decline, and they would collapse under the weight of their own uselessness.

Some might say that universities will be unwilling to pay ranking organizations for the chance of a high rating. If so, then a different approach might be preferable. Perhaps the ranking organizations should be required to pay universities based on the potential risk of a low rating.

Since all ranking schemes involve possible risks or rewards for participating universities, and since participation is voluntary, it seems only fair that ranking organizations should pay universities for the harm that comes from low ratings.

Just as pharmaceutical companies compensate participants in drug trials based on the potential risks they endure, so too ranking organizations should compensate universities for the potential harm inflicted by their various methodological “experiments.”

Under this proposal, ranking organizations would be required to share profits from their ranking publications with participating universities, with payments made in reverse rank order: the lower a university’s ranking, the greater the harm, and hence the greater the compensation.

With this plan, a rational university that agreed to participate in a ranking would have at least the consolation of proportional compensation for any risks incurred (or, conversely, any reward derived from the publicity of a methodology that happens to favor that university).
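To make the arithmetic of these two thought experiments concrete, here is a minimal sketch of how a ranking publication’s profits might be divided among participants: in rank order under my first proposal, or in reverse rank order under the second. The university names, the profit figure, and the linear weighting below are illustrative assumptions of mine; the proposals themselves specify only the ordering of payments, not a particular formula.

```python
# Hypothetical sketch of the two profit-sharing thought experiments.
# The essay specifies only that payouts follow rank order (proposal 1)
# or reverse rank order (proposal 2); the linear weighting used here is
# an illustrative assumption, not part of the proposals themselves.

def share_profits(ranked_universities, total_profit, reverse=False):
    """Split a ranking publication's profit across participating universities.

    ranked_universities: names ordered from rank 1 (highest) downward.
    reverse=False -> proposal 1: the higher the rank, the larger the share.
    reverse=True  -> proposal 2: the lower the rank, the larger the share
                     (compensation for the harm of a low rating).
    """
    n = len(ranked_universities)
    # Linear weights: rank 1 gets weight n and rank n gets weight 1,
    # or the opposite when compensating in reverse rank order.
    weights = [n - i if not reverse else i + 1 for i in range(n)]
    total_weight = sum(weights)
    return {
        name: round(total_profit * w / total_weight, 2)
        for name, w in zip(ranked_universities, weights)
    }


if __name__ == "__main__":
    participants = ["University A", "University B", "University C", "University D"]
    print(share_profits(participants, total_profit=1_000_000))                # proposal 1
    print(share_profits(participants, total_profit=1_000_000, reverse=True))  # proposal 2
```

Either rule would make the commercial stakes of participation visible at a glance, which is precisely the point of both proposals.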

The effects of this approach would resemble those of my first proposal. Universities would choose to submit to “high-risk” ranking schemes only when sufficient compensation was offered. The greater the risk of a low ranking, the greater a university’s demand for compensation.

The results of this strategy would be obvious: gradually, each ranking organization would receive fewer and fewer participants as universities came to understand the risks associated with its ranking methodology.

If only “willing” volunteers join, their numbers would gradually shrink as universities look for schemes that either (a) favor them with the reward of a high ranking or (b) pay them a significant monetary compensation for enduring the shame of a low ranking.

Soon, the rankings organizations would have to pay unsustainably high premiums for participants and would collapse under the weight of their own profit models, wherein ranking organizations collect money in part by harming the reputations of poorly rated universities.

Each of these proposals—universities pay ranking organizations or ranking organizations pay universities—rests on the principle that rankings do much to generate profits for ranking organizations but do little to improve universities.

Once this profit motive is exposed, consumers will see, first, that universities participate in a given ranking only if they anticipate favorable scores and, second, that while “global” university rankings help a few highly rated institutions, they do not help (and may harm) many others.

Perhaps a time has come for a radically new approach to global university “performance” metrics—one that simply accepts rankings for what they have always been: vehicles for publicity and profit.

Acknowledgments

This paper was presented in April 2017 at the “International Conference on University Innovation and Evaluation” held at Zhejiang University. The author would like to thank the conference organizers and forum participants for their helpful comments.

References

  1. Dewey, J. (1939). Theory of valuation. Chicago: University of Chicago Press.
  2. Dewey, J. (1976). Academic freedom. Educational Review 23 (1902), 1–14. Reprinted in Jo Ann Boydston (Ed.), John Dewey: The Middle Works, 1899–1924, Volume 2: 1902–1903. Carbondale, IL: Southern Illinois University Press.
  3. Jastrow, J. (1906). The academic career as affected by administration. Science (April 13, 1906).
  4. Veblen, T. (1918). The higher learning in America: A memorandum on the conduct of universities by businessmen. New York: B.W. Huebsch.
  5. Veysey, L. R. (1965). The emergence of the American university. Chicago: University of Chicago Press.

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Department of Educational Policy Studies, University of Wisconsin, Madison, USA