Introduction: The Cognitive Economy of Gestalt Shifts

The ‘normative’ dimension of my version of social epistemology flies under the flag of ‘knowledge policy’, a phrase that has always carried strong economic overtones (e.g. Fuller 1988: part 4; Fuller 2002, 2015: chap. 1). In this respect, I have walked in the footsteps of Charles Sanders Peirce, who invented a field called the ‘economics of research’, and of his latter-day follower Nicholas Rescher (1978, 2006). However, it would be a mistake to think that regarding knowledge in economic terms is merely a late nineteenth-century innovation. On the contrary, ‘economy’ in the sense of efficiency has been endemic to the Western pursuit of knowledge from its inception. Indeed, both Plato and Aristotle were interested in ‘explaining the most by the least’, though they interpreted this principle in rather opposing ways, which have served to determine the subsequent history of philosophy.

Plato interpreted the principle as calling for a unified understanding of reality, very much in the spirit of physicists who continue to seek a ‘Grand Unified Theory of Everything’. This meant that the diversity of phenomena that we normally experience constitutes an inefficient understanding of reality that must be ‘resolved’ in some fashion, say, in terms of the outworking of the laws of nature under specific conditions. Such laws are in character quite unlike the phenomena experienced because they range over not only actual but also possible states of the world. Thus, what we normally see as necessary features of the world—‘structures of the lifeworld’, as Alfred Schutz might say—turn out to be, under closer and deeper inspection, features that are contingent on certain prior opportunities and decisions—ones that perhaps faced some cosmic intelligence.

Plato and his immediate followers sought to grasp this play of hidden forces through the faculty of nous in a manner that is still associated with mathematical intuition and thought experiments. However, starting in the Middle Ages, these mental manipulations were increasingly performed, not in one’s head but on the world itself, in what we now call ‘experiments’ in the proper sense. Accompanying this development was a more specified understanding of the Platonic quest for unity, namely, that the test of our knowledge of the underlying laws is that we can use them to simulate the aspects of reality that we wish to understand—and quite possibly improve upon. This is what historians of science call the ‘mechanical world-view’, and it survives in all fields where model-building is taken seriously as a route to knowledge. For today’s descendants of Plato, the phenomena of the world are the outputs of some executed cosmic computer programme, in terms of which scientific inquiry amounts to reverse engineering or even hacking.

In contrast, Aristotle saw the diversity of reality as more directly indicative of nature’s efficiency. In that case, relatively minimal cognitive adjustment on our own part is required to be optimally attuned to reality. What philosophers nowadays call the ‘correspondence theory of truth’, first expressed by Aristotle’s staunchest medieval champion Thomas Aquinas, derives from this tradition. Indeed, Aquinas’ own formulation—veritas est adaequatio intellectus ad rem (‘truth is the alignment of the mind to the thing’)—suggests that such a state of mind could arise with relatively little deliberate effort. In that case, science is not about the search for hidden laws that are far from our normal way of seeing things. Rather, it is a glorified pattern recognition exercise whereby we come to see the various patterns which together constitute the world. The path from perception to cognition on the Aristotelian view is rather shorter than on the Platonic view.

In the modern era, this position has often been touted as ‘common sense’ realism, a more sophisticated and contemporary version of which is Nancy Cartwright’s (1999) ‘patchwork’ scientific ontology. Over the centuries, the sorts of hierarchies that represent ‘natural order’ in biological classification systems have most consistently expressed Aristotle’s sense of cognitive efficiency, insofar as the relationships between ‘orders of being’ are based on morphological resemblances—that is, how organisms appear to the eye. In this context, today’s controversy in taxonomy over whether morphological resemblance should yield to genetic overlap as the basis for organizing species in the ‘tree of life’ marks a Platonic challenge to a field traditionally dominated by Aristotelian sensibilities, as it would allow two similar-looking species to have been brought about by radically different genetic means (Wilson 2004). In effect, the taxonomic judgements of the field biologist (aka Aristotelian) would have to defer to those of the lab biologist (aka Platonist).

The Plato-Aristotle dispute over cognitive economy has been more central to Western intellectual history than is normally acknowledged. Here we should focus on the role of ‘substitution’ in both logic and economics, which has engendered the expression, ‘functional equivalence’. For the founder of modern symbolic logic (and arguably analytic philosophy), Gottlob Frege, the ‘Morning Star’ and ‘Evening Star’ are functionally equivalent because both can be used to refer to the planet Venus, but under somewhat different conditions. And similarly, two qualitatively different goods are functionally equivalent in the market if a price can be agreed to exchange them. The difference between the two situations is simply that the former involves a presumed identity, whereas the latter requires that the identity be constructed on the spot. (But keep in mind that there had to be an original ‘spot’ or ‘spots’ in which people realized that the Morning Star and Evening Star were the same entity.) In both cases, ‘truth’ or ‘reality’ is about the reduction of apparent possibilities. This is the Platonic stance in a nutshell, and Plato’s own concern was over who should take that decision—the one or the many. In the future, the Turing test is likely to be the proving ground for this sensibility, as artificial intelligence-based intellectual work raises the question of what the ‘added value’ of being human is (Fuller 2019). In effect, if something ‘not-human’ passes as ‘human’, then the Platonist would welcome this as educating us that what it means to be ‘human’ does not require what we had previously thought (in terms of substratum). In contrast, an Aristotelian would regard the mistaking of a computer for a human as simply a misjudgement, since the ‘human’ is intimately tied to what we normally take to be ‘human’.
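
Frege’s point about substitution can be put in a short illustrative sketch (in Python, purely for exposition). The dictionaries and names below are assumptions introduced for demonstration, not a formal semantics.

```python
# A toy model of Fregean substitution. The dictionaries below are
# illustrative assumptions, not a formal semantics.

reference = {
    'Morning Star': 'Venus',
    'Evening Star': 'Venus',
}

sense = {
    'Morning Star': 'brightest object in the dawn sky',
    'Evening Star': 'brightest object in the dusk sky',
}

def functionally_equivalent(name_a: str, name_b: str) -> bool:
    """Two names are substitutable salva veritate in extensional
    contexts when they share a referent."""
    return reference[name_a] == reference[name_b]

# Both names pick out Venus, so substitution preserves truth-value ...
assert functionally_equivalent('Morning Star', 'Evening Star')

# ... yet their conditions of use differ, which is why the identity
# had to be discovered empirically rather than merely stipulated.
assert sense['Morning Star'] != sense['Evening Star']
```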

Gestalt psychology provides an interesting lens through which to see this difference between Platonic and Aristotelian understandings of cognitive efficiency, one that was first brought to the surface by the social psychologist Kurt Lewin (1931) and later taken up by Edmund Husserl (1936/1954). (In both cases, the Platonic position is called ‘Galilean’.) The very idea that we tend to see the world as ‘always already’ patterned would suggest an Aristotelian sensibility, were it not for the fact that the pattern we see can be so easily manipulated depending on the context of perception, which in turn suggests a Platonic sensibility. Thus, in a ‘Gestalt shift’ experiment, we may start by seeing an ambiguous figure as a duck but end up seeing it as a rabbit, yet at both moments, the image appears as an ‘efficient’ representation of reality, both in terms of directness of perception and comprehension of experience (i.e. the phenomena are ‘saved’). Aristotle may explain the experience of the subject, but Plato explains the behaviour of the experimenter. Unsurprisingly perhaps, in the mid-twentieth century, the Gestalt shift was a popular means—used by, among others, Ludwig Wittgenstein, Russell Hanson and Thomas Kuhn—to explain conceptual change, especially in science (Fuller 2018: chap. 6).

The meta-lesson of Gestalt psychology is that your perception of the world is rendered more changeable once you change your understanding of how that perception was brought about. This insight has made Gestalt psychology an endless fount of ideas for propaganda and advertising. It has also been used to explain how the people behind the early modern ‘Scientific Revolution’ came to shift from an Aristotelian to a Copernican (aka Platonic) world-view: that is, from the standpoint of the Earth to the standpoint of the Heavens. It involved thinking in radically different terms about the relationship between what we experience and what we know. In effect, these original scientists understood reality from the standpoint of the Gestalt experimenter rather than the Gestalt subject—where the experimenter was a proxy for a cosmic intelligence, aka ‘The Mind of God’. Optics was perhaps the main site of contestation for trying to explain how our senses filter reality, which the mind then actively reconstructs (Crombie 1996: chap. 16). To this day, philosophy frames most ‘problems of knowledge’ in terms of the higher order interpretation of visual data.

Rentiership as the Exercise of Modal Power

Now what does all this have to do with rentiership as an economic feature of academia? Let us start with an observation that comes from the history of technology but is more generally applicable to any strategic decision-making, including knowledge policy. For any path-breaking innovation, such as the automobile, there are usually at the outset several competing prototypes, with various strengths and weaknesses. However, over time, one becomes dominant and then sets the pace for the rest. This suggests two complementary concepts: opportunity cost and path dependence. An opportunity cost consists in alternative states of the world that are made less likely if not impossible as a result of a decision taken—such as the market’s decision to back Ford’s way of doing cars in the early twentieth century. Path dependence refers to the efficiency gains that result from any such decision, as infrastructures develop to reinforce its efficacy, removing the original alternatives from serious consideration in the future. In the case of Ford, relatively low prices and simple design trump concerns about human safety and environmental protection. These anti-Fordist concerns only resurface a half-century later, once the Ford-anchored automotive market has stabilized. By that time, they have become ‘negative externalities’ that need to be ‘managed’, but more in the spirit of an exception to a rule rather than a game changer.

At stake in opportunity costs and path dependence is what I call modal power, the manipulation of intuitions about what is possible, impossible, necessary and contingent. I regard modal power as the cornerstone of the ‘post-truth condition’ (Fuller 2018: chap. 2). Opportunity costs look at modal power from Plato’s second-order standpoint, as the logicians say, while path dependence sees it from Aristotle’s first-order perspective. Classical political economy’s default ‘free market’ mentality—especially its assumption that more competitive markets are ‘freer’—may be seen as part of a concerted second-order attack on rentiership as a first-order phenomenon, whereby rentiership is understood to be the power that accrues to sheer possession, by whatever means it was brought about and to whatever ends it might serve. While Marx correctly identified the classical political economists as capitalism’s original house theorists, their support of private property was focussed mainly on the opportunities for investment that it provided. In that respect, for them, venture capitalism is ‘capitalism’ in its purest sense. In classical political economy, land was valued not as a power base for its owner who could then create bottlenecks in the flow of capital, but as a platform for launching any number of projects from which all those involved in the transaction might benefit.

It is worth pausing momentarily to consider the peculiar—some might say alchemical—metaphysics that underwrites this alliance of scientific Platonists and free market capitalists whom I have portrayed as being so vehemently opposed to the modal power embodied in rentiership. A commonplace of the modern economic world-view is that humans harbour infinite longings but are constrained by limited resources. Such is the fate of spiritual beings trapped in a material world—at least that would have been the gloss given by Joseph Priestley, William Paley and Thomas Malthus, who were among several radical natural theologians who contributed to the foundations of classical political economy in the late eighteenth and early nineteenth centuries. Nevertheless, and this is the main point, even these natural theologians recognized that humanity had already managed to substantially improve its lot over that of other animals. They differed amongst themselves over the terms on which one might speak of ‘limits to growth’, but they agreed that the ‘Industrial Revolution’ dawning in their midst promised much overall growth for the foreseeable future.

Was this apparent human capacity to generate ever greater wealth in the face of absolute scarcity an illusion or reflective of some strategy that we had implicitly discovered to overcome our material limitations, if not our species finitude altogether? Adam Smith already started the ball rolling by suggesting that the secret lay in the rational organization of labour. A half-century later, Count Saint-Simon repaid the compliment by coining the word ‘socialism’ for the policy of governing all of society on this basis. The difference between Smith and Saint-Simon was that the former believed that people left to their own devices amongst themselves—without the imposition of legal restrictions on land transfers and labour mobility—could reorganize peacefully and profitably, whereas Saint-Simon thought that this required a special expertise—so-called captains of industry, professional organizers of humanity, whom nowadays we might associate with ‘knowledge managers’ (Fuller 2002).

Notwithstanding their well-rehearsed differences, the founders of capitalism and socialism shared the belief that the way out of human finitude was to disembed people from their traditional social relations and re-embed them in contexts that made the most productive use of their talents, effectively releasing their hidden energies. This way of looking at people amounts to entire societies undergoing a Gestalt shift. In other words, people’s capacities remain constant across the shift but there is a productivity gain from ‘before’ to ‘after’ the shift as those capacities come to be more fully ‘exploited’ (a term that acquires negative connotations only after Marx). In Gestalt psychology terms, the ‘figure’ remains the same but the ‘ground’ has changed. Put crudely, when a slovenly serf is shifted from the field to the factory, he or she becomes a productive worker. Agriculture provided the model for this way of thinking: the starting shot for the Industrial Revolution was fired when the first person saw a relatively undisturbed part of nature as ‘raw materials’ for human use.

An implication of speaking about modern societies as ‘dynamic’ is that they try to minimize the opportunity costs of their members having been born a certain way—that is, at a particular time and place, to a specific family, with certain capacities, etc. They make people more ‘shiftable’ in the Gestalt sense. That someone was born of a family of serfs does not mean that he or she must remain a serf forever—and hence in an indefinite state of low productivity. One can become more productive than that—and thereby provide greater overall benefit—under the right conditions. To be sure, this leaves open how exactly those conditions are to obtain, a question to which capitalists and socialists provide competing answers. In either case, change is in the cards, with path dependence cast as the ever-present foe, as enslavement to one’s birth morphs into the slavery of routinized labour or the slavery of Big Brother. The phrases ‘creative destruction’ and ‘permanent revolution’ are vivid expressions of the contrasting capitalist and socialist antidotes to path dependence. The shared mindset is one for which the word ‘protean’ was coined.

This general line of thought gets complicated in the second half of the nineteenth century, as thermodynamics adds nuance to the idea of nature’s scarcity. It is no longer simply about the inherent finitude of material reality, which might have resulted from the Biblically fallen state of humanity—the hidden Calvinist stick to prod humans into self-improvement. In addition, our very efforts to reorganize matter to increase productivity also raise the level of material scarcity, now understood in terms of reducing the amount of ‘free energy’ available in the universe. This opened up two radically opposed horizons: on the one hand, pessimists who believe that the irreversibility of this loss of free energy—aka entropy—means that all our efforts are wasted in the long term; on the other, optimists who believe that the secret to beating this tendency—or at least indefinitely delaying the inevitable—is to become more efficient (Georgescu-Roegen 1971; Rabinbach 1990).

The latter option involves striving to do more with less, which the theologically inclined might associate with our heroic striving to mimic God’s original position of creating everything out of nothing (creatio ex nihilo), the ultimate feat of efficiency, which in secular guise is carried forward as ‘transhumanism’ (Fuller and Lipinska 2014: chap. 2). But it also inspired more downsized expressions in discussions of political economy. The drive to minimize ‘transaction costs’ is one of the more creative responses, especially as an economic argument for ‘institutions’ as agencies whose existence transcends that of the particular agents involved in any set of market exchanges. As originally formulated by Ronald Coase (1937), institutions are designed to anticipate and mitigate costs so that trade can flow without substantial interference—that is, more ‘freely’ than it might otherwise.

And so began the ‘law and economics’ movement, which measures human progress in terms of minimizing the costs of harm without necessarily preventing the harm itself. For example, if there is an overall benefit to my building a house even though it could destroy the value of adjacent land, then I should compensate the affected parties in advance, so as to avoid later complaints that could waste still more time, effort and money of all the parties concerned (Calabresi and Melamed 1972). Of course, such a scenario supposes that some superordinate party—typically a judge—takes a decision to optimize over all the parties’ interests, which are fungible with respect to the prospect of financial compensation. The bottom line is that everyone has got their price when it comes to neutralizing harms, and the only question is how to find it (Ripstein 2006: chap. 8).
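
The logic can be shown with a toy calculation. The figures below are hypothetical and serve only to make the arithmetic of the compensation rule visible.

```python
# A toy illustration of the liability-rule logic sketched above.
# All figures are hypothetical and serve only to show the arithmetic.

benefit_to_builder = 100_000   # value created by building the house
harm_to_neighbour = 30_000     # destroyed value of the adjacent land

def efficient_to_proceed(benefit: float, harm: float) -> bool:
    """The project goes ahead if the gain exceeds the harm, with the
    harm compensated in advance rather than prohibited outright."""
    return benefit > harm

if efficient_to_proceed(benefit_to_builder, harm_to_neighbour):
    compensation = harm_to_neighbour   # the neighbour is 'made whole'
    net_social_gain = benefit_to_builder - harm_to_neighbour
    # The judge optimizes over all parties at once: every stake is
    # treated as fungible with money, so a net gain of 70,000
    # licenses the harm instead of blocking the project.
    print(f'Proceed, paying {compensation}; society nets {net_social_gain}')
```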

While for many this is an intuitively harsh principle of justice, it is nevertheless future-oriented rather than past-oriented. There is no presumption in favour of retaining the status quo if it potentially blocks a better future for all concerned—at least in the aggregate sense of ‘all’. Those who stand in the way of progress should be willing to ‘cash out’ their stakes under the right conditions. And in a world innocent of the long-term effects of environmental degradation, it was easy to see how such a policy could hasten the conversion of farms to factories, resulting in the rentiers yielding to the capitalists as the dominant economic force. But it would be a mistake to understand this mentality as limited to ‘political economy’ as conventionally understood. It also extends to ‘knowledge production’. In this context, I will unpack the economic definition of a public good to show that it implies a similar hostility to the conservative bias embodied in rentiership.

Plagiarism and the Problem of Academic Knowledge as a Public Good

For economists, something counts as a public good if it would cost more to restrict than to permit access to it. The availability of such a good is typically hard to restrict by its very nature, and increased access to it would probably serve to increase society’s overall wealth and well-being. An implication is that restricting access to the good in question means lower productivity. Interestingly, such considerations have historically persuaded judges to rule in favour of freeing markets, regardless of which party’s interests actually benefit—as in the anti-monopoly rulings in the US Progressive Era (Fried 1998). This is because the judges have thought of markets as ultimately about information, which by nature flows and spreads (Kitch 1981). It is an idea that Stewart Brand has popularized for the Silicon Valley set with the slogan, ‘Information wants to be free’, and one which in recent years the economist Philip Mirowski has demonized as the thin edge of the wedge for neoliberalism to colonize the academy (Mirowski and Nik-Khah 2017).
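
Stated as a bare decision rule, the definition looks as follows; the numbers are invented and only the comparison matters.

```python
# A toy rendering of the economist's cost test stated above.
# The numbers are invented; only the comparison between them matters.

cost_of_restricting = 50_000   # fences, tolls, enforcement, foregone trades
cost_of_permitting = 10_000    # congestion and upkeep under open access

def is_public_good(restrict_cost: float, permit_cost: float) -> bool:
    """A good counts as 'public' when exclusion costs more than
    open access does."""
    return restrict_cost > permit_cost

assert is_public_good(cost_of_restricting, cost_of_permitting)
```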

For capitalists and socialists alike, notably David Ricardo and Karl Marx respectively, public goods in the sense of goods presumed to be free are diametrically opposed to rent, which given our earlier analysis can be understood as the economists’ conception of evil, as it replaces efficient use with its two polar opposites—on the one hand, sheer idleness (i.e. non-use); on the other, excessive effort (i.e. high-cost use). Thus, starting with Coase, the law and economics movement has turned the eradication of this evil into a positive principle of justice by disincentivizing the rentier’s tendency to charge high tariffs to restrict the access of others who might use their property more productively. Such universal hostility to rentiership also explains the instantly positive worldwide reception of Thomas Piketty’s (2014) Capital in the Twenty-First Century. That book focusses specifically on the role of inherited and unearned wealth as the most persistent source of inequality in societies across the world.

Some of course have gone farther than Ricardo and Piketty—but perhaps not Marx—to declare that the institution of strong private property rights is itself the economic version of Original Sin, as it creates the legal conditions for the restriction of resource use by the bare fact that what is mine is by definition not yours. Without private property, nothing would be rentable, and hence information flow would not be blocked at all. This anarcho-communist position, which traces its roots back to Jean-Jacques Rousseau and Pierre-Joseph Proudhon, should be kept in mind when we turn to contemporary academia’s obsession with plagiarism.

Of course, ‘rentiers’ do not present themselves that way at all. They see themselves as protecting an asset whose value might otherwise degrade from unmonitored use (Birch 2017, 2019). Thus, the landowners whom Ricardo and Marx held in contempt for impeding human productivity are reasonably seen as proto-environmentalists for the resistance they provided to factory building on their property. This issue of ‘quality control’, which Garrett Hardin (1968) made vivid to economists as the ‘tragedy of the commons’, recurs in academia through the idea of ‘gatekeeping’, which originally referred to the tolls used to channel traffic through privately owned lands. The term was then repurposed by Kurt Lewin in the 1940s for the filtering of messages in a mass media environment. And nowadays ‘gatekeeping’ is routinely used to describe the ‘peer review’ processes that govern the evaluation and publication of academic research.

It is worth lingering here over the idea of ‘quality control’. It is a fundamentally uneconomic concept by virtue of assuming that the value of the good in question is intrinsic rather than extrinsic. More to the point, ‘intrinsic value’ means continuing to respect already known qualities of the good. Thus, the proto-environmentalist landowners in the early Industrial Revolution presumed that their property possessed a value—say, associated with its specific physical constitution or cultural context—that could not be exchanged for anything else, which in turn justified the high tariff levied on any prospective user. These landowners did not see themselves as exploiting their relative advantage but as performing a hereditary role as stewards of the Earth. A more democratic version of this line of reasoning is familiar today in terms of the allegedly irreducible benefits of ‘natural’ over ‘artificial’ food ingredients, which informs the largely European resistance to the introduction of ‘genetically modified’ organisms into agriculture. To his credit, Scruton (2012) locates this mentality at the heart of ‘Conservatism’ as a political ideology, notwithstanding the more Left-sounding rhetoric of today’s self-styled ‘eco-warriors’.

However, economics is by nature a ‘Liberal’ science, and from that standpoint such ‘Conservative’ arguments for quality control look like attempts to discourage the flow of capital by making it harder for competitors to enter the market. After all, the lifeblood of capital is the prospect that anything that is currently done can be done more efficiently and to greater effect—and quite possibly by someone other than the current owners of the means of production. The agent of change remains an open question: it may or may not be the current owners. The problem is that the current owners may not be motivated to step up to the plate. In that context, appeals to ‘intrinsic value’ simply grant ideological licence for rents to be added as an artificial layer of scarcity on top of nature’s scarcity to limit humanity’s ability to rise to the economic challenge.

A sign that academia has become more protective of its own rentier tendencies is its increasing obsession with plagiarism (Fuller 2016: 44–46). Plagiarism is ultimately about syntax fetishism, the heart of copyright, which confers intellectual property rights on the first utterance of a particular string of words or symbols, even though it could have been uttered by any other grammatically competent person under the right circumstances. (At least that is how Chomsky would put the matter.) The mystified legal expression for this privileging of the first utterance is ‘authorship’, which was subject to much criticism, deconstruction and even scorn in the 1960s and 1970s, especially in France (Barthes, Foucault, Derrida). Nevertheless, automated plagiarism detectors such as ‘Turnitin’, through which students nowadays must submit their papers prior to academic evaluation, uphold the syntax fetishism associated with ‘authorship’ in a principled way that only machines, and not humans, can.
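
The syntactic character of such detection can be seen in a minimal sketch. This is not Turnitin’s proprietary algorithm, but a generic n-gram overlap measure of the kind commonly used for string matching, offered purely as an illustration.

```python
# A generic n-gram overlap measure, offered as an illustration of
# purely syntactic matching; it is not Turnitin's actual algorithm.

def ngrams(text: str, n: int = 5) -> set:
    """Collect all word-level n-grams from a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str, n: int = 5) -> float:
    """Share of the submission's n-grams that also occur in the source."""
    sub, src = ngrams(submission, n), ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

# The score rises with identical strings of symbols, regardless of
# whether the surrounding context gives those strings a new meaning
# or function: the 'syntax fetishism' at issue above.
```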

Unfortunately, no credible philosophy of language supports the policy of projecting semantic value from syntactic originality. The meaning of any string of words or symbols is ultimately up to the context of use, which inter alia depends on the other strings in which it is embedded, what Derrida would call its ‘intertextuality’. This even applies to students who cut and paste, say, bits of Kant into essays that are presented as ‘original work’. To be sure, there are intellectual grounds on which such efforts should fail. But they are less to do with the failure to acknowledge sources than simply the failure to meet the demands of the assignment. Indeed, this should be the basis on which the teacher is bothered by ‘plagiarism’ in the first place—namely, the inappropriateness of the cut-and-pasted bits of Kant to the topic that she has asked the student to address. However, a student capable of cutting-and-pasting Kant into their texts such that the teacher takes it to be part of a good answer to an exam or essay question—even if the teacher does not realize that Kant himself originally said it—deserves praise, not condemnation.

Praise is due not merely because the student has outfoxed the teacher at her own game. That sort of response remains beholden to the syntax fetishism that fuels the academic taboo on plagiarism. Rather, praise is due because the student has demonstrated good judgement with regard to what sort of thing belongs in what sort of place—in this case, the bits of Kant that end up in a highly graded essay. If any ‘originality’ is involved, it consists in the ability to pour old wine into new bottles such that it tastes different, if not better. (Recall the Borges (1939/1994) short story, ‘Pierre Menard, Author of the Quixote’.) What we normally call ‘intelligence’ or ‘aptitude’ is primarily about that capacity, which has nothing to do with some ultimate sense of originality. Indeed, this strategic deployment of plagiarism underwrites the justification of multiple-choice tests, where the value of one’s knowledge is presumed to be a function of its display in context, not its origin. Indeed, we underestimate the extent to which we ‘always already’ live in the world of the Turing test, where the cheaters and chancers cannot be distinguished from the geniuses on a principled basis. Drawing such distinctions requires the additional work of embedding test performance in larger narratives about how the test takers came to manifest such knowledge, on the basis of which the value of their performance might then be amplified or discounted, resulting in categories like ‘cheater’, ‘chancer’ and ‘genius’.

Learning from Plagiarism: Knowledge as an Artworld

This anti-proprietary approach to plagiarism recalls the ‘artworld’ theory of aesthetics developed by Arthur Danto (1964), who stumbled upon it after wondering what made Andy Warhol’s ‘Brillo Box’ a work of art. After all, it would be easy to regard the box on display in an art gallery as Warhol’s cheap attempt to profit in violation of the ‘Brillo’ trademark (‘Brillo’ is a US brand of scouring pad). Of course, most aestheticians would recoil from such a crass judgement, even if they would not rate Warhol’s effort very highly. Yet that crass judgement does approximate contemporary academic attitudes to plagiarism. The student, like Warhol, would seem to be brazenly appropriating someone else’s intellectual property for their own purposes. Danto’s own approach was to say, first, that art is a matter of seeing something as art and, second, that this perception requires a context to recognize the work as art. In short, we need to inhabit an ‘artworld’.

It is perhaps difficult for epistemologists to comprehend that we might normally inhabit an artworld: life as one big exhibition. It will become easier in the future. The currently fashionable way of speaking about this is in terms of our possibly living in a ‘simulation’ (Bostrom 2003). The uncanniness of this proposal rests on the Cartesian assumption that someone other than ourselves—perhaps an evil demon or the ‘Matrix’—is running the simulation in which we live. But that assumption is really not necessary. Humanity may be routinely running different simulations on itself as history is imperceptibly revised to reconfigure future horizons. (In historiography, this is called ‘revisionism’, often cast in a pejorative light.) In Gestalt terms, we need to imagine that the same figures (from the past) are reorganized against a new ground (i.e. a renewed horizon for the future) to constitute a new perceptual whole—what Danto himself called the ‘transfiguration of the commonplace’. The previously suppressed comes to be salient—and vice versa. Inspired by Freud, Jacques Derrida (1978) likened the activity involved in this transfiguration to the writing on a palimpsest. He wasn’t far off the mark. A similar sensibility is also expressed in Eagleman (2009), a work inspired by recent research in neuroscience.

Plato’s anamnesis, the knowledge that comes from recalling what had been forgotten, provides a prototype for this kind of activity. However, the relevant sense of ‘memory’ is not the strict mental recovery of a determinate past but rather a liberalization of our thought about how the past might determine the future. Once again, it may simply involve a rearrangement of what is already known to form a new whole. Thus, Warhol did not uncover the hidden truth about a box of Brillo—that it was an artwork masquerading as a household product. Rather, he saw the box’s potential to be reconfigured differently and thereby perform a different sort of function—and in so doing, added value to the Brillo box.

The same principle underlies Kuhn’s (1962/1970) influential conception of ‘scientific revolution’. Even after the great paradigm shift from classical to relativistic physics, the achievements of Copernicus, Galileo, Newton and Maxwell did not disappear, but they did appear in a different light, which altered their significance vis-à-vis the new future horizon of physics. Nelson Goodman (1955) interestingly encapsulated the epistemology of the situation as the ‘new riddle of induction’, namely, how the same past can logically imply radically alternative futures. In Kuhn’s case, the alternatives were that classical physics carries on regardless of its anomalies or that physics shifts to a new foundation in relativity theory. After 1905 the physics community finally took the latter course, after having followed the former course for the previous 200 years. For his part, Goodman presented a more abstract example. He posited two competing properties—‘green’ and ‘grue’—that emeralds might possess. Both can account for the green colour of emeralds observed before a stipulated time, but they diverge thereafter, one predicting that the next emerald will appear green, the other that it will appear blue. While it may be obvious which hypothesis will prevail, we really do not know until the prediction happens. But suppose the emerald does turn out to confirm the ‘grue’ hypothesis: what follows?
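
Before taking up that question, it may help to render Goodman’s construction in a minimal sketch, with the cutoff year a stipulation introduced here for illustration.

```python
# A minimal rendering of Goodman's 'grue' predicate. The cutoff
# year T is a stipulation introduced purely for illustration.

T = 2030  # hypothetical time after which the predicates diverge

def green(year_observed: int) -> str:
    return 'green'  # emeralds look green whenever observed

def grue(year_observed: int) -> str:
    return 'green' if year_observed < T else 'blue'

# Both hypotheses fit every observation made so far ...
assert all(green(y) == grue(y) for y in range(1600, 2024))

# ... yet they diverge on the first emerald examined at or after T,
# which is why past data alone cannot decide between them.
assert green(T) != grue(T)
```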

We might simply say that the test emerald was an outlier, an exception that proves the rule—or, we might say that emeralds had been ‘grue’ all along and we had been deceived by their superficially green appearance. The latter judgement would licence a rewriting of the past. That rewriting would be associated with our now having come to see something ‘deeper’ about the emeralds than we previously had. This is precisely how a scientific revolution is portrayed by those who back the winning hypothesis: the ‘crucial experiment’, as Popper would put it, that pits ‘green’ against ‘grue’ triggers the deeper realization that appearances are not all that they had seemed. Philosophers of science have yet to adequately answer whether the historical succession of these ‘deeper realizations’, this trail of ‘grue’ events we call ‘revolutions’, displays an overall sense of purpose that amounts to a definitive growth in knowledge. Is it genuine scientific progress—or is that conclusion just a figment of the narrative imagination, what Kuhn called science’s ‘Orwellian’ historiography, which uses history to justify current policies? Kant famously counselled agnosticism about all such ‘teleological’ judgements, introducing the concept of ‘purposiveness’, the mere appearance of purpose. But that may turn out to be ‘purpose’ enough, at least for purposes of promoting knowledge production.

The Historical Rise and Long-term Maintenance of Academic Rentiership

The rise of automated plagiarism detectors has nudged academics toward treating knowledge as intellectual property more than they otherwise might or should do. I have gone further to suggest that knowledge would be better served by an artworld than by the rentiership model that underwrites the fixation on plagiarism. However, academic rentiership is not simply a reactionary response to contemporary pressures from an increasingly sceptical public for whom ‘Don’t trust the experts!’ has proved to be an effective political rallying cry. Indeed, if academics were more concerned about spreading ideas than rewarding authors, plagiarism would not be the moral panic that it is today. But of course, academia is not simply about efficiently producing knowledge as a public good but also about properly crediting the producers. However, these two goals cut against each other, resulting in the rather tortured path-dependent ways in which academics are forced to make their knowledge claims in the professional literature, namely, by citing ‘proper’ precursors, which is the functional equivalent of paying rent to maintain one’s standing in a domain of inquiry. This ongoing need to publicize other authors ends up making academic knowledge claims more arcane than they might otherwise be, given that in most cases, one could reach similar conclusions by citing fewer authors.

The template for intellectual proprietarianism had already been set in the seventeenth century in the Charter of the Royal Society of London, according to which an agreed self-selecting and self-organizing group of members would decide on what counts as valid knowledge without government interference. In effect, the Crown gave the Society exclusive rights over the certification of public knowledge as long as they kept their own house in order. Accordingly, individuals would submit their knowledge claims to ‘peer review’, in return for which they would be able to claim limited intellectual property rights as a ‘discoverer’. This setup put a premium on being first, and latecomers would be expected either to build upon or work around predecessors, as they all staked out a common ‘domain of inquiry’ (Fuller 2013). This conception of knowledge informs Kuhn’s (1962/1970) account of puzzle solving in ‘normal science’. Thus, open conflict among scientists has increasingly focussed on priority disputes, which resemble the sorting out of competing entitlement claims to a piece of land (Collins and Restivo 1983).

In the long term, this arrangement changed the character of referencing knowledge claims and claimants. What might be called a ‘Westphalian system’ of epistemic authority emerged, with the aim of establishing the jurisdiction of every claim and claimant to knowledge. From this arrangement came the mutual recognition exercise—some might say mutual protection racket—that is associated with ‘expertise’, which is reflected in today’s credit-assigning academic referencing practices, aka ‘citation’. Meanwhile an older referencing convention—the footnote—has lost favour. It had often been used to openly contest the epistemic authority of rivals, which in turn suggested a sphere of inquiry with multiple overlapping jurisdictions always potentially at odds with each other—in short, a ‘pre-Westphalian’ world of knowledge (Grafton 1997).

One way to tell the history of academia would be in terms of alternating waves of rentiership and anti-rentiership. The onset of rentiership may be generally associated with the rise of ‘schools of thought’ whose authority is maintained by fidelity to a set of ‘original sources’ rather than innovation away from them. I refer, of course, to the commentary culture surrounding Plato, Aristotle, the Bible and their theological commentators in the Middle Ages. Spaces were introduced in the margins of books as new frontiers for claiming intellectual property rights, which others would then need to incorporate in their own books. This was the original context in which the famous quote normally attributed to Newton—‘If I have seen as far as I have, it is because I have stood on the shoulders of giants’—was first uttered in the twelfth century (Merton 1965). The balance of power between old and new academic rentiers gradually shifted over the centuries as marginalia first became footnotes and then were completely internalized as citations embedded in the aspiring rentier’s text.

To be sure, less display—and arguably mastery—of past work is now required to advance one’s own knowledge claims, if one is already in the academic knowledge system. Citations, properly arranged, function as currency that one pays to be granted a lease on a staked-out piece of intellectual property. Admittedly, that constitutes a triumph of ‘efficiency’ in a certain sense—but only for those who have paid the high entry costs (e.g. doctoral training) involved in knowing which works to cite. This in turn makes it harder for those who have not followed that particular path of inquiry to make sense of, let alone evaluate, the knowledge claims in question. Contrary to its own hype, academic knowledge is less a ‘public good’ than a club good (Fuller 2016: chap. 2). In a club, several forms of rent are paid, starting with the submission of oneself to background checks and sometimes examination, followed by periodically renewed subscriptions, alongside the expectation that, once accepted, one will provide preferential treatment to other club members in the wider society. The acquisition and maintenance of academic expertise follows a very similar pattern.

Indeed, the technological infrastructure of academic knowledge production has played a diabolical role in rendering the system more club-like. In the roughly half a millennium that has passed between the introduction of the printing press and the Internet, it has become easier for those already in the know to get direct access to the knowledge they need to advance their claims, while at the same time keeping out those who have not yet paid the high entry costs. The relative ease with which books and journals have been made available means that each new knowledge producer needs to reproduce less of the work of their predecessors in their own texts—a citation and a verbal label will often suffice. This allows a quickening of the pace of academic knowledge production, a point that was already in evidence by the time of the Scientific Revolution, less than two centuries after Gutenberg (Eisenstein 1979).

Until universal literacy became an official aspiration of nation-states in the late nineteenth century, the education of the general public was clearly lagging behind the advances of the emerging knowledge elites. Indeed, as we have seen, Saint-Simon’s original ‘socialism’ justified this new form of social distancing in the name of a more rational organization of society. It had certainly contributed to the Enlightenment’s self-understanding as a ‘progressive’ and ‘vanguard’ movement, which laid the groundwork for the modern culture of expertise, with Positivism as its house philosophy. Indeed, the philosophers generally supported the establishment of free-standing research institutes that trained future front-line knowledge producers—but remained separate from the ordinary university teaching which prepared the traditional administrative professions: the clergy, law and medicine. Its legacy remains in the academic cultures of such countries as France and Russia, where one normally has a dual appointment—one for teaching and one for research, each operating quite independently of the other.

It is against this historical context that one can begin to appreciate the radically anti-rentier cast of Humboldt’s early nineteenth-century proposal to reconceptualise the academic vocation in terms of the integration of research and teaching in the same person called an ‘academic’. But before turning to academia’s historic tendencies against rentiership, let me close by noting that even today, the communicative efficiency of the academic referencing culture serves to erect higher intellectual trade barriers between academia and what has become a generally educated public. It is too bad that this point is not made more forcefully in the ongoing debates concerning ‘open access’ academic journals, which proceed on the false premise that the primary obstacles to the public’s utilization of academic knowledge are the prices charged by publishers. This version of ‘open access’ primarily opens the door for financially disadvantaged academics to improve their own chances at epistemic rentiership. It does not remove the club-like character of academic knowledge production.

The Swings against Academic Rentiership from the Renaissance to Humboldt

The swing away from rentiership in academia had already begun in the sixteenth-century Renaissance. In effect, the ‘Humanists’ reverse engineered and thereby undermined the entitlement structure of medieval scholastic authority by showing that the received interpretations of Christendom’s canonical texts were based on false translations. (It is worth recalling that the scholastics read nearly everything in Latin translation, an epistemic condition comparable to Anglophone academia today.) But two diametrically opposed conclusions were then drawn. Those who remained loyal to Roman Catholicism tended to adopt the relatively relaxed if not cynical attitude of ‘Traduttore, traditore’ (To translate is to betray), while Protestant dissenters adopted the more earnest posture of trying to find the original meaning of these texts. This led to a variety of signature modern epistemic moves, ranging from more first-hand acquaintance with authoritative sources to the search for corroborating testimony, or ‘evidence’, as we would say today, for the knowledge claims made by those sources. This sparked considerable innovation in the independent testing of knowledge claims, which became the hallmark of the ‘scientific method’.

Indeed, Francis Bacon made it quite clear that an important—if not the most important—reason to submit knowledge claims to independent evaluation was to short-circuit the trail of commentary, whereby academics simply build or elaborate on misinterpretations of sources that may themselves not be very reliable. Bacon’s rather ‘post-truth’ way of referring to the problem was in terms of the ‘Idol of the Theatre’, a phrase that evokes the image of academics mindlessly playing the same game simply because everyone else does. In his magisterial survey of global intellectual change, Randall Collins (1998) observed that the periodic drive by intellectuals to level society’s epistemic playing field by returning to ‘roots’, ‘sources’, ‘phenomena’, ‘foundations’ or ‘first principles’ tends to happen at times when the authority of the academic establishment has been weakened because its normal political and economic protections have also been weakened. This in turn undermines a precondition for the legitimacy of academic rentiership, namely, an acceptance by non-academics of the self-justifying—or ‘autonomous’, in that specific sense—nature of the entitlement structure of academic knowledge production.

The Renaissance exemplifies what the turn against rentiership looks like in action: charges of ‘corruption’ loom large, most immediately directed at academics violating their fiduciary responsibilities as knowledge custodians by transmuting the public’s trust into free licence on their own part. This is the stuff of which ‘research fraud’ has been made over the past 500 years—and which of course looms large today across all academic disciplines, now as part of an ongoing crisis in the peer review system (Fuller 2007: chap. 5). But behind this charge lurks a morally deeper one, namely, that academics are not merely protected by larger powers but collude with them so as to constitute that mutual protection racket known in political sociology as ‘interlocking elites’. The presence of scholars and clerics as royal courtiers in early modern Europe is the most obvious case in point. In this respect, Galileo remains a fascinating figure because he was always trying to turn that situation—his own—against itself (Biagioli 1993). The modern-day equivalent is, of course, researchers whose dependence on either state or private funding agencies effectively puts them ‘on retainer’, yet who are then expected to act as ‘honest brokers’ in the public interest (Pielke 2003).

Nevertheless, it took the Protestant Reformation to begin to draw a clear line under these excesses of academic rentiership. And another quarter-millennium had to pass before the point was driven home in a philosophically principled fashion. What Immanuel Kant advanced as a ‘critique of pure reason’ was in fact a systematic deconstruction of the academic arguments normally used to justify the status quo—that is, the set of interlocking elites who would have the public believe that they live in the best possible world governed by the best possible sacred and secular rulers. At most, such beliefs were ‘regulative ideals’ (aka ‘ideologies’) that served to buy time for the elites to carry on. The salience of Kant’s critique in his own day lay in the increasingly open contestation for epistemic authority in the royal court among the learned professions—the clergy, law and medicine—in a period of uneven secularization. Kant himself shone a light on this matter in his polemical 1798 essay, The Contest of the Faculties, a major inspiration for the Humboldtian University (Fuller 2013).

Humboldt’s great innovation was to harness cutting-edge research to the reproduction of the society’s authority structure so as to convert the universities from conservers of tradition—that is, guardians of rentiership—to dynamos of social change. Thus, Humboldt’s renovated academic was to present the path of inquiry not as largely charted but as fundamentally open, with the expectation that the next generation will not build upon but overturn the achievements of the past. For a long time, I have characterized this radical sensibility—whose most eloquent expression is Max Weber’s famous 1918 speech to new graduate students, ‘Science as a Vocation’ (Weber 1918)—as the ‘creative destruction of knowledge as social capital’ (e.g. Fuller 2002: chap. 1; Fuller 2016: chap. 1).

By that phrase, I mean the multiple roles that teaching plays in undermining the competitive advantage—and hence the potential for rentiership—that is inherent in cutting-edge research, which by its very nature is born ‘elite’, in the sense of being ‘ahead of the pack’. These roles include the presentation of difficult ideas in more ordinary terms, as well as streamlining the learning process so that students do not need to recapitulate the entire history of a field before being deemed capable of contributing to it. ‘Philosophy’ as the name of an academic discipline was introduced in this context as the subject whose business it is to level the epistemic playing field by forcing even the most established knowledge claims to be justified from first principles.

Reinventing the Rentiership Wheel After Humboldt: From Germany to America

Notwithstanding its ever-changing fashions and often to the exasperation of fellow academics in other disciplines, philosophy—at least in its pedagogy—has stuck doggedly to the Humboldtian mission. But while Humboldt continues to inspire the idea of an anti-rentier academy to this day, it only took about a generation for his vision to morph into a ‘re-professionalization’ of academic life, resulting in the discipline-based structure of the university that still persists, albeit under increasing pressure. This return to rentiership is normally attributed to nation-building and a broadly ‘Positivist’ ideology that regarded academic experts as the secular successors of the clergy in terms of ministering to the needs of modern society. In German academia, it also carried more metaphysical implications—namely, that each discipline is regulated by a distinct ‘value’ that it tries to pursue, which in practice served as a quality control check on scholarship (Schnädelbach 1984). This ‘Neo-Kantian’ turn was associated with an extended training period for all disciplines, culminating in the ‘doctorate’ as the ultimate mark of accreditation.

And again like past rentiers, these new-style academic professionals became increasingly entangled with the affairs of state and industry, the high-water mark of which was Germany’s establishment of the Kaiser Wilhelm Institutes shortly before the First World War. This turned out to be a fatal embrace, as virtually the entire academic establishment was behind the war effort, which resulted in a humiliating defeat. The Weimar Republic that followed has often been characterized as ‘anti-intellectual’ or, more specifically, ‘reactionary modernist’ (Herf 1984). However, it would be more correct to say that what occurred was an anti-rentier backlash that levelled the epistemic playing field between academic and non-academic forms of knowledge. Thus, folk knowledge, astrology, homoeopathy, parapsychology and psychoanalysis—all of which had been regarded as beyond the pale of academic respectability—acquired legitimacy in the 1920s and 1930s through popular acceptance. They were facilitated by the emergence of such mass media technologies as radio, film and tabloid newspapers, which provided alternative channels for the spread of information.

At the same time, academia responded by returning to ‘fundamentals’ in two rather distinct ways, which anchored the rest of twentieth-century philosophy and much of cultural life in the West and beyond. One was existential phenomenology, which like the Renaissance Humanists and the Protestant Reformers focussed on the corruption—both deception and self-deception—that is involved in expert rationalizations of our being in the world. Here, what Heidegger originally called ‘deconstruction’ played an important role in recovering the language of being, which in his hands turned out to be the aspects of Greek to which German is especially attuned. However, I will dwell on the second return to fundamentals, which was via ‘logical positivism’ (in the broad sense, so as to include Popper and his followers), as it has been the more comprehensively influential. It is worth noting that while existential phenomenology and logical positivism are normally portrayed as mortal enemies, they were united in their near contempt for the leading philosophical custodian of academic rentiership in their day, the Neo-Kantian Ernst Cassirer (Gordon 2012).

The logical positivist strategy was to strip down academic jargon—the calling card of rentiership—to data and reasoning that in principle could be inspected and evaluated by anyone, including those who are not privy to the path that the experts took to reach their agreed knowledge claims. In this context, the prospect that the future might be like the past—‘induction’—was regarded with considerable suspicion: Even if it appears that the future will resemble the past, why should it? Our earlier discussion of Nelson Goodman epitomized this attitude. Indeed, Goodman (1955) called sticking with the current hypothesis in the face of an empirically equal alternative ‘entrenchment’, a not altogether positive term that implies a ‘path-dependent’ mode of thought.

The larger point is that the act of projecting the future is a much more evaluative enterprise than, say, the phrase ‘extrapolating from the data’ might suggest (Fuller 2018: chap. 7). By the 1960s, this concern morphed into a general problem of ‘theory choice’, whereby at every step along the path of inquiry one needs to ask why one hypothesis should be advanced rather than another if both hypotheses account for the same data equally well. Thus, Bayes’ theorem rose to prominence in epistemology as a vehicle to inspire and record experiments that bore differently on otherwise comparable hypotheses. In effect, it kept the opportunity costs of carrying on with past epistemic practices under continual review. In the philosophy of science, it led to a tilt toward theories that suggested more paths for further inquiry and other such horizon-broadening, future-oriented values (e.g. Lakatos 1978).
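
A minimal sketch of that use of Bayes’ theorem, with priors and likelihoods invented purely for illustration:

```python
# Bayesian theory choice in miniature. The priors and likelihoods
# below are invented purely for illustration.

def posterior_h1(prior_h1: float, prior_h2: float,
                 likelihood_h1: float, likelihood_h2: float) -> float:
    """Posterior probability of H1 against a single rival H2,
    given the same body of evidence, via Bayes' theorem."""
    joint_h1 = prior_h1 * likelihood_h1
    joint_h2 = prior_h2 * likelihood_h2
    return joint_h1 / (joint_h1 + joint_h2)

# Two hypotheses that fit the existing data equally well leave
# the posterior at indifference ...
p = posterior_h1(0.5, 0.5, likelihood_h1=0.8, likelihood_h2=0.8)
assert abs(p - 0.5) < 1e-9

# ... which motivates designing a new experiment whose outcome the
# two hypotheses predict with different likelihoods.
p = posterior_h1(0.5, 0.5, likelihood_h1=0.8, likelihood_h2=0.2)
assert abs(p - 0.8) < 1e-9   # the experiment reweights the hypotheses
```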

Much of the long-term cultural influence of logical positivism relates to its extramural interests in overcoming the ‘burden of the past’ in multiple senses, ranging from differences in natural languages to the tendency of art and architecture to reinforce historical memory. In this respect, the movement was ruthlessly ‘modernist’ and occupied the Left of the political spectrum, at least during its European phase, when it was seen as part of ‘Red Vienna’ (Galison 1990). Moreover, logical positivism’s transatlantic passage involved a further degree of ‘complexity reduction’ that its proponents largely welcomed—namely, the need to reformulate in English everything that was originally expressed in German. This process, common to other intellectual migrants in the interwar period, was somewhat easier than supposed, as the German connotations (excess baggage) were often gladly lost in English translation. On this score, logical positivism’s much-vaunted respect for ‘clarity’ of expression can be seen as valorizing the very features of translation that Heidegger would regard as ‘forgetfulness of Being’. In any case, this was how English quickly became the undisputed language of international exchange (Gordin 2015: chaps. 5–6).

To be sure, against this general trend stood the Frankfurt School genius, Theodor Adorno, who returned to Germany as soon as he could after the Second World War to recover what Judith Butler (1999) has popularized for US academia as the 'difficulty' in writing that is necessary to pursue the deepest issues of concern to humanity. Yet, in many respects, Butler's appeal to Adorno's difficulty is no more than 'rent nostalgia', since Adorno himself was a creature of Weimar culture who easily mixed disciplines and worked in think tanks in both Germany and the US until his postwar return. Indeed, much of what Adorno published in his lifetime was music criticism, which could be understood by a broad range of readers. (And was it not Adorno who wrote a book entitled 'The Jargon of Authenticity' (1964/1973)?) Moreover, the 'difficult' postmodern French theorists on whom Butler has based most of her own work were not nearly as academically entrenched in their homeland as they came to be in the USA. The likes of Foucault, Deleuze and Derrida cycled in and out as public intellectual figures in France, each trying to grab attention from the others, which helps to explain their flashy style—but not why Americans should base entire journals and schools of thought around them (Cusset 2008). Notwithstanding her credentials as a 'continental' philosopher, Butler is more parochially American than she thinks.

The underlying inconvenient truth is that even though most of the German migrants were willing to abandon the academic rentiership of their homeland, they found themselves confronted with an American academic system that by the end of the nineteenth century had already come to emulate the doctorate-driven 'Wilhelmine' model of academic rentiership, again as part of a nation-building exercise. Most of the transatlantic migrants responded as the logical positivists did: having been outsiders in the system they left, they became doyens in the system they entered. A key moment in this American embrace of the Wilhelmine model was the 1912 presidential election of that Bismarck-admiring founder of the American political science profession, Woodrow Wilson.

Once Wilson masterminded a late and decisive entry of the USA into the First World War, the sense that Europe was transferring stewardship of a valuable piece of intellectual real estate called 'Western Civilization' to the USA became palpable. The main academic indicator was the rise of American undergraduate courses in the 'Great Books', the core of a renovated liberal arts curriculum. By the 1980s, it had become the intellectual terrain on which the 'canon wars' were fought—perhaps the clearest display of academia's rentiership tendencies at the pedagogical level (Lacy 2013: chap. 8). The question explicitly put on the table was: 'Who owns the curriculum?' In this context, 'identity politics' opened the door to retrospective intellectual entitlement claims by 'dispossessed', typically 'non-Western' cultures, the result of which, for better or worse, has been to breathe new life into academic rentiership. A sign of the times is that 'cultural appropriation' tends to be seen pejoratively as 'plagiarizing' rather than as extending the value of the appropriated culture.

Conclusion: Contemporary Prospects for Overcoming Academic Rentiership in the Digital World

The idea that the USA was the primary—if not the only reliable—standard-bearer for 'Western Civilization' reached its peak in the Cold War. Unsurprisingly, it was also the period when the 'Great Books' approach to liberal arts reached its peak, even though the launching of Sputnik in 1957 diverted much of the pedagogical focus toward the more usual research-based form of academic rentiership, in which students were encouraged to specialize in the physical sciences to help defeat the Soviets (Fuller 2000: chap. 4). However, I do not wish to dwell here on the remarkable faith in the projectability of academic expertise onto geopolitics that this diversion involved—or the role of the US Gestalt psychologist Jerome Bruner in midwifing this development, for that matter. In any case, it resulted in students flooding into physics and chemistry courses, the market for which then collapsed at the end of the Cold War. Rather, I would turn our gaze to the 'auxiliary' knowledge enterprises that capitalized on this development by presenting cheaper alternatives to what passed for 'mastery' of the Great Books. They were the harbingers of the knowledge economy we currently inhabit.

I refer here to such Cold War creations as Masterplots, Cliff's Notes and Monarch Notes. Although they presented themselves as 'study guides', they were basically in the business of supplying the sort of understanding of a 'Great Book' that was needed to pass an academic exam or sound intelligent in conversation. Implicit in this multi-million-dollar business, which took advantage of the cheap paperback market, was the assumption that the knowledge required of even the most classic of works (e.g. Shakespeare) is sufficiently patterned that it can be presented more efficiently to today's readers than in the hallowed originals. Moreover, the strategy seems to have worked, because an Internet-based version started by Harvard students—SparkNotes—has flourished since 1999, now as part of the New York-based bookselling empire, Barnes & Noble. But more importantly, the Internet has permitted a plethora of spontaneously generated summaries and commentaries, which taken together undermine the finality of academic authority; hence the ascendancy of Wikipedia (Fuller 2018: chap. 5). I will conclude with a few observations about this development, which even now tends to be regarded as unseemly by professional academics, yet may contain the most robust seeds of anti-rentiership to date.

This paper began with a reflection on the classical ideal of knowledge as 'explaining the most by the least'. Historically, this ideal has involved judgements about what is and is not necessary to include in such an explanation, that is, exercises of 'modal power'. These judgements have been influenced by the communication technology available—and, more importantly, by who has control over it. The modern threat to academic rentiership began with the introduction of a technology over which academics did not exert monopoly control, namely, the printing press. To be sure, academics generally welcomed the ensuing legislation—from papal imprimaturs to royal licences—that was designed to contain printing's threat to their rentiership. However, the advancement of Protestantism on the back of mass Bible reading campaigns seriously undercut these relatively ineffectual attempts at censorship. As a result, the great modern political and economic revolutions typically happened first in the wider society, which was already capable of making use of the new technology, and only grudgingly thereafter in academia.

The Internet promises to revisit this situation with a vengeance. A general feature of modern revolutions from Martin Luther onward is that their instigators were trained in the old regime but for whatever reason lacked sufficient personal investment to perpetuate it. We now live in a time of unprecedented levels of formal education yet widespread disaffection with the judgements made by its academic custodians. Perhaps that is how it should be. The emancipatory potential of education is not fully realized until the educators themselves are superseded. This would certainly explain the robust empirical finding that the more people have studied science, the more they take exception to expert scientific opinion—yet without ever losing faith in science itself. I have called this phenomenon 'Protscience', a contraction of 'Protestant Science' (Fuller 2010: chap. 4; Fuller 2018: chap. 5).

Like the original Protestantism, Protscience requires an alternative medium for expressing what the old regime represses. Step forward the Internet, the true successor to the printing press (pace Marshall McLuhan), especially once the World Wide Web equipped personal computer interfaces with hyperlinks and audio/video displays, which have served to fuel the now hyper-educated imagination. This enhanced Internet publicizes, amplifies and democratizes the quiet elite revolution that had begun with what the media theorist David Kirby (2011) has called 'Hollywood Knowledge', whereby academic experts freely migrated to that most non-academic of media—film—in order to explore their own and their audience's capacities to imagine alternative worlds that academia normally discourages as 'improbable' if not 'impossible'. An early case in point is the German-American rocket scientist Wernher von Braun's admission that Fritz Lang's 1929 proto-feminist Weimar film, Frau im Mond, inspired the idea for how to launch a rocket into space (Fuller 2018: 38).

If Plato were advising today's defenders of academic rentiership, he would tell them that the biggest mistake their political and economic protectors made was to regard emerging alternative communication technologies—starting with the printing press—as possibly being of mutual benefit. Thus, instead of Plato's preferred route of censorship, modern power elites have tended either to tax new media or to keep them on retainer as propaganda vehicles—and sometimes both, of course. While the power elites have thought of this as a win-win situation, placating their potential foes while presenting an outwardly 'liberal' image, historically it has only sown the seeds of dissent, which eventually led to a turn away from any entrenched form of power, including academic rentiership. Here, it is worth recalling that most revolutions have occurred in societies that had already been liberalized to some extent, though not to the level of expectation that the liberals themselves had promoted.

The general lesson here is that tolerance for the dissenting views expressed in alternative media serves to dissolve the ultimate taboo with which Plato was concerned, namely, the boundary between 'the true' and 'the false'. We all end up becoming 'post-truth'. In this context, 'fiction' is ambiguous, in that, in the first instance, it merely implicates a maker in whatever image of the world is presented. And of course, the mere fact that something is made does not necessarily mean that it is 'false'. Thus, some further judgement is required, which in turn threatens to reveal the exercise of modal power over what people are allowed to think is possible. The modern accommodation has been to regard fiction as 'literature', with its own academic space in which it can be judged on its own terms, without reference to other forms of knowledge. In this respect, it has simply followed the path of other fields of inquiry that have been domesticated by becoming an academic discipline. But of course, lawyers and scientists who adopt an 'instrumentalist' attitude to their activities routinely regard their 'fictions' as possessing efficacy in the larger world regardless of their ultimate ontological standing. To be sure, Plato found this attitude desirable when restricted to the philosopher-king but socially destabilizing when permitted in society at large, which was his main complaint against the Sophists (Fuller 2018: chap. 2).

The Internet adds an interesting dimension to a key feature of this story, one that requires some backstory. It has often been remarked that many ingredients of Europe's Scientific Revolution were first present in China, yet the comparable Chinese scholars and craftsmen never interacted in the way their counterparts started to do in twelfth-century European cities such as Oxford and Paris, which over the next five centuries paved the way for modern science and technology. The relatively free exchange between European scholars and craftsmen allowed ideas and formulas to be seen in terms of how they might be realized more concretely, as well as how machines and techniques might be extended and improved (Fuller 1997: chap. 5). This allowed for an empowering interpretation of the 'maker's knowledge' principle, famously championed by Francis Bacon, which underlies the centrality of modeling in almost all branches of science today—namely, that one can only understand what one can make. (I say 'empowering' because historically 'maker's knowledge' has been invoked to suggest both strong limits and no limits to human knowledge.) The Internet enables this largely self-organized activity to bear fruit much more quickly through explicit networking, especially 'crowdsourcing', whereby invitations are issued for those with the relevant aspirations to hook up with those with the relevant skill-sets. This now commonplace cyber-practice opens up the space of realizability by creating the conditions for the surfacing of 'undiscovered public knowledge', which allows already existing pieces to combine to solve puzzles that were previously regarded as insoluble (Fuller 2018: chap. 4). In short, what looks like a glorified matchmaking service can render the impossible possible, and thereby diffuse modal power.

As a parting shot at academic rentiership, consider the following. The only difference between an establishment scientist like Richard Dawkins, who says that the public opposes evolution because they do not know enough cutting-edge science, and an establishment humanist like Judith Butler, who says that the public opposes postmodernism because they do not know enough cutting-edge humanities, is that Dawkins is more urgent than Butler about the prospective consequences for society if the public does not actively profess the faith (in his brand of Neo-Darwinism). Whereas Butler simply wants to continue reaping the rewards that she has been allowed from a relatively insulated (aka 'autonomous') academic culture, Dawkins insists on nothing less than a conversion of the masses. In a democratic knowledge economy that treats academic rentiership with suspicion, neither should be allowed to stand, though Dawkins may be admired for his forthrightness. But make no mistake, these are the two sides of academic rentiership: the parasite (Butler) and the colonist (Dawkins) of knowledge as a public good.