Generative Artificial Intelligence, Authorship, and Ethics in/of the Hypercommons

Generative Artificial Intelligence (henceforth GenAI) products have loomed large over academic life since OpenAI launched ChatGPT in 2022. GenAI products work by mining available data and packaging it in response to individual requests in surprisingly lucid, if somewhat uncanny, prose. Across diverse texts, from academic papers, reviews and editorial letters, to emails, student assignments and memos, a suspicious question now arises: who, or what, wrote this?

A subsequent proliferation of scholarship has attempted to evaluate the future of academic knowledge production in the context of GenAI (see Gatrell et al., 2024; Grimes et al., 2023; Kulkarni et al., 2023). What is less examined in these debates is the social nature of the data commons feeding such programs. Moreover, as we argue, this data commons is related to moral agency through the notion of ‘the author’, a sacrosanct yet metaphysically fraught concept at the center of academic institutions. Both the concept of the commons, with its connotations of sharing and public good (but also enclosure and expropriation), and the concept of the author, with its connotations of individual rights, property and liberty (but also individualism and egocentrism), raise core themes for business ethics. At the Journal of Business Ethics, we have a responsibility to debate the potentials and dangers of GenAI for our field, and in this essay we do so in an exploratory fashion by interrogating its relation to authorship and what we call the “hypercommons” of GenAI.

Hypercommons and the Ethics of Authorship

By “hypercommons”, we gesture to a tradition of thinking of digital technologies as constituted by, and drawing from, distributed inputs and activities from social communications mediated by digital platforms. In this tradition, everyday activities of internet usage and communications, from social media to searches to online blogs and publications, form a “digital commons” that early digital theorists saw as a source for open, democratic action and exchange (Benkler & Nissenbaum, 2006; Lessig et al., 1999). To be sure, the internet was never organized or managed as a commons, in the sense of a participatively controlled, bottom-up endeavour; quite the contrary (Dean, 2010). Nevertheless, the distribution of content production and the promise of new forms of expression derived from a collectively sourced activity, constituting at least the potential for a digital commons. Yet, even from the early days of digital theorising, such hopes were haunted by worries of the “enclosure” of this commons and its exploitation for private interests, worries that have reached a crescendo in subsequent thinking (e.g., Dean, 2010; Zuboff, 2019). As GenAI emerged out of the mass usage of this common data pool as a “training” base for automated programs, the prospect appeared that digital applications could become substitutes for, rather than expressions of, social life, displacing the commons in a distorted mirror image of itself. We refer to this updated sense of the commons feeding into GenAI as the hypercommons, to signal this massively distributed and socially anchored distortion of the social. The virtue of the term is to recognize the social nature of GenAI, while retaining a critical eye on its social effects.

In the context of academic communities, the digitalization of the academic commons may affect one of the core institutions of scholarly production in particular – the concept of the author. Perhaps paradoxically, scholarly communities are constituted in part as webs of individual authorship and co-authorship. These webs form communities through systems of individual recognition and attribution, through valuing novel contributions and recognizing authors for their novel concepts and discoveries. In this sense, the author function produces a locus of moral agency through which scholars and their work can be evaluated. The individualism of academic authorship as part of the “game” of academic careers has been widely critiqued (e.g., Butler & Spoelstra, 2023), and deepening this critique is not the purpose of this essay. However, we can broadly recognize academia as involving an ongoing tension between establishing and institutionalizing accumulated insights and challenging these institutions with novelty, curiosity and critique. The role of the author as moral agent is thus both individual and deeply social, allowing individuals to claim benefit from, and take responsibility for, their scholarship even as they integrate that scholarship into the common fund of ideas. The norms of academia thus attempt to balance the open and common production of knowledge with individual insight, accountability and recognition.

Well before GenAI, authorship had already been called into question at the conceptual level. Foucault (1984) argued that the notion of the individual author is specific to a small window in history and to certain discourses, and as a category was “relatively weak”. Arguing that the fiction of the unified individual author obscures the multiplicity of positions taken by writers and the collective nature of textual production, Foucault (1984, p. 119) attributes authorship to a specific historical epoch that is receding in the present. He notes: “I think that, as our society changes, at the very moment when it is in the process of changing, the author function will disappear, and in such a manner that fiction and its polysemous texts will once again function according to another mode, but still with a system of constraint – one which will no longer be the author, but which will have to be determined or, perhaps, experienced”. Once the author function – that of regulating the flow of discourse according to certain constraints – has passed, what will be the system of norms regulating the production, legitimation and diffusion of texts and the ideas they contain?

Our ethical concern is situated at the convergence of GenAI as a hypercommons and the concern with the regulative function of the author as an institution of scholarship. The promise of an academic commons holds out a positive ethical potential for less individualistic, less profit-oriented forms of academia in which individual authorial credit is less important than collective knowledge production. The flip side, however, is that the erosion of authorial individuality can also reduce authorial accountability and responsibility, undermining the checks and balances of rigorous scholarship and unleashing a flood of pseudoscientific texts into circulation. One could argue that, outside of a relatively small circle of peer-reviewed journals, such an inundation of questionable discourses increasingly populates the literature in the form of white papers, academic platforms and the like. We fear, however, that this danger is potentiated and accelerated in a digital hypercommons built around and managed for private profit yet fed by collective efforts (Lindebaum et al., 2023).

Specifically, GenAI facilitates the automatization of a range of authorial activities, from operational issues such as spelling and language checks to more sophisticated editing and text production tasks. While the quality and accuracy of GenAI texts may be questionable when compared with expert-produced content, texts may be plausible enough to pass for academic production with slight editing or adjustment. Moreover, we do not know the extent to which GenAI can be used to supplement or even fabricate data production and analysis, and the ethics of using GenAI as a data-analytical tool is in need of urgent discussion. While GenAI results require author prompts, and thus are arguably under authors’ control in some manner, it is clear that authorial agency is deeply impacted by these functionalities. In their more intensive usages, it is plausible to say that such programs may supplant or substitute the author function as such, and that the boundary between the author and the program is dissolved. Yet what makes it possible for GenAI to perform these functions is its reliance on large data sets of pre-existing human (and possibly machine) authored data, which is fed into systems as “training” data, and upon which it extrapolates likely responses.

In that sense, GenAI is not generating anything radically new, but is using its enormous existing collective trove of information to iteratively predict and mimic expected responses to given prompts. Considered in this way, GenAI responses draw from the fund of the hypercommons to produce a semblance of authored texts, which are then attributed the names of the scholarly “authors”, who subsequently become the locus of responsibilities and rewards attributed to those texts. What are the consequences of this use of the hypercommons for the notions of authorship and its related aspects of moral agency?

The Ethics of Authorship and GenAI at Journal of Business Ethics

Journal of Business Ethics is typical of journals in the humanities and social sciences that strongly rely on norms of authorship, both in terms of publishing, but also in the identifiability of expert reviewers and editors. Norms on authorship and identifiability are institutionalized in scholarly publishing, linked to individual property rights, DOI designation, Vancouver ethical protocols and other forms of individual attribution. Such conventions have the advantage of traceability and a pretension to fair compensation for creators. They also establish moral agency at the level of individual scholars, in line with a rights and responsibilities regime aligned with a liberal social economic order founded on property ownership, individual merit and the sanctity of contracts. However, these private arrangements are put in check by the suppression of individual inputs through the mass extraction of data without attribution. Moreover, when these texts are fed into a common stock of texts, they can be leveraged by large data companies for their own gain. Indeed, OpenAI has recently been sued by the New York Times and numerous individual authors, having admitted that its product could not exist without copyrighted material (Milmo, 2024).

In what ways is the moral agency characteristic of this regime undermined by GenAI? We can point to at least three related processes. First, in terms of external accountability, when authored texts are supplemented or supplanted by GenAI, it becomes difficult to attribute individual moral agency; indeed, it becomes more likely that agency is distributed technologically in ways that are difficult if not impossible to trace. Second, in terms of external motivation, authors’ choices are deeply dependent on those adopted by their colleagues and wider institutional conventions. If GenAI becomes normalized in academic production, it might be difficult or impossible to operate professionally without it. The gravity of such norms may simply be too great to allow individual ethical autonomy. Third, at the level of the internal subjectivity of authors, it may become impossible for authors to understand their own authorial agency. Ethical agency assumes that authors are able, to some extent, to identify themselves and their own values and desires as distinct even if not separable from the technologies that constitute their work environments (Greenwood & Wolfram-Cox, 2023). These values and desires may themselves become formed coextensively with GenAI tools, making it difficult for authors to know what is “theirs” or blurring the distinction between internal and external in the first place. For all of these reasons, it may be difficult to imagine ethical consideration outside of the entanglement of actors within their technological surroundings.

Why is this a problem? Given that moral agency always exists against a social background, are not all individual ethical decisions somehow related to a background commons? It may be so, and thus GenAI merely highlights the illusion of individual moral agency that is at the core of liberal conceptions of ethics. In a society based on the commons, without an overwhelming focus on property and attribution, perhaps moral agency would not have to be identifiable in this way at all. However, given the status quo scholarly economy, we remain tied to some extent to individually-based moral agency. To the extent that individual agents need to be demarcated, identified and compensated, this should be done in a fair way. Our current models are not based on common ownership, and we do not have in place the necessary safeguards to protect the commons from exploitation by unscrupulous freeloaders, whether individual authors using GenAI to produce texts or large companies profiting from extracting textual data. This digital hypercommons is not a democratic or socialized commons, but one subordinated to the extraction of collective inputs for private benefit. In its current form, the hypercommons requires the distributed input of many individuals while eroding the foundations of specific individual input, a form of contradiction marking digital capitalism (Fuchs, 2019).

Authors using generative AI might do well to ask themselves what they aim for in their writing and what the ethical stakes are in integrating GenAI tools. Such self-questioning presumes an ethical stance that recognizes the possibilities of GenAI but also their ethical dangers, including the macro effects of eroding individual moral agency and its concomitant institutional regime. Currently, it is left to each author to justify their own position and ethical standpoint, in the context of a collective shift in moral agency and action. While we acknowledge there are ethical choices involved for authors, we also caution against assuming a moral actor making autonomous ethical choices at the individual level, given the scope of the transformation under way.

Authorship at an Ethical Crossroads? Possible Responses

From the above observations, we can envisage two possible responses to the hypercommons at the base of digital technologies such as GenAI. These responses exist in a certain tension with each other but may allow combinations that involve more or less reconsideration of academic production more generally. The first would be to reinforce mechanisms of individual responsibility and reward that come from the idea of authorship. This would involve strengthening individual authorial responsibility by identifying and attributing authorial contributions with greater granularity, a trend derived from biomedical journals already adopted by many publishers (see https://www.elsevier.com/researcher/author/policies-and-guidelines/credit-author-statement). Some journals, asking authors to identify uses of GenAI or even banning it outright, attempt to pin down which parts of research derive from authorial creativity and which draw upon the hypercommons. Such demands, although difficult to police, attempt to re-establish author identification and preserve the integrity of individual merit and responsibility. They have the advantage of leveraging existing academic norms and reward structures, but require increasingly implausible attempts to separate individual and collective sources of academic production. Similarly, attempts to use GenAI “creatively” imagine the possibility of authorship in synthesizing and configuring GenAI outputs so as to demonstrate the ingenuity and intellectual contributions of authors. Such responses depend on the continuity of current institutional norms around authorship but search for ways to make these norms workable in the light of disruptive technologies.

A second set of responses would accept the erosion of individual authorship, and rethink what the social organization of scholarship would look like in its absence. Given the critique of individualism in current academic merit structures, it is tempting to welcome the end of authorship as a new form of collectivity in scholarly production. GenAI would force a form of collectivity that could ultimately enact a deeper rethinking of research. Nevertheless, several thorny problems result. For instance, given current conventions of naming and rewarding academic output, the erosion of authorship would require new ways of securing academic careers and motivating research without the individualized status it currently confers. This could require, for example, decoupling rewards from research outputs or finding ways to reward participation, service, and forms of collective contribution that are not “named” or easily quantified. Such a model would also need to find alternative ways to address responsibility and accountability for what is published and how it is published. Furthermore, it would also involve finding ways to use technologies – or develop new technologies – in which the hypercommons can be collectively organized and managed rather than using off-the-shelf solutions constructed by for-profit companies. Taking control of digital design and governance would be essential to ensuring that the hypercommons does not only draw on collective production, but reflects collective choices. Inspired by discussions of “platform cooperativism” that propose democratic responses to platform capitalism (Scholz, 2016), a democratic hypercommons would involve technologies in which the creation, training, and use of GenAI technologies, as well as the benefits of their operation, would be distributed in more transparent and fair ways.

Business ethics scholarship has yet to mount an adequate response on either of these two fronts; early in the discussion around GenAI, scholars are largely on the back foot, devising post-hoc responses to a new status quo of GenAI. The risk of this situation is to reproduce the worst of both worlds – a collectively produced data hypercommons and technologies devised in opacity by large corporations and trained in exploitative and non-transparent conditions (Lindebaum et al., 2023). In this scenario, collective inputs into the hypercommons are leveraged for individual gain by scholars attaching their names to outputs, with the distribution of gains from such platforms flowing freely to unaccountable for-profit companies. Continuing in this direction would retain a semblance of authorship as an illusion that masks the de facto exploitation of distributed actors. Both individual responsibilization and collective recognition are compromised in this scenario. In response, new ways of thinking about the scholarly collective, and the individual scholar within it, are increasingly necessary. Faced with a problem beyond the scope of a single essay, we can only go so far as to indicate the ethical imperative to rethink moral agency in such a scenario. Some version of relocating moral agency, between researchers and the communities within which they operate, is the direction that this ethical discussion should take. We expect that the Journal of Business Ethics will be one of the forums in which this discussion will develop and thus have provided some preliminary thoughts on the direction it may take.