Philosophers until now have only interpreted the world in various ways. The point, however, is to change it. (Karl Marx, 1846)

As technology changes, society changes, and so the way society produces knowledge and culture changes too. Yet the predominant model of knowledge production remains bound to the epistemology of last century’s industrial societies. In this book, I argued that to respond to the radical changes brought by the digital transformation of society and aggravated by the 2020 pandemic, the current model of knowledge creation must urgently be re-theorised. This means, I contended, pushing beyond mere observations of how higher education has been transitioning towards the digital and recognising that a more fundamental question needs to be asked. It is no longer sufficient, for example, to reflect on how the digital transformation has required teachers to rapidly acquire digital skills in order to adapt and rethink their teaching methods, how the digital has affected branches of knowledge (e.g., the humanities) or individual disciplines (e.g., history), how differently academics now think about sharing their research findings (e.g., with end-users), or how their research is increasingly dominated by data rather than by sources, including having to consider issues of storage, archiving, transparency, etc. A different critical awareness is now required: the shift has been in, as opposed to towards, the digital.

Claiming that the shift has been in the digital acknowledges conclusively that the digital is now integral not only to society and its functioning but, crucially, also to how society produces knowledge and culture. My argument for a new model of knowledge production therefore starts from recognising that persistent binary modulations in relation to the digital, for example between digital knowledge creation and non-digital knowledge creation, are no longer relevant: they continue to suggest artificial divisions. Such divisions, I contended, not only slow down progress and hinder the advancement of knowledge but, by fragmenting expertise, sustain a model of knowledge that does not adequately respond to a reality complexified by the digital. It has been the argument of this book that the digital transformation of society requires a more problematised understanding of the digital as an organic entity that brings multiple levels of complexity to reality, many of which have unpredictable consequences. Our traditional model of knowledge creation, based on single-discipline perspectives, hierarchical divisions and competition, is no longer suited to meet the unprecedented challenges facing societies in the digital.

In this book, I developed a new theoretical and methodological framework, the post-authentic framework, which critiques dominant positivistic and deterministic views of technology and computational methods and offers new terminologies, concepts and approaches in reference to the digital, digital objects and practices of knowledge production in the digital. The post-authentic framework breaks with dialectical principles of dualism and antagonism and with the rigid model of knowledge creation that divides knowledge into disciplines and disciplines into two areas: the sciences and the humanities. Dual notions of this kind, I argued, are complicit in an assiduously cultivated discourse that has historically exalted digital methods as exact, rigorous and neutral, and as more relevant and funding-worthy than critical approaches. This includes the cosy and reassuring myth that data is unarguable, bias-free, precise and reliable, as opposed to sources and human consciousness, which have increasingly been sidelined as carriers of bias, unreliability and inequality.

My reframing of the digital through the post-authentic framework helps us recognise that the simplified narrative around computational techniques, and the sidelining of consciousness that accompanies it, can no longer be afforded, because knowledge does not respect the limits of disciplines and the implications of being in the digital transcend such artificial boundaries. This is a reality we can no longer ignore, and one that can only be confronted through a reconfigured model of knowledge creation, one that reconceptualises knowledge creation as happening in the digital. The world has entered a new dimension in which higher education can no longer afford to opportunistically see technology and its production as instrumental and contextual to knowledge and teaching, or simply as an object of critique, admiration, fear or envy. The post-authentic framework that I proposed in this book functions as a radical critique of such outdated conceptualisations of the digital and argues that the current model of knowledge creation, with its established boundaries between disciplines and specialisations, is not suited to respond to the complex challenges of a world in the digital.

Instead, the framework advocates a notion of knowledge as fluid, in which differences are not rejected but welcomed, according to the principles of symbiosis and mutualism (cf. Sect. 2.2). Symbiosis and mutualism oppose models of reality grounded in individualism and separateness, which inevitably lead to conflict and competition; one such model of reality is the division of knowledge into monolithic disciplines. Borrowed from biology, the concept of symbiosis breaks with the current conceptualisation of knowledge as separate, linear and fragmented into multiple disciplines, and with that of the digital as a static, inconsequential entity. On the contrary, symbiosis evokes ideas of close and long-term cooperation between different organisms and the continual renegotiation of interactions; of past, present and future systems; of power relations; of infrastructures; of interventions; of curations and curators; of programmers and developers.

Mutualism opposes interspecific competition, that is, the dynamic in which organisms from different species compete for a resource with the result that only one of the actors involved benefits. I maintained that our model of knowledge creation, based on hierarchical separations between disciplines, resembles an interspecific competition dynamic, as it has forced knowledge production to operate within a space of conflict and competition. This model, I contended, is outdated and inadequate: it traps curiosity in rigid categories and is unsuited to rethinking and explaining the transformative effect the digital is having on our culture and society; to use Virginia Eubanks’ words, it contributes to automating inequality and can therefore make society worse. I therefore argued that any re-modulation still operating within the current disciplinary model of knowledge creation is no longer sufficient; to this end, I proposed the notions of symbiosis and mutualism to help us reconceptualise knowledge as fluid and inseparable. Symbiosis and mutualism shape a model in which curiosity is finally given long-overdue free rein, and in which the different areas of knowledge do not compete against each other but benefit from a mutually compensating relationship. When asking ourselves the questions ‘How do we produce knowledge today?’ and ‘How do we want our next generation of students to be trained?’, the concepts of symbiosis and mutualism may guide our answers.

Symbiosis and mutualism are also central to the development of a more problematised conceptualisation of digital objects and digital knowledge production. The post-authentic framework re-examines the digital as situated and partial: an extremely convoluted assemblage of factors and actors, themselves part of wider networks of situated components, processes and mechanisms of interaction, and of the various forms of power embedded in computational processes and beyond. As such, far from being mere immaterial copies of originals, digital objects are acknowledged as bearing consequences which transcend traditional questions of authenticity; digital objects are never finished, nor can they be finished, as countless versions can endlessly be created through processes that are shaped by past decisions and in turn shape subsequent ones. Thus, the post-authentic framework engages with both products and processes, which are understood as never neutral, as incorporating external, situated systems of interpretation and management, and therefore as bearing consequences which go beyond the object-centred culture of authenticity.

To exemplify this complexity, which conflates humans, entities and processes as well as past, present and future experiences, I used ChroniclItaly 3.0, a digital cultural heritage collection of Italian American newspapers published between 1898 and 1936. Specifically, I examined and illustrated how the application of the post-authentic framework can inform the creation, enrichment, analysis and visualisation of a digital object. By redefining our understanding of both the conceptual and concrete dimensions of digital objects, tools and techniques, the post-authentic framework provides theoretical and methodological criteria that recognise the larger cultural relevance of digital objects and of the methods used to create, analyse and visualise them; it affords an architecture for addressing issues such as transparency, replicability, Open Access, sustainability, data manipulation, accountability and visual display.

Central to the framework is the recognition that illusory, positivistic notions of the digital are ill-suited to the problems of the digital societies we live in. The post-authentic framework exposes aspects of knowledge creation in the digital that oppose both the mainstream fetishisation of big data and algorithms and an unproblematised understanding of the digital; it addresses issues such as ambiguity and uncertainty, and the subjective and interpretative dimension of collecting, selecting, categorising and aggregating, that is, the act of creating data. In pursuing my case for a novel model of knowledge creation in the digital, I presented a range of personal case studies and examined how the application of the framework in my own work helped me address aspects of knowledge creation in the digital such as transparency, documentation and reproducibility; questions of reliability, authenticity and bias; and engaging with sources through technology. Using ChroniclItaly 3.0 as a digital object, I applied the post-authentic framework to a variety of applied contexts, such as digital heritage practices, digital linguistic injustice, critical digital literacy and critical digital visualisation, and I devoted specific attention to four key aspects of knowledge creation in the digital: the creation of a digital object in Chap. 2, its enrichment in Chap. 3, its analysis in Chap. 4 and its visualisation in Chap. 5. This auto-ethnographic and self-reflexive approach allowed me to show that a re-examination of digital knowledge creation can no longer be achieved from a distance, but only from the inside. Ultimately, the book demonstrated that it is only through conscious awareness of the delusional belief in the neutrality of data, tools, methods, algorithms, infrastructures and processes that the biases embedded in these systems, and amplified by their ubiquitous use, can in fact be identified and addressed.

In Chap. 3, for example, I showed how, from pre-processing to data augmentation, the application of the post-authentic framework to the task of enriching digital material can guide each action of an enrichment workflow. Using the case examples of DeXTER and ChroniclItaly 3.0 (Viola and Fiscarelli 2021a) and informed by symbiosis and mutualism, Chap. 3 illustrated how the post-authentic framework can guide the interaction with the digital, not as a strategic (grant-oriented) or instrumental (task-oriented) collaboration but as a mutual cognitive contribution. In particular, I unpacked the ambiguities and uncertainties of methods such as optical character recognition (OCR), named entity recognition (NER), geolocation and sentiment analysis (SA) and showed how the post-authentic framework can help address these challenges, for instance through a thorough understanding of the assumptions behind these techniques, constant updating and critical supervision. The framework recognises curatorial practices as manipulative interventions which, especially in the case of cultural heritage material, bear the consequence of becoming a source of knowledge for current and future generations.
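
To make this concrete, consider NER, one of the enrichment steps discussed in Chap. 3. The minimal Python sketch below, using spaCy, is an illustrative example rather than the DeXTER pipeline itself; the sample sentence and the choice of model are my own assumptions. It shows how even a single enrichment step silently embeds curatorial decisions: the choice of model, of language resources and of what counts as an entity.

```python
# A minimal, hypothetical NER sketch with spaCy; not the DeXTER pipeline.
# Assumes the small Italian model has been installed with:
#   python -m spacy download it_core_news_sm
import spacy

nlp = spacy.load("it_core_news_sm")

def extract_entities(text: str) -> list[tuple[str, str]]:
    """Return (surface form, label) pairs; each pair is a model's claim, not a fact."""
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]

# Hypothetical sentence in the style of the ChroniclItaly 3.0 material.
sample = "Il console italiano arrivò a New York nel 1905."
print(extract_entities(sample))
# OCR noise in digitised newspapers can silently distort such spans,
# which is why the framework insists on constant critical supervision.
```

Every parameter here, down to the model version, is one of the ‘manipulative interventions’ the framework asks us to document rather than hide.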

This book was also a reflection on the implications of the digital transformation for our perception of the world. Drawing on the mathematical concepts of discrete vs continuous modelling of information (cf. Chap. 4), I discussed some of the repercussions of transforming continuous material into discrete form, that is, into binary sequences of 0s and 1s; this discretisation of society is especially consequential for the notions of causality and correlation in relation to knowledge creation. In discrete systems, causality is hidden because information is discretised into exact and separate points, which must be categorised and made explicit. As a result, we are given a digitally mediated image of the world, meaning that the relational causality of continuous information is replaced by predictions of correlations. Thus, societies in the digital in which the ‘big data philosophy’ reigns, I argued, are offered countless patterns but no explanations for them. We, the digital citizens, are left to deal with a patterned, yet a-causal, way of making sense of reality.
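
A toy numerical example, my own illustration rather than one from Chap. 4, may help fix the idea: when a continuous signal is sampled and quantised into bits, the discrete record still exhibits patterns and correlations, but the generating law, the ‘cause’, is no longer part of the record.

```python
# Toy illustration of discretisation: a continuous signal reduced to 0s and 1s.
import numpy as np

t = np.linspace(0.0, 1.0, 16)          # 16 discrete sampling points
signal = np.sin(2 * np.pi * 3 * t)     # the continuous 'cause': a 3 Hz sine wave
bits = (signal > 0).astype(int)        # 1-bit quantisation: only 0s and 1s survive

print(bits)
# The bit pattern is regular, so correlations can still be detected,
# but nothing in the sequence itself states that a sine wave produced it.
```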

Closely related to this point is the use of metaphorical language to name computational techniques such as topic modelling, sentiment analysis and machine learning (ML); this phenomenon can be seen as a way to make sense of an a-causal reality. Indeed, conflating specific mathematical concepts such as discrete vs continuous modelling of information with such familiar notions has created the reassuring expectation that machines can learn to understand language and somehow provide neutral, precise and understandable accounts of large quantities of textual material. In the case of SA, this altered image suggests that the subjectivity of human emotions can be reduced to two or three categories and quantified according to probabilistic calculations; in the case of ML, the unique, holistic human process of experiential learning and of connecting logic with contextual factors is discretised into probability scores over huge, yet partial, quantities of discrete data; in the case of topic modelling, the text itself disappears and so does its continuous structure, that is, the wider context that produced it. The computational disassembling of the causal structure by the dualistic system of 0s and 1s hides the original continuous nature to which the data refers. The use of metaphorical language such as ‘sentiment’, ‘learning’ and ‘topic’, I argued, has therefore certainly contributed to making these methods extremely popular, especially outside their fields of origin, but at the same time, by obfuscating the precise mathematical laws upon which these techniques are based, it has created unrealistic beliefs.
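
What a tool operationally means by ‘sentiment’ is easy to demonstrate. The sketch below is a hedged example using NLTK’s English-language VADER analyser, which is not the SA method applied to ChroniclItaly 3.0, and the input sentence is invented; it shows that the output is simply a handful of probabilistic scores.

```python
# What 'sentiment' means to a lexicon-based SA tool: a few numeric scores.
# Requires the VADER lexicon, downloaded once via nltk.download().
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

scores = sia.polarity_scores("The harvest was poor, but the family endured.")
print(scores)  # e.g. {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
# The lived ambivalence of the sentence is collapsed into four numbers;
# nothing in the output records what was lost in the reduction.
```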

The post-authentic framework can be a useful tool to guide the unpacking of the properties and assumptions of computational techniques used to analyse a digital object. Using topic modelling as an example, in Chap. 4, I showed how the framework can be applied to engage critically with software. At the core of the framework is the importance of maintaining a close connection with the digital object; in the chapter, for example, I stressed how aspects typically regarded as unproblematic, such as pre-processing, corpus preparation and choosing the number of topics, are in fact fundamental moments within a topic modelling workflow in which the analyst is required to make countless choices. The example of topic modelling demonstrates how the post-authentic framework can guide the exploration, questioning and challenging of the interpretative potential of computation.
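
The sketch below, in Python with gensim, makes those decision points visible; the miniature corpus and all parameter values are invented for illustration and do not reproduce the Chap. 4 workflow. Every line encodes a choice by the analyst, from the vocabulary cut-off to the number of topics itself.

```python
# A hedged sketch of the decision points in a topic modelling workflow.
from gensim import corpora
from gensim.models import LdaModel

# Pre-processing has already intervened here: tokenisation, stop-word
# removal and lemmatisation are all choices made before modelling starts.
docs = [["immigration", "harbor", "ship"],
        ["festival", "church", "community"],
        ["harbor", "work", "wages"]]

dictionary = corpora.Dictionary(docs)
dictionary.filter_extremes(no_below=1, no_above=0.9)  # a curatorial cut-off
corpus = [dictionary.doc2bow(d) for d in docs]

num_topics = 2  # 'choosing k' is an interpretative act, not a given
lda = LdaModel(corpus, num_topics=num_topics, id2word=dictionary,
               random_state=42, passes=10)  # fixed seed aids reproducibility

for topic_id, words in lda.print_topics():
    print(topic_id, words)
```

Changing any of these values, the frequency thresholds, the seed or the number of passes, yields different topics from the same texts, which is precisely why the framework treats them as interpretative moments to be documented.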

Operating within the post-authentic framework crucially means acknowledging digital objects as living entities that have far-reaching, unpredictable consequences; the continually changing complexity of the networks of processes and actors involved must therefore always be critically supervised. The visualisation of a digital object is one such process. The post-authentic framework opposes an uncritical adoption of digital methods and points to the intrinsically dynamic, situated, interpreted and partial nature of the digital. Despite often being employed as exact ways of presenting reality, visualisations are extremely ambiguous techniques which embed numerous human decisions and judgement calls. In Chap. 5, I illustrated how the post-authentic framework can be applied to visualisation by discussing two examples: efforts towards the development of a user interface (UI) for topic modelling, and the design choices made in developing the app DeXTER, the interactive visualisation interface for exploring ChroniclItaly 3.0. I specifically centred my discussion on how the ambiguities and uncertainties of topic modelling, network analysis (NA) and SA can be encoded visually. A key notion of the post-authentic framework is the acknowledgement of curatorial practices as manipulative interventions, and of how it is in fact through exposing ambiguities and uncertainties that knowledge creation in the digital can be kept honest and accountable for current and future generations.
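
One simple way to encode uncertainty visually, offered here as my own minimal sketch with invented data rather than as the DeXTER design, is to plot aggregated sentiment scores together with their spread, so that the reader sees an interval rather than a false point of certainty.

```python
# A minimal sketch: showing uncertainty instead of smoothing it away.
import matplotlib.pyplot as plt

years = [1900, 1910, 1920, 1930]             # hypothetical publication periods
mean_sentiment = [0.12, -0.05, 0.30, 0.18]   # hypothetical aggregated SA scores
spread = [0.20, 0.25, 0.15, 0.22]            # hypothetical uncertainty per period

plt.errorbar(years, mean_sentiment, yerr=spread, fmt="o-", capsize=4)
plt.axhline(0, linestyle="--", linewidth=0.8)  # neutral-sentiment baseline
plt.xlabel("Year of publication")
plt.ylabel("Mean sentiment score (± spread)")
plt.title("Encoding ambiguity visually rather than hiding it")
plt.show()
```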

Through the application of the post-authentic framework to these four case examples, the book aimed to show how an uncritical and naïve approach to the use of computational methods is bound to reproduce the very opaque processes that the publicised algorithmic discourse claims to break with and, more worryingly, contributes to making society worse. The book was therefore also a contribution towards systemic change in knowledge creation practices and, by extension, in society at large; it provided a new set of notions and methods that can be implemented when collecting, assessing, reviewing, enriching, analysing and visualising digital material. It is this more problematised notion of the digital, conceptualised in the framework, that highlights how its transcending nature makes old dichotomies between digital and non-digital knowledge creation not only irrelevant but, in fact, harmful.

The digitisation of society, already well under way before the COVID-19 pandemic but certainly brought to an irreversible turning point by the 2020 health crisis, has brought into sharper focus how the digital exacerbates existing fractures and disparities in society. Unable to deal adequately with the complexity of society and social change, the current model of knowledge creation urgently requires re-theorisation. This book is therefore a wake-up call for understanding the digital as no longer contextual to knowledge creation and for recognising that a model of disciplinary compartmentalisation sustains an anachronistic and ill-equipped way to encapsulate and explain society. All information is now digital, and algorithms are ever more central nodes of knowledge and culture production, with an increased capacity to shape society at large. As digital vs non-digital positions have entirely lost relevance, it has become increasingly futile to create ultra-specialised disciplines from other disciplines’ overlapping spaces, or indeed to invest energy in trying to define them, as in the case of DH; the digital transformation has magnified the inadequacy of a mono-perspective approach, the legacy of a model of knowledge that compartmentalises competing disciplines. Scholars, researchers, universities and institutions must acknowledge the central role they have to play in assessing how knowledge is created, not just today but also for future generations.

The new theoretical and methodological framework that I proposed in this book moves beyond the current static conceptualisation of knowledge production, which praises interdisciplinarity but forces knowledge into rigid categories. On the contrary, the framework offers novel concepts and terminologies that break with dialectical principles of dualism and antagonism, including the dichotomous notions of digital vs non-digital, the sciences vs the humanities, authentic vs non-authentic and computational/neutral vs non-computational/biased. The re-devised notions, practices and values that I offered help refigure the way in which society conceptualises data, technology, digital objects and the process of knowledge creation in the digital.

My re-examination of the current model of knowledge includes not just scholarship but pedagogy too. And whilst pedagogy is not the main focus of this book, the arguments I put forward here for scholarship apply equally to it. In order to achieve systemic change, academic programmes must be updated to include opportunities for critical reflection on the pressing issues stemming from the ubiquitous presence of AI in our societies. Through real use cases, similar to those illustrated throughout the chapters of this book, students would learn about the deep implications of digital technologies for contemporary culture and society. In the words of Timnit Gebru, the research scientist who was recently fired by Google after exposing how strongly biased Google’s AI systems are (Bender et al. 2021), ‘The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many’. Indeed, as technology is a central locus of knowledge and culture production, and as AI technology in particular is built by an almost entirely male, predominantly white workforce, the culture that is produced replicates the biases of those who are building it.

Although there may be no initial intention of using biased models, tech companies become immediately accountable, at least from an ethical perspective if not yet a legal one, as soon as they refuse to acknowledge and correct such biases even when these are clearly exposed. If it is true that governments are spectacularly behind in creating rules for the ethical use of this technology, it is equally true that big tech companies should not wait for laws to be passed. Because of the serious social repercussions of the technology they create, they have a responsibility to bring this issue to the centre of their organisations. Meanwhile, universities also have a responsibility to train the next generation of thinkers, scholars and academics, as well as digital citizens at large, in ethical digital management. Equally, research funding agencies must specifically require that the issue of digital ethics be explicitly addressed by researchers in their projects, for instance by demanding a critical digital component in their proposals. As users and co-producers of technology, our responsibility is to counterbalance the mainstream AI discourse with new, more honest narratives, to reflect critically on how we are producing knowledge today and for tomorrow, and on how we educate the next generation of students and digital citizens. The post-authentic framework of knowledge creation in the digital provides the means to communicate and incorporate values of honesty, accountability, transparency and sustainability into knowledge. It reminds us that a racist, sexist, homophobic digital society is not so much a reflection of human subjectivity in data and algorithms as proof of its pretended absence.