Copies and facsimiles

The concepts of original and copy, of source and facsimile, always convey particular understandings of the process of reproducing documents. This essay is an analysis of these concepts, in particular copies and facsimiles, framed within the context of digital reproduction. The activities and cases discussed are picked from two areas: digital scholarly editing and cultural heritage digitization performed by research libraries. The conceptual analysis draws on three fields of scholarly inquiry: scholarly editing, library and information science, and philosophical aesthetics.

When we talk about digital scholarly editing and library digitization, we frequently use words such as originals, sources, copies, and facsimiles. Each of these words, however, points to varying understandings of documents, texts, media, and art. What is an original, what is a facsimile, and in what way is a digital facsimile a reproduction, if it is indeed? This article takes a look at some of the connotations of these labels. It starts with a discussion of the generic concepts, gradually moves into the particular realms of scholarly editing and library digitization, and ends by framing the discussion of copies and identity within philosophical aesthetics.

Media and copies

To begin, let us recognize that the terms original and copy are mutually dependent. The everyday idea of an original supposes the existence of copies (or forgeries) of that original, and there would be no copies if there were no such original. Between original and copy, then, there is always a supposed relationship, or perhaps movement. Inasmuch as this relationship refers to a process of transmission, for instance when a copy of a printed book is chosen as a source document, digitized, and made available on the web, it seems to imply the kind of simple linearity which, as citizens of the print regime, we have come to take for granted. Digital culture dissolves this linearity in more ways than one, and suggests spiral, recursive processes in its place. When the perceived contents of printed books have been re-represented as ones and zeroes and shipped into the plural streams of the web, the questions of which texts, documents, and displays constitute originals and which constitute copies largely depend on which streams one is looking at and on where in the recursive processes one starts looking.

A prominent characteristic of documents is that they are unique yet at the same time indefinitely repeatable. One example of this is the signature, which is highly personal but which derives its function from being repeated in various public settings (Ferraris 2013, 298ff.). An even more striking example is the DNA signature or code, which is unique to each individual yet repeated identically in every one of that individual's billions of cells.

To most of us, notions of copies and originals are rooted in the mechanics of print and analogue media. As we all know, digital media challenge such notions. I will not go into lengthy discussions about how the mechanics of print and digital text cultures fundamentally differ, as there is an abundance of scholarly literature on this topic already. Let us just remind ourselves that although digital documents are sometimes characterized as immaterial, this characterization is misleading at best. Many will recognize the writings by Matt Kirschenbaum (e.g. 2008), who refers to the hard drive as perhaps the stickiest form of physical media ever invented.

Furthermore, digital media depend on a distinction between storage media and presentation media. This separation is not new, nor is it specific to digital media, and it does not mean that these media are immaterial. On the contrary, for both storage and presentation we make use of material, tangible media. This is a feature shared by many families of media, including several analogue ones, e.g. the gramophone, where vinyl records are the storage medium for the notation and loudspeakers are among the devices used to perform the music. We would hardly refer to the gramophone as an immaterial medium.

Finally, an important distinction should be made between monoform and polyform artworks. Monoform artworks consist of a unique and single item, such as da Vinci’s Mona Lisa. Polyform works consist of a set of items that claim to be identical, such as the many copies of Graham Greene’s Our Man in Havana. A common enough version of the polyform mode is the distinction between type and token, where a type can be e.g. the blueprint for a car model, whereas the many cars manufactured according to the model make up the tokens of this type.

Issues such as media materiality, storage versus presentation modes, and monoform versus polyform modes affect the relationship between original and copy for the particular documents of these art forms, and they even affect the question of whether or not such a thing as a copy can be said to exist.

Let us further recognize that copy has many connotations, as does copia, its Latin original (pun intended). In everyday discourse, copy refers either to imitation or to instance (i.e. a copy within an edition of a printed book). We will return to this double connotation later.

In a much quoted essay, Walter Benjamin discussed how the force of mechanical reproduction steadily grows within media and art (Benjamin 1936). To Benjamin, this was not necessarily a blessing, given what he saw as the corresponding loss of the aura of the artwork. In other words, the mechanical increase in copies could basically be detrimental to the original. Bruno Latour and Adam Lowe, on the other hand, have an interesting turn on this:

The intensity of the search for the original, it would seem, depends on the amount of passion triggered by its copies. No copies, no original. To stamp a piece with the mark of originality requires the huge pressure that only a great number of reproductions can provide. (Latour and Lowe 2011, 278).

So rather than draining a work of art of its aura or originality, perhaps the very abundance, or copia, of copies is what consolidates the aura of the work and in the end assures its survival.

Digital reproductions

Latour’s and Lowe’s turn on the issue of aura is something to bear in mind when thinking about digital reproductions of documents. Here, the relationship between original and copy, between a reproduction and the thing that the reproduction aims to reproduce, is put to the test. And as with other forms of culturally sanctioned reproduction, digitization adds a status to the document being digitized. The document is granted an entourage of digital copies.

Specifically, I am thinking of scholarly text editing, where there is not only an awareness that reproducing means changing, but also an age-old toolbox of measures and principles with which to come to terms with such change. In scholarly editing, transcription editions reproduce documents as strings of text, whereas facsimile editions reproduce them as graphs. As for facsimile editions, a distinction was proposed quite early by W. W. Greg (in Pollard et al. 1926) between typographic and photographic reproduction. To Greg, photographic reproduction would ensure a higher degree of authenticity vis-à-vis an original, whereas typographic reproduction would better support such things as collation or calculation. At that time there was no technology to support both, but digital scholarly editions can help dissolve the distinction between the authentic and the executable. Indeed, the presence of photographic reproductions, or digital facsimiles, in scholarly editions has increased tremendously over the last few decades. Digital image facsimiles are becoming a standard feature in digital scholarly editions, something users have come to expect. This is logical. An objective for scholarly text editors has always been to bring the reader as close as possible to a set of source documents by providing a thorough and reliable representation of those sources. Digital facsimiles enhance that purpose.

Another field engaged in the systematic and methodical reproduction of culturally significant sets of documents is the digitizing of library collections. The process forces the library to specify the nature and borders of the source document, the original, if you will, and to prioritize some features and aspects of the original at the expense of others. Curiously enough, these kinds of digital reproduction seem to be regarded both as means for intellectual analysis performed by a critical subject and as mechanical, non-critical, objective processes.

This is certainly the case for reproduction work in libraries, which repeatedly seems to suggest the idea of the perfect copy or the clone, i.e. that it is possible to capture everything in a source document and then transfer all of this information onto another document, the copy. On this view, the copy is equivalent to the source document, which can then basically be discarded (something which also regularly takes place in libraries; see the numerous examples in Baker 2001). The idea again comes to the fore in so-called mass digitization, and it rests firmly on a view of media as channels or conduits through which information is transported without being affected. But to some extent, this idea is also embraced by scholarly editors.

On the one hand, working with facsimiles in scholarly editions has traditionally been regarded by textual critics as a non-critical activity, where the editor recedes into the background and the user is brought closer to the source documents by being given direct access, as it were, to the originals. On the other hand, perhaps more than any other editing phase, digitization and the subsequent editing of images has the potential to make editors recognize that virtually all parameters in the process (image size, colour, granularity, bleed-through, contrast, layers, resolution etc.) require critical intellectual choices, interpretation, and manipulation. These are interpretative work processes which are not that different from the steps and interpretations of the critical editing of texts.

And if one looks closely at high-quality digital imaging projects in libraries, it is clear that teams of conservators, technicians, and photographic experts constantly make series of decisions informed by critical and bibliographical analysis and by highly specialized knowledge of the graphical, historical, and other research aspects of the object they are digitizing. This critical work, elsewhere referred to as critical digitization (Dahlström 2015), is far from always recognized by, or even known to, scholarly editors. It comprises activities such as critical discrimination or collation between varying source documents; image editing and emendation; critically matching the reproduction to the source with respect to exhaustiveness and faithfulness; and producing large amounts of metadata, descriptive encoding, and bibliographical information. In this way, scholarly work is embedded in the objects. Visual fragments from different sources can even be critically selected to form an eclectic virtual facsimile, somewhat akin to how classical textual criticism establishes a text through an amalgam of readings from several different versions (an early printed example would be Inger Bom’s 1974 edition of a sixteenth-century Hortulus synonymorum; see Kondrup 2011, 68). And finally, the photographic teams regularly produce many versions of the facsimiles (varying in colour, light, resolution, size, and format) to serve different aims, both internally and externally.

Be this as it may, it is still a plain fact that any method for transmitting perceived content from one representation to another, be it ever so thorough and critical, necessarily means that some information and aspects are prioritized at the expense of others. There will always be losses, additions, and changes. This is why it is worth trying to counter such changes and losses by providing more than one mode of representation in the reproduction. We are all aware of the value of a digital image of a manuscript as a representation which enriches and is enriched by a searchable, encoded transcription of its text. This is increasingly the case in digital scholarly editions and digitized library collections, where users are presented not just with a transcription or a digital facsimile, but with synoptic access to both modes, or perhaps with a representation enhanced with even further views, such as editorial comments or the TEI markup layer, all integrated within the same technical platform. Offering these various views of and entrances to the edited work supports editorial transparency. But the various view modes also support one another. The transcription can be a key to or a map of the facsimile and thus can shed light on it, and vice versa. Using Elena Pierazzo’s (2014, 4) phrase, the transcription as interpretation enters into ‘competition’ with the facsimile.

Several recent editing projects even go to considerable lengths to accommodate the need for and interest in graphical information about the source documents, and they display the entire source document, as it were, i.e. not just the sections of the document bearing text, but also covers, margins, blank pages, etc. In fact, this is an area in which we are only beginning to take the first steps beyond the textual transcription and the flat 2D graphical reproduction in representing the source document and providing a large array of accesses and views: 3D simulations of the material object, or minute photographs down to a microscopic, molecular level to serve analyses of cellulose, skin nerves, and fibers (Björk 2015, 197). In the other direction, vast amounts of abstracted information can be provided in the form of linked data to serve various kinds of work at the macro level.

Critical facsimiles

I mentioned that digital facsimiles are regularly edited and manipulated. For instance, colour is adjusted, images which have been warped or distorted in the capture phase are adjusted, and the background is often manipulated digitally in the post-processing phase. High-quality digital imaging in library digitization and in digital scholarly editions should really provide the user with links to the uncompressed raw files as they were prior to being manipulated and edited. In addition, a transparent account of the production history, versionality, technical parameters, and editing history of the image files is needed. Although these concerns may seem peripheral or may rarely come up, they constitute important issues, particularly when we are addressing the relationship between originals and copies. This kind of transparency contributes to the authenticity of the reproduction. In archival science, this is sometimes referred to as digital validity (e.g. Duranti 1995).

A digital document not only carries an implicit and interpretable history of production in the form of its graphical and textual display (as printed objects do), but also an explicit documentation of its production, usage, and version history, embedded in its technical layers and paratexts. And this does not just apply to textual documents, of course. During image capture and processing, the image can be edited at bit level without a human eye being able to discern that a change has been made from one instance to the next. Our concept of authenticity is different for digital photographs than for analogue ones. Or perhaps digitization has meant that our entire ‘truth contract’ with images has been renegotiated. This is why photographers are beginning to embed ‘family trees’ into digital images; in other words, they include metadata about the history, versions, and updates of the object in order to provide transparency and strengthen authenticity. The user will thus be better equipped to discern the steps in the production process and the degree to which the image has been edited. In digitization projects, related tools would be the calibration stick, the ruler, and the colour chart, all of which enable the user to check the reproduction. Again, these tools are only rarely provided to the user in digital scholarly editions.

But digital facsimiles can not only be retouched and tidied up in order to come closer to the source document and in that sense be said to be more virtual. They can also be used, quite literally, to shed new and different light on the source documents, a much cited example being the digital edition of the fragmented manuscript Codex Sinaiticus, which allows us to dig deeply into the layers of the object. This is even further accentuated when the source documents have a more complex material spatiality and texture, e.g. cuneiform tablets or ancient seals.

Facsimiles can of course also be used to uncover details contained in the document but invisible to the naked eye, often with the aid of infrared light, X-ray, or multispectral imaging, as in the David Livingstone Spectral Imaging Project and the Archimedes Palimpsest Project. Similar techniques are being used to uncover historical layers in documents damaged by fire or other accidents, or even by earlier well-intentioned efforts to recover damaged contents, e.g. the British Library’s work to restore one of the four extant Magna Carta manuscripts (Duffy 2015).

The concept of facsimile

So digital scholarly editing and critical digitization have increasingly, and sometimes quite literally, opened up the graphical and material source documents, with the potential to enable new scholarship and research and to let us see familiar objects through new lenses. Be that as it may, a facsimile is of course never a perfect copy. It is a kind of simulacrum. And both terms, fac-simile and simul-acrum, share a common stem, meaning ‘like’ or ‘as if’.

The Latin word simulacrum has interesting meanings: semblance, imagination, phantasm, shadow, ghost. A related word is ‘simulamen’, meaning imitation or even subterfuge (deception, hiding something). Another is ‘simulator’, which can mean magician. These references to ghosts and spirits living in the shadows, hidden behind something else, connect to the way many of us think and talk about the digital object as a genie released from its bottle, a spirit in the material world.

In his dialogue The Sophist, Plato points to a distinction between the making of likenesses (‘eikon’) and the making of semblances (‘phantasma’). The making of likenesses involves creating a copy that conforms to the proportions of the original, whereas sculptors in Plato’s time who made colossal works, for example, often altered the proportions to accommodate the perspective of the viewer, making sure that the upper parts did not look too small and the lower parts too large, somewhat akin to trompe-l’œil paintings. Plato also questioned the status of a painting of an object as an original; he referred to such paintings as simulacra which strove for an effect of authenticity. Aristotle, in contrast, saw a representation as a creative act, not as an inferior recreation of a model. One could take this further by claiming that a representation demonstrates a value of knowledge of its own, knowledge that exceeds what one knew prior to the existence of the representation. Again, this train of thought can certainly be brought to bear on acts of digital reproduction.

What Plato’s argument basically boils down to is two forms of image making, or simulacra:

  1. the exact, proportionate reproduction of an original;

  2. a reproduction which has been intentionally manipulated and skewed to produce the perception of a ‘correct’ copy or instance.

Although many of us probably think about digital facsimiles in Plato’s first sense, the second approach is certainly not unfamiliar to the world of digitization and scholarly editing. Rather than providing an exact copy, the facsimile offers just a sample of characteristics of the source document. The selection may be the result of careful and deliberate consideration, as I suggested earlier, but it might also be due to random or incidental factors, factors over which the editor or digitizer does not have full control or even understanding.

How can we define the concept of facsimile? If we strictly follow the encyclopaedic definition, any reprint of e.g. a book could in principle constitute a facsimile. But this is not what we mean by a facsimile, in everyday discourse or in scholarly work. In fact, it is quite difficult to come up with a definition that covers both our everyday understanding of the term and the scholarly concept. As book historian Kristina Lundblad explains (Lundblad 2015), not only can the term facsimile in scholarly editions designate a whole range of graphical phenomena and levels, but a graphical rendering of a source document can even be labelled differently within one and the same edition, either as ‘facsimile’ or as ‘illustration’, or simply as ‘image’. Usually, there is a qualitative difference between these labels. Unlike ‘illustration’ or ‘image’, the term ‘facsimile’ in a scholarly edition seems to require that historical and perhaps even text-critical work has been done to produce it. But this text-critical level is rarely if ever reached in the case of photographic reproduction in editions. Perhaps this is because we have not yet witnessed the emergence of an image criticism on a par with centuries-old textual criticism.

A facsimile is usually a 2D photographic reproduction of the graphical appearance of the source, but it can also be a 3D reproduction of the source as a material document, in the form of book facsimiles or of scans and prints made with 3D printers, where each page is carefully scanned and the printed pages are then physically pieced together. But no matter how faithfully this is performed, the resulting facsimile, as book historians remind us (Ridderstad 2003), will never be a facsimile in the strictest sense: only very rarely is a 3D reproduction manufactured with a combination of types, tools, colours, materials, and paper quality identical to that of the original document. Were a facsimile to aspire to this level of identity and authenticity, it would basically be a forgery.

A representation, whether a text transcription or a facsimile image, is something other than the original it represents. It works on the principle of a map: a map is a representation of a landscape, not the landscape itself. Even Borges’ both humorous and nightmarish notion of a world map at the scale of 1:1 concerns a representation which approaches the exactness of a copy, but which is still not the landscape itself. At the same time, we must look at this pragmatically. We cannot, of course, refrain from representing and reproducing simply because the reproduction cannot do total justice to the original. A map will always be a schematic reduction, but that is also precisely what makes it useful. The reduction may be strategic, highlighting specific details or features of the original objects at the expense of others. The facsimile functions as a comment on the original, or even as an exhibition. Kathryn Sutherland (2017) has even referred to it as a curatorial installation.

The facsimile aims to invoke the virtual presence of the source, so the bond between reproduction and source is not only graphical and material but is also defined by a retrospective relationship between two points in history, the then and the now. In doing this, however, the facsimile simultaneously makes use of subtle effects to highlight this historical relationship, to mark a difference from the source. Lundblad (2015, 100) refers to this as an accent, distortion, or warpedness in the facsimile in relation to the source. Again, we are reminded of Plato’s idea of a simulacrum as something deliberately skewed.

That the representation is something different from the source is of course obvious to us, but it is still something we tend to forget when things turn industrial in scale, which is often the case with library digitization and digital scholarly editing. I mentioned earlier the trope of the clone, induced by mass digitization. But high-resolution digital facsimiles, in library digitization activities as well as in digital scholarly editing, also tempt us to espouse this fallacy. The fallacy runs as follows: if we are careful to create digital representations of sufficiently high quality and then provide them with quantitatively and qualitatively adequate meta-information, we will avert any future need for new digital renderings, because every relevant aspect of the original is already represented in the digital collection we created. We would, in other words, have achieved a kind of ‘definitive reproduction’. Even more to the point, state-of-the-art technologies for image capture and screen rendering, especially in high-quality digitization projects, tend to be so dazzling that one might think of the quality of digital facsimiles as unsurpassable. History, however, informs us that this is never the case. Given a decade or two, the facsimiles once thought to be definitive will call for replacement by better ones through redigitization.

We know, of course, that pictures can lie and that with modern digital photography the truth contract between photographer and viewer has been challenged. Retouching photographs is an easy task for anyone with a smartphone, and our relationship with digitally born or edited images and their authenticity is changing fundamentally. But even the high-quality digital facsimile can be treacherous in its claim to authenticity: the claim that what we receive in our hands, or at least on screen, is the very original object. Something illusory enters the scene. The way in which digital facsimiles simulate an original as a spatial object is increasingly seductive. For several years there have been plenty of digital applications with which we can flip through a digitized book using mouse and pointer or by touching the screen with our fingers and pulling at the edges and margins. It is becoming easier to twist, turn, rotate, reduce, enlarge, and move the object around on the screen. Audio additions, such as the sound of pages turning, are common in smartphone or tablet applications to enhance the simulation, and haptic interfaces can further strengthen the sense or experience of authenticity. Even lo-fi digital facsimiles of manuscripts expose us to gutter shadow images, responsive animation, and other tools with which they attempt to prompt us to buy the simulation, like the magician in a circus (cf. the meaning of simulator as magician). These techniques aim to convey a sense of authenticity, or rather, they violate one kind of authenticity, that of being faithful to the original appearance of the page, in order to uphold another kind, that of simulating the spatial object. The experience presupposes or generates a kind of suspension of disbelief. And here we touch upon Plato’s second meaning of a simulacrum: a reproduction deliberately distorted so as to achieve the effect of an authentic copy.

Facsimiles and scholarly editing

Digital scholarly editing can retouch images or strengthen their contrast to convey the appearance of an original in better shape and of greater readability than it actually has. Although these practices could be thought of as innovations of the digital age, they were in fact used by scholarly editors as early as the 1920s (e.g. the 1927 facsimile edition of the Codex Argenteus, Andersson et al. 1927). Facsimiles thus have a long history in scholarly editions, stretching back more than a century. What roles do they play, and are they changing scholarly editing in any fundamental way? Until recently, facsimiles largely played the subordinate role of illustration to the transcribed text, an add-on. Usually, only a few sections of the source were photographically reproduced. Now, however, almost all digital editing involves image capture, even when the editors aim for a text transcription edition. Not only can OCR turn the images into machine-readable and encodable text, but the edition can also display the images in full alongside the edited transcriptions. The facsimiles are then no longer just tools for internal work, but a publication mode in their own right. This has proven to go hand in hand with particular types of scholarly editing, such as genetic criticism, material philology, versioning, documentary editing, and text sociology, and with editing where transcribing the text is particularly difficult or where the graphical aspects of the source have significant relevance to the interpretation of the text. Some editing projects are beginning to focus not only on the edited work and its variant texts, but also on the context of the work. They strive to include large amounts of ancillary materials, such as photographs, contracts, letters, ads, paintings, music recordings, and newspaper articles and clippings, either by digitizing these materials themselves or by depending on libraries and archives to digitize such material and make it available.

Documentary editing, the editing of primary source documents, is much in line with this development. It is a field of practice that seems to sit between critical editing and advanced library reproduction work. But documentary editions differ from digitized collections in some respects, for instance the kind of reunification of physically dispersed fragments of a document that I mentioned before. Digital documentary editions also link the reproductions with external resources to form a virtual network in a way most collections digitized by libraries do not. Furthermore, digital documentary or genetic editions can establish links between or even within documents, using facsimiles in a way that was previously impossible with printed editions (Pierazzo 2014).

One of the reasons some people are optimistic about documentary editions is that digital facsimiles of source documents might be republished and reused to a higher degree than other segments in an edition. But this is far from certain and will depend on what approach the digitizing agent (such as a library) will take with respect to putting e.g. Creative Commons Public Domain Mark (PDM) seals on the digital files. It will also depend on the development of orphan works legislation (Martinez and Terras 2019).

Again, a heightened awareness of this, on the basis of a dedicated image criticism, would serve as an incentive for scholarly editors to increase the transparency of the production history of such images and to subject their degree of authenticity and (un)certainty to better scrutiny. Digital facsimiles are still treated with less rigour and interest than one might expect in the field of scholarly text editing. The paradigmatic mode of this field is still the text transcription, and the role of the picture as a subordinate illustration has probably been transferred from the printed to the digital realm.

In order for this to change, editors and users need to treat the images with the same demands for authenticity, detail, and transparency that we apply to text transcriptions. Exhaustive metadata for the images, for instance, will be of paramount importance, providing information about states, production history, and digital provenance. What is missing for many current digital facsimiles in editions and digitized collections is the historical-bibliographical link between, on the one hand, what we see on the screen and, on the other, a particular identified artefact in a physical collection. In other words, which document was actually used when producing a given digital facsimile?

Autography, allography and reproductions

In this concluding section, let us return to some of the fundamental concepts put forward in the beginning. It seems obvious that issues of identity, imitation, and authenticity surface within digital scholarly editing and library digitization. Difficult problems occur when we are faced with questions such as how we can distinguish an original (in two senses of the word) from a “mere” copy, how similar the two are, and how we can measure the degree of authenticity. One way of pondering such problems is to turn to the realm of philosophical aesthetics. Let us briefly look at a couple of discussions concerning them.

A famous question raised by Frederick Bateson (referred to in McLaverty 1984, 82) was ‘If the Mona Lisa is in the Louvre, where is Hamlet?’ Or put more bluntly, if I break into the Louvre, get my hands on the Mona Lisa, and burn it to ashes, the work Mona Lisa would be forever gone. If, however, I pick up my own pocket book copy of Hamlet (or even the original manuscript, had it survived) and burn it to ashes, no one would claim that the work Hamlet was forever gone. This is because the two works belong to different regimes of art, tangible and intangible, and hence relate differently to the media categories mentioned earlier, such as repeatability, monoform/polyform, materiality, and storage/representation. The identity of a tangible work of art (e.g. painting or sculpture) is usually closely tied to a particular physical artefact, but this does not apply in an equal sense to intangible regimes of art, such as literature or music. The identity of tangible artworks is in this way linked to a specific production history, while the identity of intangible artworks is linked to an ahistorical code (or notation, see below), whose relationship to any represented empirical object can be more or less arbitrary. Another characteristic is that the tangible work of art usually consists of unique items which identify the work, while the intangible artworks consist of multiple manifestations (editions, performances, etc.), each of which can be said to represent the underlying work, what we previously referred to as monoformity versus polyformity. In the case of tangible arts, thus, copies and works generally coincide. Intangible works of art are also extended in time. When preserved in a sequential code, we have, in a sense, a text.

Nelson Goodman (1976) suggested the conceptual pair of autography and allography to address this phenomenon. Autographic works of art are e.g. paintings and sculptures, whereas allographic works of art are literary works, films, music, or architecture. Allographic works are identified as being represented by a series of instructions, a representation Goodman calls notation: ‘an art seems to be allographic just insofar as it is amenable to notation’ (Goodman 1976, 121). According to this view, notation is separate from the production conditions and history of the work of art, enabling the work to be recreated at another time and place. Notation thus signifies allographic art forms and is not something you would find in arts such as painting or sculpture. Every replay, recitation, and performance of a correct notation with exact relative positioning of notation characters generates a new instance of the work, which identifies and defines the work. Each new correctly performed instance reproduces the work as much as any other correct instance. In this way, a text is not a written copy of a work. It is (along with other text instances) the work itself. In such a particular context, the distinction between original and copy is meaningless.

What is left out of this equation is obviously the problem of variants and versions. Goodman only talks about exact notations and correct instances, but we all know that there can be minor or major textual differences between two texts which do not in any way prevent us from identifying them as instances of the same work of art. The whole discipline of textual criticism and scholarly editing, of course, is largely devoted to this problem and operates at a higher level of complexity than this.

Still, this Goodmanian view of works of art and copies has some peculiar implications. Let us consider two.

One is, according to Gérard Genette, that it would be impossible to imitate a text directly; it can be imitated only indirectly, by practicing its style in another text. This is typical of literature and music, Genette says. The visual art forms, however, allow for direct imitation:

copies are routinely done ... To imitate directly – i.e., to copy – a painting or a piece of sculpture means an attempt to reproduce it as faithfully as possible by one’s own means, and the difficulty and technical value of the exercise are obvious. To imitate directly – i.e., to copy – a poem or a piece of music is a purely mechanical task, at the disposal of anyone who knows how to write or to place notes on the staff, and without any literary or musical significance (Genette 1997, 84).

So the reproduction of text and music requires neither subject knowledge nor advanced artisanal skill. Imitation, on the other hand, ‘supposes a more complex operation, the completion of which raises imitation above mere reproduction: it becomes a new production’ (ibid.).

Let me make one observation here. Using this kind of vocabulary, we would have to say that a digital facsimile of a printed textual document is not a reproduction but an imitation. A facsimile, after all, is a visual art form. Whole new sets of (binary) notation systems have been put to use to accomplish the digital facsimile, which is a new document that interprets and imitates the visual appearance of the analogue source, and the new notation set is far from a mere correct replay of the notation set in the source document.

The other implication is Goodman’s claim that only autographic works can be faked:

a work of art [i]s autographic if and only if the distinction between original and forgery of it is significant; or better, if and only if even the most exact duplication of it does not thereby count as genuine (Goodman 1976, 113).

If the notation of an allographic work is rendered exactly, for instance at a performance of Beethoven’s fifth symphony or a recital of Cervantes’ Don Quixote, and the performer claims that this is indeed the fifth symphony by Beethoven or Don Quixote by Cervantes, the performer has not accomplished a fake of the work but a new instance of it. But an exact rendering of all the characteristics of van Gogh’s Wheatfield with Crows, if such a rendering were possible, presented with the active claim that it is indeed van Gogh’s work, would be considered not the work proper but a fake. The attentive reader realizes that with this line of reasoning, we might be forced to say that a high-quality digitization of Wheatfield with Crows is a fake. Or are we then rather facing a case where an autographic object has been ‘allographed’?

Goodman suggests that there were originally only autographic arts, but that a need eventually arose to materialize intangible artworks; notation serves that purpose. With notation, a work of art is being allographed, and vice versa: an art form is allographic if its works can be allographed through notation. The notation is an attempt to represent the qualities and characteristics that can be considered significant for a work. By transmitting the notation, we hope that we have also transmitted the work. Although an autographic work cannot be copied without loss, any allographic work can in principle be reproduced endlessly without loss, as long as the notation is preserved and presented correctly.

To me, one of the most challenging questions arising out of Goodman’s claim would be whether the process of digitizing an object necessarily means turning it into an allographic object. As I mentioned, Goodman assumed a development towards increased allography, and, arguably, the binary notation of ones and zeros making up any digital file, regardless of what art form or expression is represented, could be conceived of as a notation in Goodman’s sense and a prolongation of the development. If this is the case, then it would follow that as an allographic entity, a digital file cannot be subject to counterfeit. If its notation is rendered exactly, we do not have an imitation of it, we have a new instance.
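Goodman’s identity criterion for allographic works, exact sameness of notation, has a direct digital analogue: two files whose byte sequences are identical are, on this view, equally valid instances of the same digital object, with no way to single one out as ‘the original’. A minimal sketch in Python may illustrate the point (the byte string and variable names are of course hypothetical, chosen only for illustration):

```python
import hashlib

def notation_fingerprint(data: bytes) -> str:
    """Digest of the byte-level 'notation' of a digital file."""
    return hashlib.sha256(data).hexdigest()

# Two byte-identical copies of a (hypothetical) digitized document.
source_copy = b"To be, or not to be, that is the question"
derived_copy = bytes(source_copy)  # an exact duplication of the notation

# Their notations coincide, so on a Goodmanian view each is an
# equally genuine instance; neither can be marked as 'the original'.
assert notation_fingerprint(source_copy) == notation_fingerprint(derived_copy)
```

Note, however, that such a digest compares the storage notation only; the presentation form on a screen depends on further technical factors that the comparison does not capture.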

A counter argument could, however, point to the need to separate storage form from presentation form, a defining property of digital media. Very few of us ever encounter the binary storage notation directly; instead we use presentation devices, and the storage notation is rarely of primary interest to us. Several technical factors come into play to produce what we perceive as the document on a screen. Equating the binary code with Goodman’s work-identifying notation, at the expense of these other factors, pulls our attention away from the displayed document’s digital textuality at the perceptual level. This example shows that even conceptual models aspiring to be generic are always framed by particular historical media settings. Goodman’s distinction refers to particular kinds of arts and media (primarily print culture), and it does not prove equally applicable in the case of digital media.

Furthermore, the specific appearance form of a digital document can be generated by many different technical storage architectures, one of which is binary notation, or rather several possible sets of binary notation. Every time we choose to display the textual work in a software environment, there is an adaptation of the work to this particular software and interface setting, and a virtual and temporary set of ones and zeros is generated which thus also partly reflects the tool which one is using. It is, I think, unsatisfactory to claim that the binary notation is what identifies the work. In the case of a digital image, the binary notation is also a storage form. It could be argued that it also provides an appearance form and can be “read” as notation per se, but that is, I think we would agree, an odd way of accessing the work. Furthermore, in this case, we would be hard put to talk about the work as an image at all. The binary notation occurs in sequential allographic form, but the form of presentation is another thing. To the user, the presentation form is as stationary and momentary as any analogue picture.

There certainly seems to be much to say about Goodman’s conceptual model and the premises on which it is based, and much in his model to which one might object. However, autography and allography are, I would argue, most fruitfully thought of not as empirical classes but as perspectives on works of art and media. For instance, Goodman’s approach suggests that there cannot be any notational system for painting. But does not the phenomenon of ‘painting by numbers’ at least suggest the possibility of allographic painting? Furthermore, let us return to what we discussed at the outset of this article: a copy of a printed book. Is it autographic or allographic? It is both. It is autographic as a tangible, graphical artefact, but its text is allographic, serving as an identity test for the work it represents. The distinction between autography and allography can never be sharp, even less so in the digital realm.

As is often the case, theories and conceptual models tend to lose some of their generic aura once they are applied to families of media beyond the historical time frame of their origin. But this should not stop us from making critical use of them in our analytical inquiries. They help us ask better questions.




  1. Andersson, H., Brolén, C. A., Grape, A., & von Friesen, O. (1927). Codex argenteus upsaliensis jussu Senatus Universitatis phototypice editus. Uppsala: Almqvist & Wiksell.

  2. Baker, N. (2001). Double Fold: Libraries and the assault on paper. New York: Random House.

  3. Benjamin, W. (1936). Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit. Zeitschrift für Sozialforschung, Paris. [Engl. translation at:]. Accessed 5 May 2019.

  4. Björk, L. (2015). How reproductive is a reproduction? Digital transmission of text-based documents (Doctoral dissertation, The Swedish School of Library and Information Science, University of Borås, Borås). Retrieved from Accessed 5 May 2019.

  5. Dahlström, M. (2015). Critical transmission. In P. Svensson & D. T. Goldberg (Eds.), Between humanities and the digital (pp. 467–481). Cambridge, Mass: MIT Press.

  6. Duffy, C. (2015). Revealing the secrets of the burnt Magna Carta. The British Library. Retrieved from Accessed 5 May 2019.

  7. Duranti, L. (1995). Reliability and authenticity: The concepts and their implications. Archivaria, 39, 5–10.

  8. Ferraris, M. (2013). Documentality: Why it is necessary to leave traces. New York: Fordham University Press.

  9. Genette, G. (1997). Palimpsests: Literature in the Second Degree. Lincoln: University of Nebraska Press.

  10. Goodman, N. (1976). Languages of art: An approach to a theory of symbols. Indianapolis: Hackett.

  11. Kirschenbaum, M. (2008). Mechanisms: New media and the forensic imagination. Cambridge, Mass: MIT Press.

  12. Kondrup, J. (2011). Editionsfilologi. Copenhagen: Museum Tusculanum.

  13. Latour, B., & Lowe, A. (2011). The migration of the Aura, or how to explore the original through its facsimiles. In T. Bartscherer & R. Coover (Eds.), Switching codes: Thinking through digital Technology in the Humanities and the arts (pp. 275–297). Chicago: University of Chicago Press.

  14. Lundblad, K. (2015). Återge eller återskapa? Faksimilen som verktyg och konstverk. In E. Nilsson Nylander (Ed.), Mellan evighet och vardag. Lunds domkyrkas martyrologium Liber daticus vetustior (den äldre gåvoboken). Studier och faksimilutgåva. Skrifter utgivna av Universitetsbiblioteket i Lund. Ny följd. 10. (pp. 79–102). Lund: Lund University Library.

  15. Martinez, M. & Terras, M. (2019). 'Not Adopted’: The UK Orphan Works Licensing Scheme and How the Crisis of Copyright in the Cultural Heritage Sector Restricts Access to Digital Content. Open Library of Humanities, 5 (1). Retrieved from: Accessed 17 May 2019.

  16. McLaverty, J. (1984). The concept of authorial intention in textual criticism. The Library, 6 (6th ser.), 121–138.

  17. Pierazzo, E. (2014). Digital documentary editions and the others. Scholarly Editing, 35. Retrieved from: Accessed 5 May 2019.

  18. Pollard, A. W., et al. (1926). ‘Facsimile’ reprints of old books. The Library, 6(4th ser), 305–328.

  19. Ridderstad, P. (2003). Hur dokumenteras ett dokument? Om kravspecifikationer för materiell bibliografi och immateriell textkritik. In P. Forssell & R. Knapas (Eds.), Varianter och bibliografisk beskrivning (pp. 113–130). Helsingfors: Svenska litteratursällskapet i Finland. (Nordiskt Nätverk för Editionsfilologer, Skrifter 5).

  20. Sutherland, K. (2017). Making Copies. In P. Boot, A. Cappellotto, W. Dillen, F. Fischer, A. Kelly, A. Mertgens, A.-M. Sichani, E. Spadini, & D. van Hulle (Eds.), Advances in digital scholarly editing: Papers presented at the Dixit conferences in the Hague, Cologne, and Antwerp (pp. 213–217). Leiden: Sidestone Press.


Author information

Correspondence to Mats Dahlström.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Dahlström, M. Copies and facsimiles. Int J Digit Humanities 1, 195–208 (2019) doi:10.1007/s42803-019-00017-5



Keywords

  • Digitization
  • Digital facsimiles
  • Scholarly editing
  • Library and information science
  • Philosophical aesthetics