1 Summary

In “The modalities of media II: An expanded model for understanding intermedial relations”, I proposed an extended and developed model—or, rather, a conglomerate of models—designed for systematic investigation and analysis of the basic features of all forms of media and their interrelations (Elleström 2020a). Instead of beginning with a limited set of established media types and their traits and interrelations, the model is founded on a delineation of the concept of media product and an explanation of media modalities—types of basic media traits—that are shared by all media products and therefore also media types. This allows the model to account for the crucial fact that media products and media types are both similar and different.

I also emphasised that the point of models is to put aside specific details to enable a view that is more generally valid. I want to offer a broadly applicable, well-developed and distinct yet flexible theoretical framework. Although the model encourages methodical analysis of media and their interrelations, it does not offer a proper methodology. It suggests ways to tackle things but does not imply any fixed methods of investigation. Nevertheless, it is hopefully helpful for developing such methods and methodologies, which must ultimately be formed on the basis of one’s specific aims and goals and in the service of various research problems and questions attaching to mediality at large and, more specifically, to media interrelations.

This was demonstrated in how the model was applied in the contributions to the two volumes of Beyond Media Borders: Intermedial Relations among Multimodal Media. The chapters, written by scholars from various disciplines, tackled a broad range of basic and more or less established qualified media types, such as theatre, architecture, film (with or without embedded sound), live and online performances, audiobooks, written literature, novels, book illustrations, posters, music, opera, comics, graphic novels, journalism, biographies, scientific articles and maps. The chapters also dealt with various sorts of moving pictures, a variety of radio and television programmes, and different forms of oral and written verbal language.

Furthermore, the model was combined with a variety of complementary theories and methods selected for dealing with the more specific issues of the chapters. Likewise, the chapter authors used it in the service of a range of different research questions, aims and goals. Mark Crossley (2020) investigated the challenging relationships between theatre and other qualified media types. Andy Lavender (2020) scrutinised how the work of actors and performers can be comprehended in terms of multimodality. Andrea Virginás (2020) analysed the representation of electronic screens in feature film, which has recently become common in film culture. Chiao-I Tseng (2020) looked into the issue of story truthfulness and affective intensity in feature films in the light of how they use historical archive footage, found footage and various film techniques. From a context and user perspective, Iben Have and Birgitte Stougaard Pedersen (2020) researched how audiobooks are experienced and what it means to ‘read’ an audiobook. Focussing on mobile language learning, Heather Lotherington (2020) explored multimodal encoding of contemporary digital mobile communication. Mary Simonson (2020) investigated historical and contemporary media products that communicate less straightforwardly, partly because of their intermedial configurations. Kate Newell (2020) examined different forms of movement of pictures across media platforms and contexts, and the mechanisms that facilitate such movement. Based on thorough considerations of the modality modes of architecture, Miriam Vieira (2020) analysed the presence of architecture in literature. Ana Munari Domingos and José Rodrigues Cardoso (2020) scrutinised the construction of ‘reality clues’ in journalism comics and biography comics. The aim of Jørgen Bruhn (2020) was to combine the ideas of ecocriticism with the analytical strength of intermedial concepts.
Liviu Lutas (2020) scrutinised the broad transmediality of metalepsis, the paradoxical transgression of boundaries between logically distinct represented worlds. Finally, Øyvind Eide and Zoe Schubert (2020) explained the various mechanisms that connect different media types representing landscapes.

Whereas the authors used different parts and aspects of my model in their chapters, they also employed some central concepts that, in different ways, attach to the model without being properly developed in “The modalities of media II” (Elleström 2020a). This is simply because doing so would have led too far and disrupted the structure of the already voluminous chapter; there are several other issues that would also deserve more extensive treatment. To compensate for this slightly, and hence further clarify the bonds between my theoretical framework, other broadly used concepts and the many chapters in Beyond Media Borders, I will elaborate here on three of the concepts that were neglected in the opening chapter, although I have integrated them in the framework of my model in other publications. The first concept is adaptation, which several of my colleagues have discussed in their chapters. The second and third concepts are narration and language, which were present in most of the contributions to Beyond Media Borders in one way or another. Because of this omnipresence, I will concentrate on those two concepts. What follows here, therefore, is not at all a summary of the contributions to Beyond Media Borders; it is rather an enlargement of some of its conceptual nodes.

2 Adaptation

In the context of humanities and mediality scholarship, adaptation is generally understood as a phenomenon involving different media types (typically theatre or written literature and film). Therefore, I have argued that adaptation should be seen as a form of transmediation (Elleström 2013, 2017). I also briefly explored transmediation in “The modalities of media II” (Elleström 2020a: section on “Media transformation”). As stated there and elsewhere, I use the term ‘mediate’ to describe the process whereby a technical medium of display realises presemiotic (potentially meaningful) sensory configurations. For instance, a piece of paper is able to mediate visual sensory configurations that are (once perceived and rudimentarily brought into semiosis) taken to be a food recipe, a bar chart, a scientific article or a musical score. If equivalent sensory configurations (sensory configurations that have the capacity to trigger corresponding representations) are mediated for a second (or third or fourth) time by another type of technical medium of display, they are transmediated. In our minds, some of the perceived media characteristics of the target medium are, in important ways, the same as those of the source medium; they form a recognisable virtual sphere. This enables us to think that the musical score that is seen on the paper is later heard when it is transmediated by the sounds of instruments. In other words, the score’s vital characteristics are represented again by a new type of sensory configuration (not visual but auditory signs) mediated by another type of technical medium (not a piece of paper emitting photons but sound waves generated by musical instruments) (Elleström 2014: 20–27).

Just as there are many different media types and communicative situations, there is a plenitude of transmediation varieties. Although this has the potential to make the study of transmediations an exceedingly broad field, research has paid full attention, and given names, to only a few forms of transmediation. In the study of art forms, the general term for the transmediation of media products into other media products is ‘adaptation’. However, for good reason, researchers such as Kamilla Elliott have argued that, in effect, adaptation studies must consider not only the transfer of represented media characteristics between single media products, but also among media types in general (Elliott 2003: 113–132). This is because one cannot normally reduce vital media interrelations to the connection between only one source media product and one target media product.

Nevertheless, one does not even tend to call all types of transmediation of specific media products ‘adaptations’. For instance, one seldom refers to transmediations from written, visual and symbolic (verbal) media products to oral, auditory and symbolic (verbal) media products—that is to say, the reading aloud of texts, or the other way around—as adaptation (however, see Groensteen 1989). The same goes for transmediations from non-temporal to temporal visual and iconic media products—from still to moving images (as discussed by Dalle Vacche 1996)—and for transmediations from written, visual and symbolic (verbal) media products to oral, auditory, iconic and symbolic media products; that is, the ‘setting’ of text to music (Lin and Chiang 2016). We also generally exclude transmediations of media types such as libretti, scores and scripts from the domain of adaptation.

Instead, the archetypical adaptation is a novel-to-film transmediation. However, few scholars have delimited the concept of adaptation exclusively for this specific type of transfer. Adaptation studies today frequently work with not only theatre, literature and film but also qualified media types such as computer games, opera, comics and graphic novels. This is the case in, for instance, Linda Hutcheon’s A Theory of Adaptation (2006). The authors in Beyond Media Borders who write about adaptation attach to the general idea of transmediation and focus on adaptation from novel to feature film (Lutas 2020), from verbal biography to comics (Domingos and Cardoso 2020), from opera to film (Simonson 2020) and from novel to illustrated and graphic novels, television and radio programmes, ballet, opera, theatre and film (Newell 2020). However, Hutcheon and the authors in this volume only address media with developed narration, which is a common way of implicitly delimiting the notion of adaptation.

3 Narration

As stated (Elleström 2020a: section on “Heteromediality and transmediality”), transmediality means that media products and media types may (or actually do) communicate equivalent things; they can, to some extent, represent the same or similar objects (in Peirce’s sense of the notion). This implies that there may be transfers in time among media. Because narration is transmedial in the strong intermedial sense that it transgresses basic as well as qualified media borders, many narratives can be transmediated: their characteristics may be represented again by different media types and, despite the transfer, be perceived to be virtually the same. For many adaptation scholars, the transmediation of narratives is indeed the central concern. However, one does not have to refer to one’s work as adaptation studies in order to examine transmediation of narratives. Transmediation of narratives is exceedingly common, not only in everyday communication but also in more complex and official systems of communication such as education, research and legal processes. It also flourishes in religion, art and entertainment.

I have developed my concept of transmedial narration (Elleström 2019) from Marie-Laure Ryan’s work on transmedial narratology (Ryan 2005), but framed it within a broader concept of communication and attached it to the model of media modalities. To circumscribe this concept, one must first note that narration does not exist by itself; it happens when we communicate with each other. Consequently, narratives and stories are not something that we find floating around independently, but something that minds communicate. Therefore, one can and should compare narration with other forms of communication. Narration is a communicative form that is specific and important enough to deserve special attention. At the same time, it is only a variation of, and in practice sometimes not at all clearly delimited from, producing virtual spheres in general.

Nevertheless, I have proposed to define a narrative as a virtual sphere, emerging in communication, containing events that are temporally related to each other in a meaningful way. Thus, the core of a narrative is exactly this: represented events that are temporally interrelated in a meaningful way. As this core consists of several elements, one could also describe it as a scaffold. I have also suggested that a whole virtual sphere containing such a core and normally also other media characteristics should be called a narrative and that the scaffolding core should be called a story. Thus, one can simply describe narration as the communication of narratives.

It follows from this that what one perceives to be the same story may be realised in dissimilar settings in different narratives. What one recognises as the same story can be narrated in different ways. For anyone acquainted with areas such as literature and film narratology, this conclusion does not come as a surprise. However, scholars have debated the nature of the sameness of stories. For transmedial research, the crucial point is that it is possible, common and often useful to perceive that vital core constituents of some narratives—certain events being temporally related in certain ways—are more or less similar to vital core constituents of other narratives, possibly represented by other media types. This is scrutinised particularly by the authors in Volume 2, Part I of Beyond Media Borders, “Media Transformation” (Simonson 2020; Newell 2020; Vieira 2020; Domingos and Cardoso 2020; Bruhn 2020; Lutas 2020; Eide and Schubert 2020).

Given these conditions for transmedial narration, one must also emphasise that stories may either be construed for the first time by the perceivers of media products (because of salient structures emerging as the narratives develop in the mind) or recognised (from earlier encounters with narratives or events in the world). In other words, the story may be based either mainly on intracommunicational objects arising in the virtual sphere, or on extracommunicational objects in the form of already known stories or perceived events. In any case, stories have no autonomous existence despite what one might be led to believe by certain narratological discussions. Stories are always results of some sort of interpretation performed by certain people in particular communicative circumstances—never objective existences, but possibly intersubjectively construed (cf. Thon 2016).

The theoretical distinction between a complete narrative and its scaffolding core story is essential for understanding transmedial narration: stories are embedded in narratives and, to a certain extent, may be realised by dissimilar media. However, the surrounding narratives and the representing media products are often conflated in narrative theory and sometimes termed discourse (but not by Seymour Chatman in 1978: 23–24). Yet, there are not only two levels here—called, for instance, story and discourse—but rather three (cf. Genette 1980 [1972] and Bal 2009, who also suggested three-layer distinctions, although quite different from mine). I suggest that (1) a media product with particular basic media traits and other formative qualities provides certain sensory configurations that are perceived by someone; these sensory configurations come to represent (2) media characteristics forming a complete narrative with all its many specific details and features; furthermore, the perceiver comprehends that this narrative surrounds (3) a scaffolding core, the story, consisting of represented events that are temporally interrelated in a meaningful way.

Based on these definitions and this distinction, one can more clearly describe why stories and parts of their surroundings in the whole narrative may often be realised fairly completely by several kinds of media. It is because many media types have the capacity, to some extent, to represent events, temporal relationships, meaningful relationships and an abundance of other media characteristics. The story is normally only one of several transmedial media characteristics in narratives. The complete narrative of a certain media product can include a multitude of different media characteristics that may be more or less transmedial. However, as a rule, a story, consisting of the essential temporal structure of a narrative, is more transmedial than the complete narrative, although probably never wholly transmedial.

Considering that narration involves comprehension of represented events that are temporally interrelated in a meaningful way, it is clearly a cognitive process that depends on collateral experience and, more specifically, often on cognitive schemata. Sensing interrelations to be meaningful is at least partially a question of being able to relate them to things with which one is already familiar. Narratives are not objectively construed but depend on the mind’s inclination to form gestalts.

However, it is ultimately the more inherent factors of media products that trigger the mind-work of communication and, to some extent, determine how and to what degree various media forms may realise narration. It is clear that the same perceiving mind, harbouring a certain set of knowledge, experiences, values, memories and schemata, will interpret different media products in very different ways even if it perceives them in comparable circumstances. This is obviously because the media products are unalike in various ways and because the divergences are highly relevant. In order to understand how dissimilar media types can communicate narratives, one must scrutinise the fundamental similarities and differences among media types and the extent to which these differences in modality modes matter (Elleström 2019: 53–58, 2020a).

Although differences in modality modes are largely responsible for differences in the kind and degree of narration in various media forms, examining them does not offer a convenient shortcut to full understanding. Thinking in terms of media modalities is not a quick fix. The basic presemiotic and semiotic traits are always embedded in complex surroundings, which means that they generally need to be analysed in their interactions with each other and with additional factors. Nevertheless, modelling narration in terms of media modalities facilitates a methodical approach to the issue of transmediality. Having different material, spatiotemporal and sensorial modes implies having partly dissimilar capacities for narration and, similarly, the use of different sign types has consequences for narration.

The material modality is perhaps the least crucial category of media traits for determining narrative capacities. Solid media products such as written verbal texts, as well as non-solid media products such as spoken verbal texts, clearly have very high narrative capacity, as decades of intense research have demonstrated. Furthermore, organic media products such as moving human bodies, as well as inorganic media products such as dolls in motion, may form complex narratives.

The spatiotemporal modality is much more critical for narration because the scaffolding core of narratives consists of represented events that are temporally interrelated. The key issue then becomes the extent to which the representation of a temporal object requires a representamen with certain spatiotemporal qualities. There is not much to indicate that media products should have specific spatial traits in order to be able to narrate successfully. Moving human bodies and dolls in motion are three-dimensional and suitable for narration. Written verbal texts are two-dimensional but also have the potential to be superbly narrative media products. Spoken verbal texts emanating from a singular source are spatial only in a limited way, but are still well suited for narration.

However, there are some relevant differences between temporal and static media products. Moving images that are inherently temporal may effortlessly represent sequences of events and form elaborate narratives (cf. the discussions of narration in silent film, sound film, opera, theatre and performance in Lavender 2020; Lutas 2020; Simonson 2020; Tseng 2020; Virginás 2020). This is not to say that one must understand the represented events to be interrelated in precise accordance with the temporal unfolding of the media product. In contrast, still images are, by definition, static and therefore incapable of representing events that one inescapably perceives in a certain temporal order. However, this is not the same as being incapable of representing temporally interrelated events. It only means that the scope of possibly represented events is reduced (assuming that the size of the still image is not huge) and that the perception of possibly interrelated represented events is not strongly directed by the physical interface of the media product (cf. the analysis of narration in architecture, a non-temporal and very much iconic qualified media type, in Vieira 2020).

Nevertheless, the difference in spatiotemporal modes reduces the narrative potentiality of still images compared to moving images—at least if one considers media products constituted by single still images. However, it is possible to construe media products consisting of a whole set of still images. In itself, this does not enhance the narrative capacity, but it does open the way for the use of a special kind of symbolic element, namely the convention of sequential decoding. Perceivers who have learnt to process parts of certain kinds of static media products in a regulated order may distinguish represented events in temporal sequences that are as stable as those produced by physically temporal media products (as demonstrated in the narratological analyses of illustrations, comics and graphic novels by Domingos and Cardoso 2020 and Newell 2020).

This line of reasoning is also applicable to the difference between spoken verbal texts and written verbal texts: the distinction between temporal and static media products cuts through both images and verbal texts. Spoken verbal texts are temporal because the sensory configurations of such media products change constantly. Written verbal texts are static because the sensory configurations of such media products remain the same from one moment to the next (unless, of course, one perceives the text while it is being written or as part of a temporal, visual media product such as a film). This means that spoken verbal texts, just like moving images—given that one allows for a certain volume of temporal extension—readily represent sequences of events and may therefore produce intricate narratives (as seen in the analyses of audiobooks and radio speech, respectively, by Have and Pedersen 2020 and Simonson 2020). In contrast, written verbal texts are normally static and, if we think of written verbal texts in rough analogy with solitary still images—namely as consisting of single entities such as one letter or one word—written verbal texts are equally handicapped when it comes to representing events that are inevitably perceived in a certain temporal order.

In the case of language, however, the convention of sequential decoding is so strong that one normally understands written verbal texts as consisting of large sets of subordinate symbols that are bound to be decoded in a highly regular manner. As in the case of sequential decoding of still images, this can lead to the discernment of represented events that are temporally interrelated in a manner that is as stable as those formed by physically temporal media products (cf. the discussions of narration in non-temporal, language-based media types such as various forms of written literature [Have and Pedersen 2020; Lutas 2020; Newell 2020; Vieira 2020] and popular scientific articles [Bruhn 2020]). This is why so many researchers—misleadingly, I would argue—have claimed that written verbal texts are temporal. Such a conception obscures the difference among the physical appearance of representamens (the traits of media products), the process of perceiving the physical appearance of representamens, and the virtual appearance of represented objects (the traits of virtual spheres).

Thus, the fact that one perceives all kinds of media in time has some bearing on their capacity to represent temporally interrelated events: conventionalised orders of decoding may strongly enhance the narrative capacity of static media types. However, this does not erase the substantial differences between inherently temporal and static media.

The sensorial modality also plays a role for the narrative capacity of media products. This is mostly because the senses (understood here as the external senses) are not developed cognitively to the same degree. Sight and hearing are our two most advanced senses, in that they are strongly connected to complex cognitive functions such as knowledge, attention, memory and reasoning. This means that sight and hearing are both well suited for narration. Indeed, all contributions to Beyond Media Borders that engage in narration deal almost exclusively with visual and auditory media types.

However, this does not exclude the other senses in principle. One may use the faculty of touch for reading braille, for instance, or for sensing the forms of reliefs and three-dimensional figures forming narratives. It is also fully possible to consider successions of interpersonal touches that form casual, narrative media products. Children playing and adults having sex may well communicate elementary narratives by way of sequences of touches that are performed and located differently.

I presume that it would also be possible, in principle, to construe language systems mediated by taste or smell. In practice, however, they would probably be inefficient, as speedy decoding of symbols requires quickly performed sensory discriminations. However, one can use taste and smell to create at least rudimentary narratives. A well-planned meal with several courses served in a certain order may be construed as narrative, to the extent that tastes and taste combinations may be developed, changed and contrasted in such a manner as to give a sense of meaningfully interrelated events. A series of scents may be presented in such a way as to represent, say, a journey from the city through the woods and to the sea, including encounters with people and animals with smells that reveal certain activities.

The three main modes of the semiotic modality are iconicity (based on similarity), indexicality (based on contiguity) and symbolicity (based on habits). All of these semiotic modes are immensely important for the realisation of narration. The majority of the more acknowledged basic media types, those that are commonly reasonably well defined and have accepted names in ordinary language, are dominated by iconicity or symbolicity. One can clearly characterise most of the recent examples of potentially narrative media types by a semiotic hallmark. Verbal texts, whether they are visual, auditory or tactile, rely heavily—although certainly not exclusively—on symbolicity: the conventional meaning of letters, sounds, words and so forth. Moving and still images, whether they are visual, auditory or tactile, are understood to signify primarily through iconicity, based on perceived similarities between representamens and objects. Although series of touches, tastes and scents are hardly acknowledged as media types in common parlance, one could make a case for recognising them as basic media types dominated by indexicality: real connections between the perceived sensory configurations and what they stand for.

For the sake of clarity, I have tried to isolate the possible contributions of various media modes to narration. By highlighting modal differences, it is possible to discern media traits that contribute to narration in different degrees. However, media products are normally more or less multimodal—in very different ways—which makes the above generalisations fuzzier, the differences among media types more subtle and the issue of transmedial narration more multifaceted. The model of media modalities does not offer a lexicon of transmedial narrative capacities as much as a methodical approach to examining narration in a wealth of dissimilar media products and media types. In each specific media product and media type, the present modes of the modalities add, in profound interaction, to the forming of virtual spheres and possibly narratives. In a certain media product, all of the various presemiotic modes contribute to forming certain sensory configurations: a cluster of physical representamens that together come to represent—iconically, indexically or symbolically—a certain cluster of objects that possibly forms a narrative.

Therefore, I support Karin Kukkonen’s conclusion that “[i]f, with Ryan, we understand narrative as a cognitive construct, different modes in multimodal media work together to provide the reader with clues to fill gaps and formulate hypotheses” (Kukkonen 2011: 40). Importantly, however, I go beyond the rather coarse notion of mode used by Kukkonen and in so-called social semiotics in general: modes understood as text, image, gesture and so forth. For me, multimodality is a more fine-grained concept that can be more precisely circumscribed as four kinds of multimodality: multimateriality, multispatiotemporality, multisensoriality and multisemioticity. As a rule, actual media products and media types have many modes of the same modality. For instance, media products that consist of both organic and non-organic materiality are multimaterial. Media products that are both spatial and temporal are multispatiotemporal. Audiovisual media products are multisensorial. Furthermore, many media are multimodal in several ways simultaneously.

Finally, most media products are multisemiotic to the extent that sign types typically work in collaboration. In an early article advocating the value of applying Peircean semiotics to the study of narratives, Robert Scholes suggested that “we cannot understand verbal narrative unless we are aware of the iconic and indexical dimensions of language” (1981: 205), which is certainly true. Even though symbolic signs are clearly the most salient ones, verbal language does not work solely through symbolicity. In visual language, for instance, elements such as lineation, letter size, letter form and empty spaces may create iconic meaning. In auditory language, iconicity is often produced by certain sound qualities, intonations, rhythms and pauses. By the same token, most media types signify through iconicity, indexicality and symbolicity in combination, although they are typically dominated by certain kinds of sign functions. However, one can find instances of communication and narration characterised by such extreme multimodality that virtually all kinds of modality modes, both presemiotic and semiotic, are included.

4 Language

Having emphasised the multisemioticity of language and noticed that language may be visual as well as auditory, it is time to get into the details of the concept of language itself (Elleström 2020b, forthcoming). How does it fit into the theoretical framework presented in “The modalities of media II: An expanded model for understanding intermedial relations” (Elleström 2020a)? Should the concept be understood as a media type? Not really. In her contribution to Beyond Media Borders, where she investigates a broad range of language forms in the context of language learning, Heather Lotherington accurately notes that “[l]anguage has traditionally been described as a medium: of communication, and of learning”, although “[l]anguage is, in fact, an abstract until it is materialized: mediated physically, in speech and signed conversations, and technologically, in printed documents, social media sites, roadside signs, movies, games, and suchlike” (Lotherington 2020: 219).

If language is an abstraction, then how should it be circumscribed? In line with common practice, I suggest that language might be comprehended as a communicative sign system. Given that a system is something that is highly organised, a communicative sign system must rely strongly on robust habits or, in other words, conventions. In terms of Charles Sanders Peirce’s semiotic vocabulary (Peirce 1932: CP 2.297 [c. 1895]), this reliance suggests that communicative sign systems are symbolic sign systems. Therefore, I define a language as a system of symbolic signs used for communication.

Given such a definition, it is clear that there are several forms of language. Apart from so-called verbal language, humans have developed a plenitude of symbolic sign systems involving a broad variety of signs, such as mathematical and logic symbols, maritime signal flags, declamatory gestures in theatre and even flowers that communicate socially through their kinds, colours and arrangements. Symbolic sign systems may consist of anything from thousands of symbols and very elaborate and detailed conventions to considerably fewer symbols connected more roughly through limited sets of habits. In their contribution to Beyond Media Borders, Domingos and Cardoso (2020) thoroughly analyse the combination of verbal and nonverbal languages in comics, involving both highly structured visual verbal language (involving conventional use of letters, punctuation, grammar, etc.) and less formalised nonverbal language (based on the habits of forming and combining visual, static images).

Verbal language, understood as language involving features such as words and grammar, is based on extensive and very systematic conventions. However, verbal language is not really ‘one’ system, even disregarding the crucial differences between language forms such as Hindi, Spanish and Afrikaans. Some decades ago, Jan Mulder argued in detail for the notion that written and spoken languages constitute separate semiotic systems. Based on the simple but far-reaching observation that what is seen (written words) is different from what is heard (spoken words), Mulder argued that it is “terribly wrong” to see written and spoken languages as “mere variants of the same thing”; they are “entirely different semiotic systems” (Mulder 1994: 43). I think this idea is correct, even though the two different systems of symbols have been developed to allow for advanced mutual transmediation: changing from writing and reading to speaking and listening is often an efficient although definitely not seamless procedure. Hence, in more general discussions, it is fully comprehensible to treat verbal language simply as a common system of symbols (as does Vieira 2020). At other times, however, it is crucial to be more specific about how communication is realised, which requires specifying whether written language (Bruhn 2020; Lutas 2020; Newell 2020), spoken language (Lavender 2020; Simonson 2020) or both (Eide and Schubert 2020; Have and Pedersen 2020) are at stake.

There are actually at least three different verbal systems of symbols: written, spoken and signed verbal language. Sign language interconnects strongly with written and spoken verbal language, although it is clearly a sign system of its own and not merely a variant of the same thing. Each of these three or more systems can be subdivided into even more specific systems, such as written Swedish, spoken English or Chinese sign language.

Although the definition of language as systems of symbolic signs used for communication captures the core of languages, it is naturally not an exhaustive description of how language can be conceptualised. Whereas written verbal language, my primary example here, is certainly always by definition based on systems of symbols, written verbal language may also include other sign types. In accordance with Peirce’s notion of sign types always being mixed (Peirce 1932: CP 2.302 [c. 1895]), there are probably also always at least some elements of iconicity and indexicality in semiosis dominated by symbolicity. Depending on factors such as history and culture, nonprofessionals and scholars may see these other sign types as more or less inherent or alien facets of what they understand to be normal written verbal language. Even though one can give written verbal language a transhistorical and transcultural core definition in terms of symbolic sign systems, it is clear that its realisations vary considerably in relation to time and space.

To exemplify this, I will focus here on the presence of iconicity in written verbal language. Iconic meaning-making is not based on habits or conventions, as symbolic meaning-making is; instead, iconicity is grounded in perceived similarities among sensory perceptions and cognitive structures. In the domain of written verbal language, seen from a Western perspective, this means that symbols such as letters, words and punctuation, as well as spatial arrangements of these, may to some extent produce meaning not only because of learned, habitual connections between the visual impressions and what one takes them to represent, but also because of perceived similarities between what one sees and what the visual impressions are understood to signify. For instance, visual empty spaces between words and sentences suggest semantic spaces or differences. A more blatant example can be provided with a sentence like this: ‘I can see two mOOns.’

Most Western scholars have long been strongly inclined to dismiss iconicity as, at best, only a peripheral part of verbal language, oral as well as written. Since the spread of Ferdinand de Saussure’s linguistic dogma of the arbitrariness of language in Course in General Linguistics a century ago (2011 [1916]), belief in iconicity’s existence in or relevance for language has sometimes been considered naïve and has even been subject to scorn and ridicule.

However, this has not been the case for Eastern scholars. According to a recent article by Ersu Ding, Saussure’s quick dismissal of iconic signs in language as peripheral and linguistically uninteresting has been met with much greater scepticism in China (Ding 2014: 121). As Ding succinctly put it, “his holistic scheme of sign formation simply could not accommodate anything other than a systemic pairing of the signifier and the signified as a result of structural differentiation” (Ding 2014: 125). In China and other Eastern countries, such as Japan, the basic concept of language, including written verbal language, has thus been more inclusive—even though the core of language as a system of symbolic signs used for communication has not been denied, to the best of my knowledge. Therefore, one might venture to say that the concept of written verbal language in the West and the East has been largely, but not entirely, the same. In any case, empirical research from both the West and the East now offers massive support for the claim that iconicity, in varying degrees, is a solid component of verbal languages from all over the world.

What, then, is the relation between the two concepts of language and medium? As I define it, a media product is a material entity—or, more broadly, a physical, intermediate entity—that makes communication among human minds possible (Elleström 2018). Thus, media products are material objects, actions or processes that are indispensable for connecting human beings mentally. Consequently, they must be conceptualised in terms of both physicality and cognitive capacities. In other words, media products have both presemiotic traits and semiotic traits, meaning that they manage to trigger meaning-production in various ways based on their material, spatiotemporal and sensorial traits.

Furthermore, nonprofessionals and scholars tend to group media products into categories. People sometimes pay attention mainly to the most basic features of media products and classify them according to their most salient material, spatiotemporal, sensorial and semiotic properties. For instance, we think in terms of still images (most often understood as tangible, flat, static, visual and iconic media products). Still images are an example of what I call a basic medium (a basic type of media product).

However, such a basic classification is sometimes not enough to capture more specific media properties. We then qualify the definition of the media type that we are after and add criteria that lie beyond the basic material, spatiotemporal, sensorial and semiotic properties; we include all kinds of aspects regarding how the media products are produced, situated, used and evaluated in the world. One may wish to delimit the focus to still images that, for example, are produced with the aid of cameras—photographs. Photographs are an example of what I call a qualified medium (a qualified type of media product).

In contrast to media types, which involve presemiotic and semiotic traits, I comprehend languages somewhat more narrowly as systems of symbolic signs. From these dissimilar definitions, it follows that one cannot simply equate languages with media types. Media are material, and, as Lotherington puts it, language is something abstract “until it is materialized: mediated physically” (Lotherington 2020: 219). However, languages and media types are clearly interconnected. One way of putting it is to say that a language can be a vital semiotic part of a media type. For instance, language is an essential part of email and telephone conversations as well as of theatre and stand-up comedy. Put another way, specifying the definition of a language from an abstract symbolic sign system to the concrete use of a symbolic sign system brings one closer to understanding the language as a media type. Fully concretising how a symbolic sign system is mediated—that is, how it is realised in specific material, spatiotemporal and sensorial ways—makes a symbolic sign system equivalent to a media type, as I conceive it.

This equivalence is actually partly achieved in the distinction between at least three different verbal sign systems mentioned earlier: written, spoken and ‘signed’ verbal language. The notion of written verbal language definitely includes a specific sensorial mode, visuality, and often a spatiotemporal mode: the visual signs being realised on a flat surface. Likewise, the notion of spoken verbal language definitely includes a specific sensorial mode, audibility, which requires materiality capable of transmitting sound waves and temporal extension. The notion of ‘signed’ verbal language in effect fully covers the criteria for being understood as a media type: sign language is mediated by human bodies acting in time and three-dimensional space to produce movements that form visual signs; these signs are clearly part of a verbal symbolic sign system involving strong facets of iconicity and indexicality. Considering that sign language is also designed specifically for face-to-face communication with and among people with hearing impairments, it is a qualified rather than a basic media type. Another fully defined verbal sign system is Braille writing, which is designed for tactile verbal communication with people with visual impairments and is, by the same token, a qualified rather than a basic media type.

However, one can hardly describe ‘normal’ written verbal language (based on vision) as a qualified media type. Instead, I would say that this category includes several closely related basic media types. As mentioned, written verbal language is definitely visual and normally realised on flat surfaces, but it can also occur on rounded and irregular surfaces, and letters and words may be three-dimensional in themselves. Written verbal language is most conveniently mediated through solid and inorganic materiality, but it is fully possible to write in gases and liquids or with the use of organic material such as living bodies. Thus, written verbal language is quite an open category that includes a variety of basic media types, depending on whether one adopts the more exclusive Western or the more inclusive Eastern conception.