1 Automated journalism is not journalism

Journalists are people with pencils and microphones, meeting informants in dark parking lots, drafting ground-breaking reports and discussing them feverishly with their editors—right? No, journalistic actors don’t even have to be human: Reporting on weather (cf. Glahn 1979), finance or sports (cf. Automated Insights 2022), and nowadays also election results (cf. Washington Post 2016) or Covid-19 (cf. Retresco 2020) is increasingly done by automation technology. Additionally, the text generation model GPT‑3 has already tried its hand at writing opinion pieces for The Guardian (2020). Technology is no longer merely a tool for journalists, but has come to generate content of its own.

Research-wise, whereas content-generating automation and AI in journalism seem most relevant for journalism scholars at first glance, they are also a timely topic in the field of human-machine communication (HMC). To this day, HMC has mostly focused on communication in smaller groups, such as the interaction of a voice assistant and a user. However, technology that produces journalistic content which is then distributed to consumers steps into the role of a human journalist, taking part in mass communication not with a single human communication partner, but with a large public audience. This makes content-generating technology a mass communicator that is highly relevant to investigate: To this day, we still know little about its ontology, its communicative role, its effects on societal opinion formation, and much more, which could ultimately impact the way in which media and our society are intertwined. Besides, it is also relevant how these new, artificial mass communicators relate to the established, human ones—the journalists in the newsroom—and how this affects journalism and journalistic work. Recently, scholars have begun to address such questions, in HMC as well as in journalism research (cf. Clerwall 2014; Dörr 2016; Guzman 2021; Latar 2015, 2018; Lewis et al. 2019; Thurman et al. 2019; van Dalen 2012). Other forms of automation in other parts of the journalistic process are relevant, too, as they transform the ways in which news is gathered, researched, or distributed (cf. Diakopoulos 2019).

Meanwhile, the growing importance of automation in journalism is contrasted by the fact that research has so far only looked at the topic in a piecemeal fashion. This is particularly evident in the absence of a uniform terminology that is clearly delimited and defined according to scientific standards. Many terms encompass more than just automation, and the ones that focus on it are sometimes misleading, such as “robot journalism”. Better-suited terms such as “automated journalism” mostly describe only text automation instead of a whole journalistic process, and the automation itself is not specified any further, even though we talk about very different technologies which might require finer differentiation. Meanwhile, “automated journalism” could also carry many other meanings, including everything from partial to full automation in research, production, and publication. This is confusing for us as scholars, as we lack the terms and concepts to describe the amount and sophistication of the automation—which ultimately hinders our attempts to expand our theories and knowledge about the similarities and differences between the various technological and human communicators. Furthermore, it impairs our communication about our research with scholars from other fields—or with journalistic practitioners.

However, to solve these problems, it is not sufficient to merely suggest a few new terms. Rather, we must be able to distinguish between terms and concepts for any form and level of automation in journalism, which allows us to differentiate between automation technologies with different properties and to create related terms for each of them. Furthermore, we must specify which part of the journalistic process we actually mean instead of using “journalism” as a general term. This is only possible with an encompassing and differentiated conceptual basis.

For these reasons, I present such a basis, building on Rammert and Schulz-Schaeffer’s (2002) concept of distributed and gradualised action. This allows a new terminology that combines the level of automation and the part of the journalistic process: I suggest calling automated journalistic text generation just that—“automated journalistic text generation”—instead of “automated journalism”, and using “automated journalism” only to describe the full automation of a full journalistic process. I also argue that we need to specify how high the level and agency of automation actually are, instead of lumping together cloze-like template-based texts and highly flexible, variant-rich programs. The necessary terms and concepts are based on a theoretical understanding of automation, of how we can gradualise it in terms of amount, control, coherence, and agency, and of how we need to apply it to specific journalistic tasks rather than to “journalism”.

Apart from specifying terminology, this concept also offers a differentiated approach towards the communicator status and agency of technology by viewing agency as gradualised. Furthermore, through its focus on activities that are distributed between different actors, it gives the community a new tool for investigating the interconnection of different human and machinic communicators in a mass communication process. Both implications advance HMC’s theoretical foundations and enable us to take a new look at both the nature of machine communicators and the nature of their relations with others.

2 Current terminology and conceptualization

Terminological and conceptual problems concern the modifying term combined with journalism as well as the understanding of journalism itself. Whereas most modifying terms incorporate more uses of technology in journalism than just automation, the ones which are focused on automation are often misleading or not specific enough. Meanwhile, “journalism” is added to these modifying terms regardless of whether the subject is actually just text generation. Both problems will be described in the following.

2.1 Computers, robots, and automation

A wide variety of modifying terms is added to “journalism” in the literature to describe the use of advanced technology in journalism. However, not all are equally suitable to describe automation and specifically software-generated content (an overview can be found in Table 1).

Table 1 Modifying terms used with automation in journalism

On the one hand, encompassing terms are used in relation to automation in journalism. Computer-assisted reporting (CAR) refers to journalism-specific technologies which are not accessible to other persons and which are less sophisticated than computational journalism (cf. Thurman et al. 2019, p. 182). In turn, computational journalism originally denoted the use of computers, technology, and social sciences in journalism to strengthen journalism’s “watchdog” abilities by enabling it to search great amounts of data (cf. Hamilton and Turner 2009, p. 2). Later uses of the term are wider, understanding computational journalism as the advanced use of software in all steps of the journalistic process (cf. Thurman et al. 2019, p. 180), sometimes also including the reporting about algorithms and their functionality (cf. Diakopoulos 2019, p. 27). Hybrid journalism has often been conceptualized very widely as an integration process, and even when it is focused on technology, it always “occurs when a technological innovation is critical in changing socio-professional modes of news production […] and journalistic discourses” (Splendore and Brambilla 2021, p. 54).

Generally, scholars understand CAR, computational journalism, and hybrid journalism as significantly wider than just automation, including advanced analysis methods, data mining, or in fact almost every advanced technology, and describe humans as the central journalistic actors who apply technology to support them. Related to automation in journalism, these terms can be a useful lens to describe the innovation of journalistic practices through automation, and to represent the “bigger picture”.

Meanwhile, as Danzon-Chambaud (2021, p. 4) notes in a systematic review, the term “computational journalism” is sometimes also used to refer specifically to “automated journalism”. However, if computational journalism is generally used in an encompassing and human-centered manner, but at times also for the very specific and technology-centered description of automated text generation, confusion can arise for scholars as well as other stakeholders. A shared terminological foundation is central to ensure a common understanding of the central phenomenon.

On the other hand, specific terms exist to describe just automation in journalism. Whereas these allow a closer look at the technology itself, they also have their pitfalls.

Noticeably, technological buzzwords like robot and AI are commonly used. Robot journalism and similar terms occur most frequently in media coverage and early literature on the topic (cf. Beckett 2015; Clerwall 2014; Fanta 2017; Latar 2015; Newman 2016; Peiser 2019; Reichelt 2017; van Dalen 2012). Mostly, there are no explicit definitions of the term. Instead, it is followed by descriptions of automated text generation (cf. Newman 2016, p. 36) or used as another expression for text automation (cf. Clerwall 2014, p. 519; van Dalen 2012, p. 649). Reichelt (2017, p. 16) defines it by explicitly citing Carlson’s definition of automated journalism: “The term denotes algorithmic processes that convert data into narrative news texts with limited to no human intervention beyond the initial programming” (Carlson 2015, p. 416). Carlson, in turn, titled the paper containing this definition “The Robotic Reporter”. Thus, the terms are mostly regarded as synonyms. AI in combination with journalism or writing is less common in research (cf. Tatalovic 2018; Bailer et al. 2022) but occurs frequently when software companies describe their products (e.g., Retresco, Arria NLG, AX Semantics).

However, neither term captures the essence of the matter. “Robot” describes a technology bound to a physical object containing sensors and motors, which automatically carries out physical tasks and could not fulfill its task in the same way without the embodiment (cf. Guizzo 2020; Moravec 2005). Thus, it implies an embodied, technical, sometimes human-like being typing on a computer keyboard; an image Latar uses as a cover illustration (cf. 2015, 2018). What actually happens is nothing of the sort: “Media reports may discuss ‘robots’ in journalism but more precisely, algorithms are used to generate text” (Dörr 2016, p. 703). Thus, at its core, the software is not bound to a physical, technological object. Only a few exceptions to this exist, e.g., a robot by Matsumoto et al. (2007) that can move around a room, identify basic unusual states of its environment via a camera, and create an article based on this research. In this specific application, the embodiment is necessary as an “interface with the real world” (Matsumoto et al. 2007, p. 1235), making the machine an actual robot journalist. AI journalism faces similar problems: although AI has not been clearly defined to this day (cf. Russell and Norvig 2016), the term implies more intelligence than current template-based algorithms possess. AI in the sense of machine learning mostly occurs in grammar and language modifications (cf. AX Semantics 2022a) or in exemplary but not widely used experiments like the Guardian essay (cf. GPT‑3 2020). Thus, both terms have a buzzword characteristic—marketable and timely, but not precise.

Algorithmic journalism, a more modest term, means

“[…] the (semi)-automated process of NLG [natural language generation] by the selection of electronic data from private or public databases (input), the assignment of relevance of pre-selected or non-selected data characteristics, the processing and structuring of the relevant datasets to a semantic structure (throughput), and the publishing of the final text on an online or offline platform with a certain reach (output). It is produced inside or outside an editorial office or environment along professional journalistic guidelines and values […] and thus establishes a public sphere.” (Dörr 2016, p. 702)

“Algorithmic” seems more appropriate than the terms cited before, as algorithms are used throughout the whole process, the term does not imply an embodiment, and the definition Dörr proposes is tailored towards software doing journalistic work, being neither too wide nor too narrow. However, Dörr directly addresses automation, and—along the common understanding of automation “as a device or system that accomplishes (partially or fully) a function that was previously, or conceivably could be, carried out (partially or fully) by a human operator” (Parasuraman et al. 2000, p. 287)—his whole understanding of algorithmic journalism points in the same direction. Other terms like computer-generated (cf. Graefe et al. 2018) are used in similar ways, indicating automation as the centrally interesting term.
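Dörr’s input–throughput–output understanding of algorithmic journalism can be illustrated with a minimal, hypothetical data-to-text sketch. All function names and the example data are assumptions for illustration, not part of any real NLG product:

```python
# Minimal, hypothetical data-to-text pipeline following Dörr's (2016)
# input -> throughput -> output stages. All names are illustrative.

def select_input(database):
    """Input: select relevant records from a (here: in-memory) database."""
    return [row for row in database if row["relevant"]]

def structure_throughput(rows):
    """Throughput: order the data and map it onto a semantic structure."""
    rows = sorted(rows, key=lambda r: r["score"], reverse=True)
    return {"top_team": rows[0]["team"], "top_score": rows[0]["score"]}

def publish_output(semantic):
    """Output: render and 'publish' (here: return) the final text."""
    return f"{semantic['top_team']} lead the table with {semantic['top_score']} points."

database = [
    {"team": "FC Example", "score": 42, "relevant": True},
    {"team": "SV Sample", "score": 39, "relevant": True},
    {"team": "Reserves", "score": 12, "relevant": False},
]
print(publish_output(structure_throughput(select_input(database))))
```

Even a toy like this makes visible why Dörr’s definition points towards automation: each stage replaces a function a human editor could conceivably perform.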

Thus, the term automated journalism could be used to unify the manifold terminology and describe the essence of the matter. However, “automation” must be specified in more detail. This is mainly because “automation is not all or none, but can vary across a continuum of levels, from the lowest level of fully manual performance to the highest level of full automation” (Parasuraman et al. 2000, p. 287). Furthermore, automation in journalism needs a normative differentiation of its abilities, even within the same level of automation, which can already be seen in the different possible functionalities of NLG (cf. Reiter 2016). However, to this day, “automation” is usually just applied as a modifying term to journalism without further differentiation, providing no insight into what is automated to what extent. Therefore, we must distinguish between different levels of automation as well as between different levels of sophistication within single automation levels, making an underlying concept of automation necessary.

2.2 Problems with classic automation taxonomies

As Dörr’s definition shows, it is sometimes acknowledged that different levels of automation in journalism exist, but they have only been specified once to this day (cf. Bailer et al. 2022). However, there are numerous examples for other technologies like automated driving (cf. Flemisch et al. 2011; NHTSA 2022; SAE International 2021), aviation (cf. Parasuraman et al. 2000), automation in manufacturing (cf. Frohm et al. 2008), and more. Basically, the automation of physical tasks (mechanization) and of cognitive tasks (computerization) can be distinguished (cf. Frohm et al. 2008, p. 7; Williams 1999, p. 159). Automation in journalism falls almost exclusively under computerization, as no physical activity is involved; mostly, data and information are processed and transformed.

Automation levels are usually delimited by the number of tasks that technology and human perform as well as the level of human involvement or control (cf. Flemisch et al. 2011, p. 271; Frohm et al. 2008, pp. 8–9), which often involves the coherence of the automated tasks. With only a few exceptions, taxonomies start with a level 0, describing “no automation”, and end with “full automation”, where the human has no tasks and no control anymore. In between, a varying number of levels describes different amounts of task allocation between human and technology (cf. Frohm et al. 2008, pp. 17–18; Vagia et al. 2016). Especially for mechanization, this approach appears to be the most prominent (cf. Frohm et al. 2008, p. 17). For cognitive tasks, Frohm et al. noted that Parasuraman et al. (2000) extended this level system, basing their concept on a simplified model of information processing in the human mind. They distinguished between four stages which can be automated independently and on different levels but rely on the completion of the preceding stages: sensory processing (= information acquisition in automation), perception and working memory (= information analysis), decision making (= decision selection), and response selection (= action implementation).
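The idea that each of Parasuraman et al.’s four stages can carry its own, independent level of automation can be sketched as a simple per-stage profile. The 0–10 scale and the example profile are illustrative assumptions, not taken from the original paper:

```python
# Sketch of Parasuraman et al.'s (2000) four sequential stages, each
# automatable independently. The 0-10 scale is an illustrative assumption.

STAGES = [
    "information acquisition",   # sensory processing
    "information analysis",      # perception / working memory
    "decision selection",        # decision making
    "action implementation",     # response selection
]

def describe(levels):
    """Map one automation level (0 = manual, 10 = full) onto each stage."""
    assert len(levels) == len(STAGES), "one level per stage"
    return {stage: lvl for stage, lvl in zip(STAGES, levels)}

# E.g., a hypothetical system that gathers and analyses data fully
# automatically but leaves decisions and publication largely to a human:
profile = describe([10, 10, 2, 0])
```

The point of the sketch is that a single scalar “level of automation” cannot express such a profile, which is exactly the differentiation the taxonomies discussed below tend to collapse.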

Even the taxonomies that include computerization show issues that make them difficult to apply to journalism. First, several dimensions are often mixed together within each level. In Bailer et al.’s (2022) levels of automation in journalism as well as in the SAE International (2021) taxonomy of automated driving, the number of automated tasks increases with each level, the amount of human control decreases, the tasks taken over by the software become more interrelated, and the complexity of decisions and tasks increases. However, quantity, complexity, and control are not necessarily related when human journalists and automation collaborate in the newsroom. The control of a human journalist can be small if a program combines sentence components and publishes the result automatically, performing the complete production and publication. But if the same program offers the option to stop it and override its results, human control rises while amount and complexity stay the same.

Furthermore, some taxonomies refer to individual tasks, and no information is given on how complete processes can be described (cf. Sheridan and Verplank 1978). Other taxonomies cover processes (such as driving), but give little information on single tasks and their interrelation (cf. NHTSA 2022). However, since journalism usually consists of many tasks that build on each other (at least research, production, and publication, see Sect. 2.3), the description of tasks and their interrelations is useful and necessary—especially when one software takes over several steps. Other taxonomies, such as Bailer et al.’s (2022), try to cover both at once by equating the rising interrelation of tasks with rising levels of automation, which, again, hinders adequate differentiation between the different dimensions related to automation. An exception to this critique is the approach of Parasuraman et al. (2000, see above), who derived four sequential steps from human perception, but note that it might be useful to represent them as a cycle rather than a linear process. Thus, the information acquisition in one task can be dependent on the results (e.g., the decisions made or the information collected) of a previous task or several parallel previous tasks in a process (Fig. 1).

Fig. 1 Circular process of information processing. (The figure shows the cycle of information processing according to Parasuraman et al. (2000), including vertical arrows that stand for the independent level of automation of each stage)

Last, the existing approaches exclude the description of complexity in task fulfillment. An example of complexity differences can be found in Reiter (2016), who describes five “levels of sophistication” in NLG, ranging from a “fill-in-the-blank system” to the dynamic creation of whole documents without pre-written templates or even text structures. All levels result in texts based on data, but the latter form represents a higher complexity, since the degrees of freedom are higher, circumstances and alternatives are considered, and a goal is pursued independently. As this shows, complexity can increase within the same amount and coherence of fulfilled tasks, requiring a normative description. Such differences are not explicitly addressed in automation taxonomies. However, Hancock (2019, p. 482) conceptualizes autonomy (more indeterminate machinic action) as a successor to automation, thus indicating a continuum of different sophistication levels within full automation.
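The contrast between Reiter’s lowest level and a somewhat higher one can be sketched on the same data. Wording, thresholds, and the example match are illustrative assumptions, not taken from Reiter:

```python
# Two of Reiter's (2016) "levels of sophistication" on the same data:
# a fixed fill-in-the-blank template versus a template that chooses
# between data-dependent variants. All wording is illustrative.

game = {"home": "FC Example", "away": "SV Sample",
        "home_goals": 4, "away_goals": 0}

def fill_in_the_blank(g):
    """Lowest level: one fixed cloze template, no alternatives."""
    return f"{g['home']} played {g['away']} {g['home_goals']}:{g['away_goals']}."

def variant_template(g):
    """Higher level: the program selects between alternatives based on the data."""
    diff = g["home_goals"] - g["away_goals"]
    if diff >= 3:
        verb = "thrashed"
    elif diff > 0:
        verb = "beat"
    elif diff == 0:
        verb = "drew with"
    else:
        verb = "lost to"
    return f"{g['home']} {verb} {g['away']} {g['home_goals']}:{g['away_goals']}."
```

Both functions fully automate the writing task, yet the second has more degrees of freedom, illustrating why complexity needs to be described separately from the amount of automation.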

These differences mean that classic taxonomies cannot be easily adapted to journalism. Thus, it is necessary to find a new approach towards “automation”.

2.3 What is journalism, anyway?

The second terminological and conceptual problem concerns the concept of journalism in existing “automated journalism” definitions. In contrast to the modifying term that describes the technical aspect, the “journalism” part has received less attention and critique in the literature. However, given the shallow understanding of journalism displayed in these definitions, such critique is necessary.

The views of journalism in scholarship vary greatly; it can at least be seen “as a profession, as an institution, as a text, as people, and as a set of practices” (Zelizer 2004, p. 32). However, journalistic self-images (cf. DJV 2017; SPJ 2014) as well as scientific understandings (cf. Kovach and Rosenstiel 2021, p. 7; Shapiro 2014, p. 561) frequently include the central normative goal of providing the public with relevant information to enable their informed participation in democratic processes (cf. DJV 2017, p. 4) or freedom and self-governance (cf. Kovach and Rosenstiel 2021, p. 7). This indicates three activities, each of which can consist of one or several tasks of varying complexity. First, the information must be known to the journalist—otherwise, it could not be transmitted. The process of obtaining information will be referred to as “research” or “newsgathering” here. Second, the information must be deemed relevant and subsequently transformed into content suitable for the respective medium, which will be called “(content) production”. Editorial changes are also included in this step. Third, the produced content has to be published in an appropriate form. These three steps of research, production, and publishing are recognized in the literature, sometimes split up into more precise parts (Domingo et al. 2008, p. 333; cf. Marconi 2020, p. 6; Thurman 2019, p. 180), and are essential for journalism: Information which has not been researched can be neither processed nor published. If information is not selected and transformed into content, it cannot be published. Without publishing, information will not be intentionally transmitted to an audience, making it debatable (cf. Primo and Zago 2015, p. 48) whether the un-published product can be considered journalism.

What is discussed as “automated journalism”, however, is mainly concerned with content (specifically, text) production. This can be seen not only in Carlson’s original definition, which is exclusively focused on text generation; moreover, most studies using the term “automated journalism” research aspects connected to the text (e.g., its perception by readers). The term “journalism”, as it is used there, reduces the field of journalistic work to mere copywriting, perhaps based on some data analysis and followed by publication (which is neither specified nor even addressed in most definitions), and is therefore problematic. This might partially be due to the current state of automation technology actually used in journalism, which indeed mainly consists of text production—which, then, is the most obvious technology to investigate. Still, this neglects not only other media forms such as automated videos, but also other essential steps of the journalistic process. This critique has been addressed by Danzon-Chambaud (2021, pp. 4–5) and Wu et al.: The term’s meaning could also include “anything from the machine aggregating and funnelling of content, to data scraping and auto-publication of stories” (Wu et al. 2019, p. 1453) as it is defined so loosely.

There are two approximations to a solution for this problem. On one side, researchers use narrower terms to describe the focus on text generation, such as Clerwall (2014), who calls the results that the software produces “automated content” and “software-generated content”, respectively (p. 520). Interestingly, Carlson (2015), who coined the widely cited definition of automated journalism, uses the phrase “automated news content creation” in the same article.

On the other side, Dörr’s definition as cited above encompasses the whole journalistic process and shows that being more inclusive is another possible solution. However, it has limitations, too. First, it appears to be based on the capabilities of technologies that existed when Dörr published the paper in 2016, leaving no room for possible future developments, e.g., other research methods or the software’s creation of its own databases. Second, due to Dörr’s focus on NLG as a central technology in automation in journalism, it is exclusively focused on journalism in text form. However, automation is also used for the creation of short journalistic videos from existing text (cf. Newman 2016, p. 19) or audio articles (cf. Amazon Web Services 2021), and it is reasonable to assume that there will be more advancement in such fields. As Dörr explicitly leaves these aspects out of his definition, this leads to the same critique as above: Excluding parts of the journalistic process or media used for journalistic publishing is too restraining for “journalism”.

Furthermore, current terminology is focused only on data-based content (mostly text) generation. However, this limits automated content production to factual reporting, whereas journalism also includes opinion pieces. The language generation model GPT‑3 has already been used to create such an opinion piece on the question of “robots’” danger for journalism (cf. GPT‑3 2020), based not on the analysis of datasets but merely on a general topic and the software’s machine-learning-based text generation. Even though the final text was edited and its paragraphs were hand-picked from eight different results that the software produced (cf. GPT‑3 2020), it demonstrated the possible future use of such technology. Thus, methods other than data-to-text must be considered in the terminology and definitions.

In order to conduct more specific research, to discuss and compare results appropriately and to generally obtain a clearly differentiated scientific terminology, two main points must be clarified. First, “automation” has to be considered: What is automated, what is semi-automated, and are there different levels of complexity possible for the same level of automation? Second, the focus of the term “journalism” needs to be widened, allowing the inclusion of more journalistic forms and all parts of the journalistic process into the definition.

3 The concept of gradualised action within socio-technical constellations of distributed action

To provide a theoretical base for automation in journalism and a more refined approach towards differentiated definitions, a sociological perspective will be adopted, as classic automation taxonomies offer too little differentiation in various important aspects and can be applied to journalism and its individual parts only with difficulty. Thus, as a theoretical basis, the concept of gradualised action within socio-technical constellations of distributed action by Rammert and Schulz-Schaeffer (2002) will be used.

Fundamentally, the concept is directed at the question of technology’s agency and ability to act. The authors believe that both are applicable to technology, not only to humans, which makes the approach useful in an HMC context where humans and machines are both understood as communicators, or communicative actors, and analyzed for their ontology and their communicative interaction. To explain their position, the authors argue that humans are also not always “acting” (in the sense of autonomy and independence), which can be seen in unconscious, repetitive, or reactive behavioral mechanisms. On the other hand, machines and technologies are not just passive objects, as they cause intended effects and become increasingly (inter-)active, especially via the developments in AI and agent-oriented programming. Additionally, the authors understand the social meaning of actions to result from their reception and interpretation, not from the action itself. Thus, Rammert and Schulz-Schaeffer argue that, fundamentally, humans and non-humans should equally be able to be seen as actors in a socio-technical context. In this respect, the approach is related to actor-network theory: ANT conceptualizes everything from ideas to machines to humans as part of a network of relationships, meaning that everything that causes a difference in the natural flow of events can be seen as an actor. Therefore, ANT uses a symmetric description language to create conceptual equality between human and non-human actors (cf. Latour 2005).

However, while Rammert and Schulz-Schaeffer also strive for symmetry, they distance themselves from ANT, arguing that it is problematic not only to pre-assume ontological differences between different actors but also to pre-assume their equality (as ANT does through its use of a weak concept of “action”, see below), as this would hinder the discovery of differences. Thus, Rammert and Schulz-Schaeffer “frame the theoretical categories in such a way that the question of the similarities and differences of interest between human and technical activities is not answered by conceptual pre-decisions, but can be posed as an open empirical question” (2002, p. 39, translated by author). To achieve this, they analyzed existing approaches towards technology’s agency, identified three fundamental differences between them, and developed a concept by bridging them. This concept is not only helpful for understanding technological agency but can also give insights into automation, as it breaks up agency into different dimensions and levels which are transferrable into an automation taxonomy.

3.1 Descriptive versus normative concepts: distributed action

The first distinction is made between agency as a descriptive instance or as a normative constitution. This means that some concepts observe agency at the object level, for instance by understanding the attribution of agency by users or the existing properties of technologies as an indication of existing agency. In contrast, other concepts generate agency themselves, e.g., by using conceptual and research strategies that force some form of agency to be attributed to a technology in the absence of alternatives.

To resolve this difference, the authors present their concept of distributed action, consisting of two dimensions. The first dimension contains their fundamental assumption that individual activities combine to form a stream of action in which these activities are not sharply delineated but relate to one another. Accordingly, an action is not singular, but is distributed among an “overall complex of activities” (Rammert and Schulz-Schaeffer 2002, p. 42; translated by the author). The first dimension of distributed action therefore displays the foundation of Rammert and Schulz-Schaeffer’s concept: breaking down actions into activities that are interconnected, thereby allowing researchers to look at single activities, their execution, and their entanglement with other activities. In doing so, the authors understand actions in the simplest sense as creating a difference in contrast to conforming to existing structures.

In the second dimension, the authors state that “actions in their execution are distributed among different, precisely human and non-human instances” (Rammert and Schulz-Schaeffer 2002, p. 42; translated by the author). These different instances—actors—can mutually influence each other and only their interaction constitutes the final action. The conceptual understanding of “action” and its decomposition into individually observable, but closely interconnected sub-areas makes it possible to grasp the various actors, the type and quantity of activities they undertake, and the connections existing between the activities and actors. This solution makes it possible to describe both empirically and normatively what degree of agency exists for each actor and for each activity. The empirical description is thereby covered by the description of the different activities and their “distribution among human and non-human instances” (Rammert and Schulz-Schaeffer 2002, p. 43; translated by author). The authors further develop a possibility for the normative description in their resolution of the second difference.

3.2 All technologies versus advanced technologies: gradualised action

The second distinction is whether a concept can fundamentally understand any technology as an actor or only certain advanced technologies. The first position can be seen in ANT: Latour (2000) even understands the “Berlin key” as an actor because the key’s shape, which enforces two distinct modes of operating doors (open only or close only), evokes a daily routine generated by the opened or closed doors that changes the behavior of Berlin citizens. In contrast, the more advanced technologies addressed by other concepts would not include keys, but developments like machine learning. Rammert and Schulz-Schaeffer criticize both approaches: Theories like ANT seem too undifferentiated, as they ascribe the same kind of agency to a key as to a human making well-reasoned decisions and taking appropriate action. On the other hand, only viewing someone or something as an actor when they actively and intentionally pursue a certain goal or display some kind of intelligence seems too exclusive, as, e.g., humans could not be considered actors in moments when they react with an automatism.

Rammert and Schulz-Schaeffer resolve this difference between concepts and their respective weaknesses by proposing a gradualised concept of action. In essence, they divide agency into three levels of ascending complexity (see Table 2 for an overview), which also differ regarding the actor’s capability to act effectively, regulatively, and intentionally (cf. Schulz-Schaeffer and Rammert 2019, p. 46).

Table 2 Dimensions of gradualised action

The lowest level—the level of transformative efficacy—is characterized by a weak concept of action, comparable to the one in ANT: Here, any transformative effect causing a deviation from the natural flow of events is understood as action. This includes unconscious routine actions of humans as well as the strict following of given routines by machines or other technologies, which does not allow any alternatives. Even the physical properties of a key, as described above, can lead to agency being attributed to it. Changes in the environment that would require adaptation or modification of the action scheme do not cause adaptation at this level. Both the desired effect and one regulative way of achieving it are present in objectified form. Further differentiation can be made within this level in terms of how permanent the change brought about by the technology is.

The level of ability-to-act-differently, on the other hand, is characterized by the choice between alternatives, i.e., the ability to register changed circumstances and to modify one’s behavior accordingly in order to achieve a certain goal. In a further development of the concept (cf. Schulz-Schaeffer and Rammert 2019), the authors additionally differentiate within this level, emphasizing that both the alternatives for action and the procedure for choosing between these alternatives can be either pre-defined or freely selectable. At the lowest level within this dimension are thus technologies for which both the alternatives and the rules are fixed in the programming; this also includes so-called if-else statements in program code. At the highest level, both factors are determined by the technology depending on the goal, which can be the case, for example, with forms of machine learning. In conclusion, actors on this level not only possess the ability to cause an effect which can differ depending on the situation, but also various ways to act regulatively, with higher degrees of freedom.

Finally, the level of intentional action states that actions are not only aligned with specific, changeable circumstances or goals, but that these goals are inherent in the technology or are at least interpreted as such. All three dimensions of action (effective, regulative, and intentional) are present here, making intentionality and reflexivity central aspects of this level. Further differentiation can be made within this definition insofar as intentionality can relate to simple or complex goals and can additionally be either ascribed or actually present. The latter distinction is based on the fact that intentions to act are often derived from culturally consolidated “typical” actions in a given situation and can therefore also be attributed to observed actions in reverse. Thus, the intention does not have to be actually present, or at least not consciously present, in order to be attributed to the action based on its embeddedness in a cultural and social context. Rammert and Schulz-Schaeffer speak of interpretative patterns from the social stock of knowledge in the form of culturally consolidated situation definitions. In a further step, however, the choice of the action goal can actually be made by the technology itself, e.g., on the basis of the available knowledge (as in expert systems). This intention would then be not culturally but technically objectified.

Agency can be normatively described at all three levels. However, by understanding “action” not as a single, firmly defined concept, but as gradable, the agency of specially shaped keys and artificial intelligences can be described symmetrically to the human one without either reducing all technologies and humans to a weak concept of agency—thus not being able to identify differences—or explicitly excluding many existing technologies—and also human routine activities—from the analysis of agency.

3.3 Technological agency as ascription or as existing feature

Lastly, Rammert and Schulz-Schaeffer make a distinction according to whether the agency of technology involves attribution or the description of objectively observable properties. This distinction ultimately boils down to “a difference regarding the degree of conventionalization of patterns of action attribution to technology (or the degree of deconstruction of such conventions by the scientific observer)” (Rammert and Schulz-Schaeffer 2002, p. 56; translated by the author). That is, the issue is whether the effects that technologies evoke are considered and described as action as a matter of course, or whether there continues to be a debate about whether and to what extent the identification of action constitutes an attribution process.

Rammert and Schulz-Schaeffer’s resolution of this difference consists in a concretization of their understanding of what constitutes action. They take the position, also advanced by Luhmann (1984), that action and the quality of action are constituted through attribution. Accordingly, an action is an action in a social context primarily when the social interaction partner perceives it as such. Perception and attribution are in turn based to a large extent on social patterns of interpretation, which can be applied to human as well as to non-human actors. Thus, an ascription of action still takes place even if action could also be described by empirically observable properties. In this case, the action ascription is simply more objectified, meaning that ascribing an action based on existing technological features has a higher grade of conventionalization than ascribing an action without this basis. In this respect, both positions can be united in that technical action is regarded as ascription of action patterns on every level, but at the same time critically questioned as to “which techniques are defined and treated as (co-)actors in which action contexts and under which social conditions, and to what extent this way of seeing and acting prevails and with what consequences” (Rammert and Schulz-Schaeffer 2002, p. 56; translated by the author), especially in cases where the action ascription is not yet very objectified. This is also necessary to make the research and discussion about technological agency accessible to the public, which generally relies on a socially objectified understanding of action, agency, and actor—on the one hand to explain why a certain technology could be ascribed agency, on the other hand to prevent dystopian misconceptions about the abilities of existing technologies.

4 Applying the concept of distributed and gradualised action to automation in journalism

For several reasons, the theoretical concept described above appears suitable for describing automation in journalism, especially in human-machine communication research.

First, the authors’ theoretical perspective on technologies as acting subjects is very similar to the approach of HMC research, which also strives to include technologies in a field of social science originally focused exclusively on humans, as long as they can fulfill a certain role via their functioning (cf. Guzman 2018, pp. 16–17). Rammert and Schulz-Schaeffer’s theoretical explanations are therefore connectable to this branch of research. Second, through its focus on interconnected activities and the actors performing them, the concept not only makes it possible to describe a complete process of action, but also provides an opportunity to single out an individual technical actor (i.e., automated-writing software) and to shed more light on its role, capabilities, and interaction with other actors. Third, the understanding of actions as interconnected activities closely resembles the concept of processes (= actions) and tasks (= activities) in automation research, thus making the concept transferrable to a technical field. Fourth, the concept deviates from the usual one-dimensional taxonomies of automation. These are problematic, as described above, because even at first glance several dimensions can be identified that play a role in automation: amount, control, coherence, and complexity. In each of these dimensions, an actor can be at a different level. Especially in a field like journalism, which involves many different, complex, interrelated tasks performed by different actors, it is therefore necessary to be able to describe technologies in terms of each of these dimensions independently. Rammert and Schulz-Schaeffer’s understanding of different levels of agency provides precisely this differentiation, including the description of complexity that is missing in other taxonomies.

4.1 Empirical description of distributed journalistic action

Conceptualizing journalistic action.

To apply the described concept to automation in journalism, it is first necessary to shift the view from journalism as a whole to journalistic actions. This adheres to the nature of automation as it is mostly applied to processes and tasks. Also, as Rammert and Schulz-Schaeffer structured their approach by conceptualizing actions as collections of interconnected activities, their concept is best transferred to actions and activities in other fields. Thus, this reduction provides conceptual focus and enables the combination of automation and journalism.

Research in automation has often broken “automation” down into the automation of single tasks or actions; as described above, these include at least research, production, and publication and their respective sub-tasks in journalism. Thus, a concept of journalistic action clearly adheres to an action-theoretical viewpoint, enabling a focus on the actors and the different tasks (cf. Scholl 2016, p. 377). However, as scholars like Deuze (2005, 2019) or, focused on automation in journalism, Diakopoulos (2019, p. 23) have pointed out, those actions are shaped by an ideology or a complex of underlying values, norms, and goals that is inherent to the system of journalism. Journalism is not simply researching, producing, and publishing, as all of this could be done in different ways and with different outcomes—consider that producing a TV advertisement also contains such steps, but they are performed with a very different objective. Central to a thorough understanding of journalism are thus the performed actions as well as the underlying ideological structure that shapes or causes these actions. Journalistic action will therefore be understood as a process consisting of multiple tasks and sub-tasks performed to transport information to a public audience. This process includes researching the respective information and processing it into content, and it is structured by common values and beliefs about how this information transmission can be achieved and why it is important and necessary, including the normative goal of transmitting information in a way that enables informed opinion and will formation of the public.

The underlying values and norms, however, will not be specified in more detail for two reasons. First, this article is mainly concerned with differentiating the extent of automation in journalism, which centers on aspects like the number of tasks performed or the extent of human control. A specification of values and norms is not centrally necessary for this and will only be touched upon in the description of the higher levels of complexity in automation. Additionally, this makes it possible to transfer the taxonomy to other technologies. Second, following Rammert and Schulz-Schaeffer’s concept and the related ANT, a description of machines should not be made using human-centered terms: Without redefinition, these tend to be anthropocentric and thus exclude non-human agents. In addition, such terms devalue non-human actions by comparing them to a human “gold standard”. The same line of thought, also pursued by Primo and Zago (2015), who used ANT to describe automated text generation, will be followed in this paper: In the theory and practice of human journalism, values like facticity, relevance, and independence are acknowledged, as are norms resulting from those values, like revealing sources for increased transparency (cf. Deuze 2005, pp. 447–450; SPJ 2014). However, this article focuses on journalistic work done by machines. Here, it cannot be presumed that the same values and norms will necessarily be central: These values and norms may need to be weighted differently, or entirely different value concepts might emerge. The discussion around norms for AI and Big Data shows that technology-centered values, as well as different legal norms, might become salient. Furthermore, it is not clear who would and could even define these values. These questions still need to be explored. Thus, to avoid normative prescriptions of possible standards, no specific values and norms will be offered.

Distributed journalistic action on task and process level.

Based on their understanding of actions as an interconnected complex of activities, Rammert and Schulz-Schaeffer enable the empirical description of distributed action, including the actors, the number of activities, and their connections. Whereas they designed the concept for application to whole actions (the journalistic process) and the mutual influences of different actors, it is also possible to focus on a single actor (an automation technology) by specifying in which tasks it is involved (the amount) and describing its level of autonomy by identifying the influences that other actors have on its working (the control and coherence). In research on automation technologies in journalism, the latter will probably be the approach of choice to remain focused on the technology, in contrast to describing the whole process and all remotely involved actors—unless, of course, the technology is involved in the whole journalistic process.

Whereas the original concept offers no way of empirically measuring distribution, it is possible to draw on automation research here: The parallels of “processes” to “actions” and “tasks” to “activities” are evident, as both describe a bigger goal which is pursued by reaching smaller, interconnected goals, making the concept transferrable. Thus, the use of Parasuraman et al.’s (2000) approach (see Sect. 2.2) is suggested. As they proposed task automation levels based on a simplified model of information processing in the human mind, the concept contributes to the symmetrical description of human and non-human actors and thus seems appealing for the present taxonomy.

However, the possibility to describe the automation of single tasks is not enough to appropriately describe the automation of journalism, as journalism is not a single task—even if the use of “automated journalism” exclusively for text production could create this impression—but consists of a variety of tasks which must be combined. Parasuraman et al.’s approach offers a solution for this: Although the authors describe their automation stages as a linear process, they also refer to Gibson (2015 [1979]) to suggest that the stages might be conceptualized as a cycle in which the information acquisition of one task builds upon the final stage of previous tasks. This makes a description of different tasks, actors, and their interconnection possible. It is also transferrable to journalism: Research, e.g., can be seen as a process of finding sources and information, analyzing whether they are useful, deciding whether to use them further, and finally keeping them for later use in an article. Similarly, production can be conceptualized as a process of gathering and analyzing all researched information, then deciding how the drawn conclusions fit together, and finally generating content from it. Thus, the approach is applicable to the concept of distributed action.
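This cyclical reading can be sketched in code. The following is a minimal, purely illustrative Python sketch, in which all stage functions, task contents, and the toy usefulness test are hypothetical stand-ins rather than any real newsroom system: each task passes through Parasuraman et al.'s four stages, and the action output of one task feeds the information acquisition of the next.

```python
# Illustrative sketch of Parasuraman et al.'s four stages applied
# cyclically: each task runs information acquisition, analysis, decision,
# and action, and its output feeds the acquisition stage of the next task.
# All stage functions and data below are hypothetical stand-ins.

def run_task(acquire, analyze, decide, act, inputs):
    info = acquire(inputs)        # information acquisition
    assessment = analyze(info)    # information analysis
    choice = decide(assessment)   # decision selection
    return act(choice)            # action implementation -> next task's input

# Research: find sources, assess usefulness, select, keep for later.
kept_sources = run_task(
    acquire=lambda _: ["source A", "source B", "source C"],
    analyze=lambda srcs: [(s, "A" not in s) for s in srcs],  # toy usefulness test
    decide=lambda rated: [s for s, useful in rated if useful],
    act=lambda selection: selection,
    inputs=None,
)

# Production: gather the kept sources, combine, structure, generate text.
article = run_task(
    acquire=lambda prev: prev,
    analyze=lambda kept: ", ".join(kept),
    decide=lambda summary: f"Report based on: {summary}",
    act=lambda text: text,
    inputs=kept_sources,
)
print(article)
```

The hand-over between the two `run_task` calls is exactly the point where, in the concept of distributed action, control and coherence can be assessed: another actor could intervene between the research output and the production input.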

The amount of automation can now be assessed by counting the tasks and subtasks in which a specific technology is involved and comparing this to the overall number of tasks in the process or process part of interest. In the examples in Fig. 2, the example in which automation is involved in all tasks obviously has a higher amount of automation than the others. Control can be measured by looking further into the levels of automation at each stage. Parasuraman et al. suggest using approaches similar to the Sheridan and Verplank (1978) model, which describes, focused on decision making, the relation between humans and computers and their respective influence on the decision. However, as Vagia et al. (2016, pp. 195–196) noted, automation levels can differ for every application or task. Thus, no definite level system will be suggested; instead, scholars are encouraged to assess the level of automation individually, in relation to the specific situation and the technology’s properties. Existing approaches to levels of automation can be adapted and modified for this purpose, but the assessment can also be guided by the general question of how strongly the completion of the current (sub-)task by the technology in question depends on other actors in the process. On a simplified scale, example 2 in Fig. 2 has lower human control for information acquisition at the production stage because the technology collects the information it will further process into a text all by itself.

Fig. 2 Application of automation levels to automation technology

When looking at a technology that fulfills more than one task in the journalistic process, it is especially necessary to look at the coherence between the different tasks, following Parasuraman et al.’s suggestion of a circular process. Does the technology gather its data by fully relying on the output of the previous task—and which actor(s) completed that task? Is the decision to generate an article based on unusual results in automatically analyzed data made by the software, by a human, or by a human relying on the software’s suggestions? This can be seen when comparing Examples 1 and 2 in Fig. 2: Whereas automation in Example 1 is only used for text generation and delivering the output to a human, it is overall more coherent than in the second example, which is interrupted by human actions in the topic decision and in the composition of the article.

Overall, this approach makes it possible to describe the empirically measurable aspects of quantity, coherence, and control that Rammert and Schulz-Schaeffer address in their concept. Quantity is captured by counting the participation in tasks, subtasks, and the individual aspects of subtasks and comparing them to the total quantity of existing tasks in the process or process segment of interest. The coherence and the autonomy or control possibilities by other (usually human) actors can be described by considering the four steps of Parasuraman et al.’s model: The more often no automation takes place at all in between or other actors are significantly involved in the fulfillment of the steps, the lower the coherence and the higher the control possibilities for other actors. Special attention can be paid to the transitions between tasks, especially if a software covers several large tasks (such as production and publication).
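As a toy illustration of this empirical description, the following Python sketch computes the amount of automation as the share of (sub-)tasks in which a technology participates and approximates coherence via uninterrupted transitions between automated tasks. The task list, the boolean involvement flags, and the simple transition-based coherence measure are all hypothetical simplifications, not a validated operationalization.

```python
from dataclasses import dataclass

# Toy illustration: amount = share of (sub-)tasks the technology
# participates in; coherence = share of task transitions that stay
# within automation. Tasks and flags are hypothetical.

@dataclass
class Task:
    name: str
    automated: bool  # is the technology involved in this (sub-)task?

process = [
    Task("research: gather data", True),
    Task("research: select sources", False),  # hand-over to a human
    Task("production: generate text", True),
    Task("publication: release article", True),
]

amount = sum(t.automated for t in process) / len(process)

flags = [t.automated for t in process]
transitions = list(zip(flags, flags[1:]))
coherence = sum(a and b for a, b in transitions) / len(transitions)

print(f"amount of automation: {amount:.2f}")  # 3 of 4 tasks
print(f"coherence: {coherence:.2f}")          # 1 of 3 transitions
```

In line with the caution above, a low score on such a measure need not signal "less automation" in a problematic sense: the human hand-over may itself be part of the journalistic task, such as editorial self-monitoring.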

However, it is important to interpret the results with caution: Depending on the role in which the technology is used, it may be that the inclusion of human actors is not a reduction in automation, but rather part of the journalistic task, such as through editorial self-monitoring. Thus, for the sake of a continued symmetrical description, one should not presume a technology to suddenly take over the entire process where previously a wide variety of human actors were involved.

4.2 Gradualised journalistic agency as a measure of automation sophistication

As already explained, not only the empirical but also the normative level should be considered regarding automation in journalism, as the two are not necessarily interdependent. In terms of the analytical framework by Parasuraman et al. (2000), this means that both the lowest and the highest level of automation could come with any level of agency in any individual task. Here, the concept of gradualised action comes into play. In the following, the individual levels are therefore related to journalism using examples from automated text generation. An overview of the different levels, together with hypothetical examples that illustrate how data-to-text software on different levels could report on the same situation, can be found in Table 3. Notably, whereas the examples stem from text generation, as this technology is currently the most widely used, the levels are applicable to every form of automation in journalism.

Table 3 Application of agency levels to automation technology

Level of transformative efficacy.

According to Rammert and Schulz-Schaeffer, the lowest level includes technologies that automate a task or process without optional alternatives, always applying the same routine with either permanent or non-permanent effects.

As automated text generation is performed with the goal of permanent changes (creating a new text), no example for text generation at the lowest sub-level (no permanent changes) will be provided. Automated journalistic text generation resulting in transformative efficacy with a more permanent change could be a simple template system in which data from a database is entered into the slots of a pre-written text that is always the same and offers no alternatives. Alternatively, the data is transformed into fixed, pre-defined sentence parts which are then pieced together in a fixed order. A current journalistic example of this level is the L.A. Homicide Report, produced since 2007 (cf. Los Angeles Times 2022). The website lists every homicide known to police in the last twelve months and offers details such as the victim’s name, age, gender, the cause of death, and more, which are always arranged in the same fashion. Notably, if some information such as the cause is not available, the respective part of the sentence is left out. However, as described above, the data is merely entered in the form of a whole sentence part, and the structure of the whole document is not altered when single parts are missing.
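A template system of this kind can be sketched in a few lines of Python. The snippet below is a hypothetical illustration only; the field names and wording are invented and not taken from the actual Homicide Report. Data fills slots in an otherwise fixed text, and an optional sentence part is simply dropped when its value is missing, while the wording and structure never vary.

```python
# Hypothetical sketch of a fixed-template system at the level of
# transformative efficacy: the wording and structure never vary,
# data only fills pre-defined slots, and an optional sentence part
# is omitted entirely when its value is missing.

def render_homicide_entry(record: dict) -> str:
    parts = [
        f"{record['name']}, a {record['age']}-year-old {record['gender']}, "
        f"died on {record['date']} in {record['neighborhood']}"
    ]
    # Optional slot: the whole sentence part is left out if unavailable.
    if record.get("cause"):
        parts.append(f"The cause of death was {record['cause']}")
    return ". ".join(parts) + "."

entry = render_homicide_entry({
    "name": "John Doe", "age": 34, "gender": "man",
    "date": "March 3", "neighborhood": "Westlake", "cause": "gunshot",
})
print(entry)
```

Because neither the alternatives nor the rules for choosing between them exist here, such a system never rises above transformative efficacy: the same input always yields the same output.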

Level of ability-to-act-differently.

This level includes technologies which choose between alternatives, with both the choice and the alternatives either pre-defined or self-generated. On the lowest level, with pre-defined alternatives and rules, we find template systems which offer alternative approaches for differences in the data or several variants of the same content to generate more variability in the produced texts. In terms of general functionality, this is the level on which most current text-generating software operates: Journalists or members of the software company pre-define the text structure, the content, different sentence variants for different data or for the generation of more variability, and the rules that the software should apply to choose between the alternative variants. These rules mostly depend on the data: e.g., if Real Madrid won against FC Barcelona, the software chooses the sentence “Real Madrid won El Clásico”, but if the match resulted in a tie, the software uses another sentence.
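The following Python sketch illustrates this lowest sub-level of the ability-to-act-differently under invented assumptions: all templates and the selection rule are hypothetical and pre-defined by a human, the data alone decides which group of alternatives applies, and a random choice among the variants adds surface variability.

```python
import random

# Hypothetical sketch of pre-defined alternatives with pre-defined
# selection rules: the score decides which group of templates applies,
# and a random pick among the variants adds variability. All templates
# are invented for illustration.

TEMPLATES = {
    "home_win": [
        "{home} won El Clásico against {away} {hg}:{ag}.",
        "{home} beat {away} {hg}:{ag} in front of a home crowd.",
    ],
    "away_win": ["{away} defeated {home} away from home, {ag}:{hg}."],
    "draw": ["{home} and {away} shared the points in a {hg}:{ag} draw."],
}

def report(home: str, away: str, hg: int, ag: int) -> str:
    # Fixed rule: the score alone determines the available alternatives.
    if hg == ag:
        key = "draw"
    elif hg > ag:
        key = "home_win"
    else:
        key = "away_win"
    return random.choice(TEMPLATES[key]).format(home=home, away=away, hg=hg, ag=ag)

print(report("Real Madrid", "FC Barcelona", 2, 1))
```

Neither the alternatives nor the rule are generated by the system itself, which is precisely what distinguishes this sub-level from the machine-learning-based cases discussed next.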

The highest level, a combination of self-generated alternatives and choices, can be found to a very small extent in software like AX Semantics, where machine learning is used to alter grammatical articles or optimize other details of grammar and language, such as the inflection of verbs (cf. AX Semantics 2022b). Here, the alternatives do not need to be specified by a human operator but are provided by the software, based on its learning results. Likewise, the choice between the alternatives is not specified by a grammatical rule implemented in the software but is also determined by machine learning (cf. AX Semantics 2022a). However, the freedom of choices and alternatives in these cases is limited and is only a matter of grammatical correctness. An example of self-generated alternatives and choices concerning the informational content could be dynamically created documents as described by Reiter (2016), for which no human pre-formulation of text parts and no pre-defined structure are necessary.

Level of intentional action.

The level of intentional action includes technologies which possess, choose, or even generate their own action goals and act accordingly. A social ascription of intentionality could occur with a technology that generates texts of varying lengths, including different amounts of information, depending on the estimated reading time for similar articles. Even if this intention were specified in the code, to users it could still appear as intentionality. Additionally, on this level of intentional action, it would be possible to set journalistic, technical, legal, or other norms and values as explicit goals to shape a technology’s actions and make it act (or write) accordingly. The highest level could contain a (hypothetical) technology which generates different text styles for different recognized user groups or which deduces its own action goals from knowing its journalistic purpose.
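The reading-time example can be made concrete with a short sketch. The Python snippet below is entirely hypothetical (the word-rate constant, the facts, and their ordering by importance are invented): the system adapts how much information it includes to an estimated reading-time budget, behavior that users might read as intentional even though the "intention" is fully specified in the code.

```python
# Hypothetical sketch of behavior users might read as intentional:
# the system fits its output to an estimated reading-time budget.
# The word rate and the facts below are invented for illustration,
# and the "intention" is in fact fully specified in the code.

def generate(facts, target_reading_seconds, words_per_second=3.5):
    budget = int(target_reading_seconds * words_per_second)
    chosen, used = [], 0
    for fact in facts:  # facts are assumed to be ordered by importance
        n = len(fact.split())
        if used + n > budget:
            break
        chosen.append(fact)
        used += n
    return " ".join(chosen)

facts = [
    "Real Madrid won 2:1.",
    "Benzema scored twice in the second half.",
    "The win moves Madrid to first place in the table.",
]
print(generate(facts, target_reading_seconds=3))   # short version
print(generate(facts, target_reading_seconds=10))  # longer version
```

Whether such behavior counts as intentional action or merely as ascribed intentionality is, following Rammert and Schulz-Schaeffer, a question of attribution rather than of the code alone.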

4.3 Ascription and observation in HMC

The third difference that Rammert and Schulz-Schaeffer bridge is the difference between agency as ascription or as observation. Their solution—the view of all agency as ascribed, with observations just signifying a higher degree of the ascription’s objectification—is not centrally required for the analysis of automation in journalism, but still relevant for its theoretical foundation.

HMC research and related approaches such as CASA often assign communicator or social actor status to technologies based on their properties—namely, the social cues they possess, such as gender or a face. Following Rammert and Schulz-Schaeffer, this points towards a certain level of objectification, as it is based on a tradition of at least 25 years of research on this topic. However, this research has also repeatedly shown that how users perceive the technology—meaning what properties they ascribe to it—is central, and this is not always and for every user dependent on its technical properties (a theoretical basis for this can, e.g., be found in affordance theory). The first consequence of this is that research needs to keep in mind that ascription is key, not the properties themselves, especially from a user’s perspective. The second consequence is that, from a researcher’s perspective, it remains necessary to explain why we conceptualize technologies as actors or communicators based on the interaction partner’s perception, a weakened communication concept (cf. Fortunati and Edwards 2020, pp. 8–9), or technical foundations enabling them to fulfill a certain role (cf. Guzman 2018, p. 16)—not only to intensify our own understanding and sharpen our arguments, but also to make our research and approaches more understandable for other disciplines and the interested public. Narrowing this down to journalism, clarifying what exactly makes automation technologies in journalism a communicator could prevent fearful perceptions of simple template technologies as a threat to journalists’ own occupation and ultimately lead to a more fruitful connection between human and non-human mass communicators.

5 Implications

The presented analytical scheme and conceptual basis offer various implications for the current problems with terminology and conceptualization of automation in journalism. These concern both the understanding of “automation” and “journalism” as well as implications for future HMC theory building and research. These aspects will be addressed in the following.

5.1 Refining HMC theory

The application of Rammert and Schulz-Schaeffer’s concept to automation in communication provides various insights for HMC. First, gradualised action offers an approach to actor status and the possession of agency which is also applicable to technologies other than automation in journalism. As the authors’ central argument is the ascription of actor qualities by the human counterpart, not the question of the objective possession of actor status, it resembles the argument that HMC also uses to describe technologies as communicators. However, in contrast to ANT, which has been widely used in HMC, the concept of gradualised action makes it possible to differentiate between levels of agency. This gives HMC scholars a new and more refined way of describing how machines can be seen as communicators or communicative actors, and how their properties contribute to this: Pre-programmed chatbots can be clearly differentiated from elaborate voice assistants, and these differences can be accounted for while still understanding both technologies as communicators.

Second, distributed action offers a possibility to analyze single communicators as well as the whole communicative process in which they are involved. Seeing journalistic action as interconnected activities of distributed actors sharpens the focus on these interconnections and how actors mutually influence each other. This is enhanced further through the automation concept by Parasuraman et al. which can be used to visualize how tasks and sub-tasks connect to the output of previous ones. Thus, distributed action can serve as a theoretical basis for studies which, e.g., compare the application of different writing technologies in newsrooms and sociologically investigate their effect on the human actors that work with them.

Third, the concept advances the idea of symmetrical description as introduced by ANT. Rammert and Schulz-Schaeffer refine this by alerting scholars that while we must not presume ontological differences between actors, we also must not assume ontological equality or similarity. Thus, we must remain open to outcomes that show how interaction and communication with machines differ from those with humans and why this is the case. Such approaches can already be found in HMC: While for a long time, social scientific concepts were transferred to machines expecting similar results along the CASA paradigm, scholars like Lankton et al. (2015) and Weidmüller (2022) demonstrate that trust works differently for human and technological trustees. Still, using the same concepts for humans and machines is necessary, as only then can we detect where the similarities and differences lie. The presented concept can be used as a sound theoretical basis when human and machine communicators are evaluated. It can not only justify the use of the same scales to measure their effects, but also help to interpret the results as an indicator of the underlying ontology of different types of actors.

Fourth, the concept provokes thoughts about how actor or communicator status is ascribed to technology: Within HMC, we find different levels of objectification in the answers to this question, ranging from users’ reactions to the underlying programming as the central argument. The theory shows that each of these levels and answers is justified in its own way, as long as we maintain an active discourse about how and why we choose them. If we have an answer to what makes a technology a communicator, this also opens up new research possibilities, the transferability of other concepts, and much more. Furthermore, it reminds scholars that this choice must be well reasoned in order to communicate it to other disciplines, practitioners, and the public. Together, these four implications allow for the theoretically sound analysis of automation and its extent across a whole journalistic process, as well as for the specific investigation of human and non-human communicative actors, e.g., automated writing software.

5.2 Differentiating terms

“Automation” has been identified as a suitable modifying term that, however, has to be applied in a more differentiated manner to the individual journalistic tasks that can be identified with the concept of journalistic action. Following the two approaches of narrowing the definitions versus widening their meanings, an attempt will be made to clarify the terminology. In terms of narrowing, the term “automated journalistic text generation” seems suitable to specify Carlson’s (2015, p. 416) slightly modified definition: algorithmic processes that create narrative news texts with no human intervention beyond the initial programming choices. This also does justice to the fact that not every text-generating technology might rely on data-to-text in the future (as indicated by Carlson) but might also use other processes such as machine learning to write opinion pieces or other content. In wider terms, the full automation of any part of the journalistic process can be defined as algorithmic processes that perform the respective part of the journalistic process with no human intervention beyond the initial programming choices. The terminology can follow the same pattern, e.g., “automated journalistic research”, “automated journalistic video production”, or “automated journalistic publishing”. For partial automation, “semi-” can be added to the term, and the amount of automation can be specified in the definition and an accompanying description of the basic concept. Furthermore, “journalistic” should be included, as journalistic research, content generation, and content distribution can differ from similar tasks in other fields.

“Automated journalism” should be used to describe the complete and cohesive automation of the journalistic process in a specific medium, form of content, and topical field without human intervention beyond the initial programming and specification of the software. The reason for this is that the basic journalistic function of information transportation requires different actions in different cases, and whether something constitutes automated journalism thus depends on the specific tasks necessary in those cases: A sports journalist who writes a soccer match summary drawing on pre-existing data presented in a table and publishes the text by entering it into a content management system follows a different journalistic process than an investigative reporter who researches over the course of months, analyzing data and conducting interviews along the way. Both can be described as journalism. A technology that can produce content based on pre-existing data and enter the resulting text into a CMS can therefore be called “automated journalism” when it is employed for the sports text. In the latter example, it could produce the part of the text that is based on the data analysis, but it could not conduct the rest of the research that is necessary for the article. Thus, the software would perform automated journalistic data analysis and content production, but not automated journalism, in this case.

It must be noted that even though automated journalism and automation in journalism can follow the same journalistic processes that human journalists would, as displayed in the example above, they do not have to. This concept is not bound to comparisons with humans; it is bound only to its outcome, which is the performance of journalistic action.

5.3 Conceptualizing different levels of automation

While the terminology has now been clarified, it still requires a conceptual basis. Especially when talking about “semi-automated” journalistic tasks, it must be specified what exactly is meant by that.

The approach towards automation in journalism that this article offers makes it possible to describe automation both empirically and normatively. This is grounded in Rammert and Schulz-Schaeffer’s concept of distributed and gradualised action, differentiating between amount, coherence, and control, and in the normative aspect of sophistication, which is described based on agency levels. This approach can be used to describe actors and their interrelations in a complete journalistic process, but also to describe the role that a single actor plays in this process, by giving information on the specific task and subtasks it is involved in, how high the automation of task and process is (amount and influenceability), and what specific level of agency it possesses for every identified subtask. These differentiated levels are not only suited to a better description of technology, but also enable studies that investigate technology in society. Even if two different technologies fully automate the same process, they can still differ in terms of application and effect, and these differences can now be traced back to a theoretically grounded distinction of the technologies’ properties. Similarly, the investigated technologies can now be chosen on a systematic basis. A comparison of automated writing software’s effects on reader perception could, for instance, include examples of different levels of agency while deliberately keeping the other dimensions at the same level for comparability.
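To make the analytical scheme concrete, the dimensions described above could be encoded as a simple data structure when selecting or comparing technologies for empirical studies. The following is a minimal sketch under illustrative assumptions: the field names, the 0-to-1 scales for amount and influenceability, and the three-step agency scale are hypothetical operationalizations, not part of the original concept.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import List

class AgencyLevel(IntEnum):
    """Illustrative ordinal scale loosely inspired by gradualised action;
    the labels and cut-offs are assumptions for this sketch."""
    CAUSAL = 1       # merely causes effects
    CONTINGENT = 2   # selects between alternatives
    INTENTIONAL = 3  # acts with regard to its own behavior

@dataclass
class Subtask:
    name: str
    automated_share: float   # "amount": 0.0 (fully manual) .. 1.0 (fully automated)
    influenceability: float  # "control": degree of possible human intervention, 0..1
    agency: AgencyLevel

@dataclass
class ActorProfile:
    actor: str
    task: str
    subtasks: List[Subtask] = field(default_factory=list)

    def fully_automated(self) -> bool:
        # True only if every subtask is performed without human intervention
        return all(s.automated_share == 1.0 for s in self.subtasks)

# Hypothetical profile of writing software used for the sports example
writer = ActorProfile(
    actor="automated writing software",
    task="soccer match summary",
    subtasks=[
        Subtask("data-to-text generation", 1.0, 0.2, AgencyLevel.CONTINGENT),
        Subtask("publishing via CMS", 1.0, 0.5, AgencyLevel.CAUSAL),
    ],
)
print(writer.fully_automated())  # True: both subtasks are fully automated
```

Profiles like this would let a study hold, say, amount and influenceability constant across compared technologies while varying only the agency level, as suggested above.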

6 Conclusion

The presented approach contributes to HMC theory in several ways. It provides adequately specific terms for automation in journalism, enabling a common understanding of and clear communication about the subject. It offers an analytical scheme for the interconnection between human and non-human communicators in journalistic processes and for those communicators’ abilities. It poses the question of the degree of objectification in understanding technologies as communicators or actors and points out its relevance both for investigating users’ ontological understanding of technologies and for scholars’ own approach to and communication about the topic. Lastly, it shows that sociological approaches like Rammert and Schulz-Schaeffer’s concept are often similar to ideas in HMC and can therefore be considered in building further theoretical grounding.