
This introduction is based on a slightly modified version of the call for the NeTWork workshop on “safety in the digital age”. The call invited researchers first to debate and then to write a chapter for this book. The intention of the workshop, and then of this publication, was to start a collective discussion, based on empirical and conceptual reflection, about safety in this new stage of societies’ trajectories, commonly described as “the digital age”. The chapters are followed by a conclusion which develops a sociotechnical proposition for how to start thinking about “safety in the digital age”.

Algorithms, machine learning, big data and artificial intelligence (AI) are keywords of the current transformation of societies. Following a first wave of internet development coupled with the spread of personal computers in the 1990s, the 2010s brought a second level of connectedness through smartphones and tablets, generating a massive amount of data from private and public activities. It is this new environment, built over thirty years and made of big data produced by the daily activities of people working, travelling, reading, buying and communicating, amplified and captured by a growing market of the Internet of Things (IoT), which provides an opportunity for the proliferation of algorithms, machine learning and a new generation of AI [11, 12]. Without falling into the trap of technological determinism, this transformation through digitalisation clearly affects every sphere of social life, including culture, economy, science, politics, art, education, health, family, business, identity and social relations.

One can easily find examples in these different spheres of how our daily private and public lives are affected. Social media (e.g., Facebook, Twitter, LinkedIn, ResearchGate), search engines (e.g., Google, Bing, Qwant) and websites in many different areas, including online shopping (e.g., Amazon, Fnac), music (e.g., Spotify, Deezer), news (e.g., New York Times, Le Monde, Financial Times), videos, series, cinema and programmes (e.g., Netflix, YouTube, DailyMotion) or activism (e.g., SumOfUs, Avaaz), are only a few examples. Because these ubiquitous online services reconfigure our ways of listening to music, consuming, reading, communicating, learning and creating, we simply experience new ways of being in the world.

A digital society is slowly emerging, somewhere between (1) reality, (2) proclaimed bright futures and (3) fears of dystopian trends in the years or decades to come. In the call for this NeTWork workshop on “Safety in the digital age”, we wished to remain grounded in reality. It has indeed become very clear to sociologists that we now empirically live in a mediated, constructed reality (e.g., Hepp and Couldry [3], and Cardon [4]). While some wonder whether these changes should be characterised as an evolution or a revolution (e.g., Rieffel [14]), others now warn of a re-engineering of humanity because of the extent of the material, cognitive and social modifications of our environment (e.g., Frischmann and Selinger [7]).

In this respect, the rise of the internet giants (GAFAM/N, for Google, Apple, Facebook, Amazon, Microsoft and Netflix) triggers several concerns, ranging from business monopoly and fiscal issues to data privacy and exploitation, which reveal growing unease among civil societies and states. Zuboff’s thesis of a “surveillance capitalism” comes to mind [17], a thesis based on the careful study of the ideologies professed by the engineers behind the digital world. One example, selected by Zuboff [17, p. 432], quotes Pentland, an MIT professor.

Pentland says that “continuous streams of data about human behaviour” mean that everything from traffic, to energy use, to disease, to street crime will be accurately forecast, enabling a “world without war or financial crashes, in which infectious disease is quickly detected and stopped, in which energy, water and other resources are no longer wasted, and in which governments are part of the solution rather than part of the problem (…) Great leaps in health care, transportation, energy, and safety are all possible”.

Narrowing this panoramic view to work, organisations, business and regulation, the implications are potentially quite profound. They seem obvious in some cases but remain partly uncertain in other areas. For instance, how much of work as we know it will change in the future? Estimates of the share of current jobs that could disappear within the next few decades because of AI range from 9 to 47%. Whatever the extent of this replacement or mutation, one can imagine that combining human jobs with AI, or simply relieving people of current tasks, will change the nature of work as well as the configuration and management of organisations. In addition, with this digital expansion come growing cyber-security challenges.

In the platform, digital and gig economy (e.g., Amazon, Uber, Deliveroo, Airbnb), the term “algorithmic management” has been coined to characterise employees’ working conditions [9]. Some of these companies’ practices have already met with workforce resistance in several countries, workers fighting for what they consider to be their rights as employees. In many cases, in the US, the UK and France, the legal system has ruled in favour of workers’ claims that they were in a traditional employer–employee relationship, and not in a context of companies contracting with self-employed workers.

Businesses are threatened in their market positions by innovative ways of interacting with their customers through social media and the use of data, by new ways of organising work processes or by new start-up competitors redefining the nature of their activities. Consider, for example, the prospect of autonomous cars, which could completely redefine the ecosystem of companies. Car makers could well become secondary players in an industry revolving around data exploitation and management controlled by digital companies, which would become the new dominant players. The insurance business could fall into the hands of these new data masters too, in the same way as hotel chains had to cope with new digital players. Business leaders must therefore adapt to this digitalisation of markets and to potential disruptions based on big data, machine learning and AI. They must strategise to keep up with a challenging and rapidly changing environment [5].

The same applies to regulation. Because of the now pervasive use of algorithms, machine learning, big data and AI across society, notions of algorithmic governmentality [15] or algorithmic regulation [16] have been developed to identify and conceptualise some of the challenges faced by regulators. Cases of algorithmic bias, algorithmic law-breaking, algorithmic propaganda, algorithmic manipulation but also algorithmic unknowns have been experienced in the recent past, including the Cambridge Analytica/Facebook scandal during the 2016 US presidential election and the “DieselGate” triggered by Volkswagen’s software fraud [2]. This creates new challenges for controlling the proliferation of algorithms, and some have already suggested, in the US, a National Algorithm Safety Board (e.g., Macaulay [10]).

This last point connects digitalisation with safety. How can high-risk and safety-critical systems be affected by these developments, in terms of their activities, their organisation, management and regulation? What can be the safety-related impacts of the proliferation of big data, algorithmic influence and cyber-security challenges in healthcare (e.g., hospitals, drugs), transport (e.g., aviation, railway, road), energy production/distribution (e.g., nuclear power plants, refineries, dams, pipelines, grids) or the production of goods (e.g., chemicals, food) and services (e.g., finance, electronic communication)? Understanding how these systems operate in this new digital context has become a core issue. It is the role of research to offer lenses through which one can grasp how such systems evolve, and the implications for safety.

There are many affected areas in which research traditions in the safety field can contribute to questioning, anticipating and preventing potential incidents, but also to supporting, fostering and improving safety performance in a digital context [1, 8]. For instance, tasks so far performed by humans are potentially redesigned with higher levels of AI-based automation, whether in the case of autonomous vehicles or human–machine teaming [12]. What about human error, human–machine interface design, reliability and learning in these new contexts (Smith and Hoffmann 2017)? What are the consequences of pushing the boundaries of decision-making allocation towards machines? What are the implications for the distribution of power and decision-making authority of using new sources of information, new tools for information processing and new ways of “preprogramming” actions and decisions through algorithms?

The same applies to the organisational or regulatory angles of analysis of safety-critical systems, such as those developed by the high-reliability organisation [13] and risk regulation regimes [6] research traditions. What happens when protective safety equipment, vehicles, individuals’ behaviours and the automation of work schedules are interlinked through data and algorithmic management, delegating to machines a new chunk of what used to be human decision making? What are the implications for risk assessment, learning from experience or compliance with rules and regulations, including inspection by authorities?

But, quite importantly, what of this is realistic and what is not? What can be anticipated only through projections into the future, in the absence of empirical studies? Which of these problems are new and which are old? The NeTWork workshop in September 2021 was an opportunity to map some of the pressing issues that digitalisation based on algorithms, machine learning, big data and artificial intelligence raises for the safe performance of high-risk systems and safety-critical organisations. The chapters in this book cover many of the hot issues one needs to keep in mind when operating, managing and regulating safety in a digital age. They offer a unique treatment of this topic, one of the first to bring multiple disciplinary viewpoints to bear. Each chapter is summarised below to give the reader a big picture of the multiple angles of analysis explored.

In “The digitalisation of risk assessment: fulfilling the promises of predictions?”, David Demortain introduces risk assessment in risk regulation regimes. He reminds the reader of the importance of this activity at the intersection of private companies, states, civil society, science and expertise, in a variety of sectors such as food, nuclear power or pharmaceuticals. Assessing risks consists in building mathematical models which translate phenomena into equations in order to anticipate their effects. The relationship between data, experiments, computers and models is key to an understanding of risk assessment. David describes three such models for predicting the impact of chemicals on living organisms (quantitative structure–activity relationships, QSAR; physiologically based pharmacokinetics, PBPK; biologically based dose response, BBDR). These models rely on different epistemological, methodological, experimental and mathematical options to support their predictive capabilities. The models are already extensively computerised, and the addition of machine learning, big data and artificial intelligence proves to be an exciting new prospect for the promoters of increasingly sophisticated models, an example of which is the Tox21 programme. David discusses digitalisation in this context by considering critically, in turn, what appears realistic at this stage, according to him, and what does not. He ponders the excessive ambitions surrounding datafication, computational innovation and the systemic ambition of models.

In “Key dimensions of algorithmic management, machine learning and big data in differing large sociotechnical systems, with implications for system-wide safety management”, Emery Roe and Scott Fortmann-Roe translate the problem of safety in the digital age to the empirical level of software design strategies in distributed-type companies (such as Google, Netflix, Facebook or Amazon). These strategies are based on trade-offs in software design along four dimensions: (1) comprehensibility versus features; (2) human-operated versus automated; (3) stability versus improvement and (4) redundancy versus efficiency. The fast pace of digital innovation pushes such companies towards the right-hand end of this design spectrum: more features, automation, improvement and efficiency. From a safety point of view, the traditional approach favours the opposite end of the spectrum, preferring comprehensible, human-operated, stable and redundant systems. Emery and Scott challenge these taken-for-granted design assumptions by considering the problem of obsolescence (outdated software systems): when a system falls behind, it becomes too rigid for its evolving environment and is additionally exposed to high levels of cyber-security threats.

Olivier Guillaume illustrates the privacy aspect of the digital age with a case study in his chapter “Digitalisation, safety and privacy”. He first situates the value of the digital, as advocated by its promoters, in the context of work in organisations. Indeed, the digital age recasts the old problem of autonomy, professionalism, standardisation, bureaucracy and their relation to safety. In principle, by providing managers with more efficient ways to plan, track, monitor and control employees’ activities through smartphones, personal digital assistants (PDAs), connected glasses and wearable sensors, a greater level of reliability and safety could be achieved. However, tracking employees’ activities is regulated by the European General Data Protection Regulation (GDPR) and is in any case met by employees’ reluctance towards intrusive management. Olivier shows how privacy, intimacy and private life in employees’ daily activities play an important role in the construction of professional and collective identity as well as expertise. In his case study, digital solutions which impinge on privacy are negotiated, and employees obtain from their employer, through their representatives, a decision to abandon options that they consider intrusive. Olivier warns that in work contexts without a tradition of negotiation, or exposed to high levels of power asymmetry, the balance between digital control and employees’ privacy might lean towards the former at the expense of the latter.

Cécile Caron continues the discussion of the privacy aspects of the digital age in her chapter “Design and dissemination of blockchain technologies: the challenge of privacy”. She takes as her starting point the antagonism between the ideals of blockchain as a decentralised information infrastructure with no need for a trusted third party, and the GDPR’s requirement to have a (centralised) data controller responsible for the processing of personal data. Being based on two different forms of trust, the relationship between the two presents important privacy dilemmas that will need some form of reconciliation in concrete applications of blockchain technologies. Caron studies this by means of a sociological case study of a mobility service using a blockchain, IoT solutions and mobile and web applications to track the charging of electric vehicles. By analysing qualitative data from service designers and service users, Caron identifies different themes or “tests” that illustrate the confrontations, negotiations and alliances involved in the tension between privacy and blockchains. Among these themes are the crucial question of balancing decentralisation and (re)centralisation in the governance of privacy, and the requirements for data minimisation, consent and transparency in the processing of personal data. The three dilemmas identified in the case study illustrate that concrete practices of privacy protection are by and large a skill, or a form of expertise, that is distributed among a wide range of actors in innovation ecosystems. The ability to find satisfactory compromises across these actors requires a high level of collaboration and experimentation.

In “Considering severity of safety-critical outcomes in risk analysis: an extension of fault tree analysis”, David Kaber and colleagues draw our attention to the input data of risk analysis. Despite increases in available data in some domains, other domains are still characterised by an absence of empirical observations. Hence, there are situations, particularly in novel work systems, where the data are sparse relative to the number of decision variables that must be considered in risk analysis and safety practice (the “curse of dimensionality”). The authors ask whether new and advanced tools can be established to create precise projections of safety-critical system outcomes in such situations, and they describe and discuss a method for accomplishing such projections. They also discuss the extension of existing system safety analysis methods into more digitalised industries, and the crucial role of the quality and quantity of input data in such methods.
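For readers less familiar with the baseline technique this chapter extends, the sketch below illustrates classical fault tree quantification: basic-event probabilities are combined through AND/OR gates to estimate the probability of a top event. The gate structure and probabilities here are purely hypothetical, chosen only to show the mechanics.

```python
# Minimal, illustrative fault tree quantification (hypothetical system).
# Assumes independent basic events, as in textbook fault tree analysis.

def and_gate(probs):
    """P(all inputs occur): product of independent event probabilities."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(probs):
    """P(at least one input occurs): complement of none occurring."""
    p = 1.0
    for x in probs:
        p *= 1.0 - x
    return 1.0 - p

# Hypothetical basic-event probabilities (per demand)
pump_a = 1e-3   # failure of pump A
pump_b = 1e-3   # failure of redundant pump B
valve = 5e-4    # failure of a single isolation valve

both_pumps_fail = and_gate([pump_a, pump_b])   # AND gate: redundancy helps
top_event = or_gate([both_pumps_fail, valve])  # OR gate: either path suffices

print(f"P(top event) = {top_event:.2e}")  # ~5.01e-04, dominated by the valve
```

Quantification of this kind presupposes reliable basic-event probabilities; the sparseness of exactly such inputs in novel work systems is the difficulty the chapter engages with.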

Nicola Paltrinieri in some ways picks up where Kaber and colleagues leave off in his chapter “Are we going towards ‘no-brainer’ safety management?”. Where Kaber and colleagues focus on the methods used to produce analyses, Paltrinieri emphasises the role of humans in interpreting the results of such methods. He shows how increases in available data and enhanced computational power can indeed be utilised for more continuous monitoring of industrial process conditions, but, as the title of the chapter suggests, the safety management of Industry 4.0 still depends on human judgement. Providing examples of AI-based prediction in three domains (release of hazardous materials in land-based industry, accidental drive-offs in offshore drilling operations and alarm chattering in a chemical plant), he shows that the predictions in all three cases still need to be interpreted and that we are nowhere near autonomous safety management. In addition, like Kaber and colleagues, Paltrinieri points to the critical role of input data for making predictions, and to the equally critical role of human judgement in preparing some types of data for analysis.

Turning to the health sector, Mark Sujan presents two examples of the use of AI in his chapter “Looking at the safety of AI from a systems perspective”. In the two examples (autonomous infusion pumps for intravenous medication administration and AI support in the recognition of out-of-hospital cardiac arrest), Sujan explains the specific functions of the two systems and relates these functions to their social and professional contexts. He shows that many of the challenges are highly familiar to safety researchers, such as the “ironies of automation” and the potential for “automation surprise”. Still, modern AI systems also pose new challenges, in that these systems are not necessarily put in place to replace physical work but rather to augment human actions. This gives AI systems different roles compared to traditional automation, and a different form of interaction between humans and technology. Sujan argues that these relationships between humans and technology, and the associated social, cultural and ethical aspects, will have greater importance for future AI applications than was the case with traditional automation. This, Sujan argues, calls for a transition from a technology-centric focus that contrasts people and AI to a more systems-based approach where AI and humans are seen as integrated parts of a wider health system.

In “Normal cyber-crisis”, Sarah Backman provides a high-level, yet empirically grounded, discussion of the phenomenon of large-scale cyber-crises that can affect the functioning of critical infrastructures. Based on interviews with senior experts on cyber-security and critical infrastructure in Sweden, the UK and the USA, she argues that the consequence dynamics of such crises can be explained by Charles Perrow’s Normal Accident framework. She shows how the transboundary nature of large-scale cyber-crises needs to be understood through several layers: (1) the technical layer, especially emphasising the role of legacy software and hardware; (2) the cognitive layer, referring to the difficulties of perceiving and recognising dangers when tight coupling and interactive complexity are transboundary phenomena; (3) the organisational layer, where centralisation can make accidents more consequential, while redundancy serves to create looser couplings; and (4) the macro-layer, illustrating how supply chains can be exploited by cyber-threat agents.

Picking up some of the threads from Backman’s chapter, Nævestad and colleagues examine how critical infrastructure organisations can reduce their digital vulnerability. The starting point of their chapter, “Information security behaviour in an organisation providing critical infrastructure: a pre-post study of efforts to improve information security culture”, is that people can be both a cause of information security incidents and a key element in protecting a system against such incidents. The authors examine the effects of interventions aimed at improving information security culture, with the aim of ultimately influencing behaviour related to information security. By means of a multivariate regression analysis of survey data consisting of employees’ perceptions of key dimensions of information security, controlling for education, seniority, prior knowledge and the department the respondents belonged to, they find information security culture to be the most important variable influencing information security behaviour.
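As an illustration of the general analytic setup described, the sketch below regresses security behaviour on security culture while controlling for the other covariates. All data are synthetic and all variable names are hypothetical; it does not reproduce the chapter’s actual survey instrument or model.

```python
# A minimal sketch of a multivariate regression with controls, on synthetic data.
# Variable names are hypothetical; the chapter's actual measures will differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400  # hypothetical number of survey respondents

df = pd.DataFrame({
    "culture": rng.normal(3.5, 0.7, n),           # perceived security culture (1-5)
    "education": rng.integers(1, 5, n),           # ordinal education level
    "seniority": rng.integers(0, 30, n),          # years in the organisation
    "prior_knowledge": rng.normal(3.0, 0.8, n),   # self-rated prior knowledge
    "department": rng.choice(["ops", "it", "admin"], n),
})
# Synthetic outcome: behaviour driven mainly by culture, plus noise.
df["behaviour"] = (1.0 + 0.6 * df["culture"]
                   + 0.1 * df["prior_knowledge"]
                   + rng.normal(0, 0.5, n))

# OLS with department entered as a categorical control
model = smf.ols(
    "behaviour ~ culture + education + seniority + prior_knowledge + C(department)",
    data=df,
).fit()
print(model.summary())  # by construction, 'culture' carries the largest effect
```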

Yann Ferguson discusses how the introduction of artificial intelligence in the workplace can influence the empowerment and productivity of workers, including the preservation of job quality, inclusiveness, health and safety. His chapter is titled “AI at work, working with AI—First lessons from real use cases”. Based on 150 use cases of specific applications of AI, five ideal-type “worker stories” are crystallised, all describing potential outcomes of the use of AI in the workplace: employees may be replaced, dominated, augmented, divided or rehumanised. All these ideal types are viable outcomes of the introduction of AI in the workplace. However, which of these consequences, or which combination of them, materialised was not only a matter of the technology itself but was strongly shaped by characteristics of the work and workers involved. Although not in a deterministic way, the application of AI was associated with a reconfiguration of work and of the form of engagement between workers, work and the technology involved.