The Linear Model of Innovation

The Second World War and its immediate aftermath marked a critical moment in the unfolding relationship between science, society and the state, especially in the United States. The Manhattan Project in particular, involving the coordination of infrastructure and personnel in the development and production of the US nuclear programme, had demonstrated the utility of science in public policy, in this case its role in helping to win the war through the detonation of two atomic bombs over Japan. In November 1944, President Roosevelt commissioned Vannevar Bush, who had played a formative role in administering wartime military R&D as head of the US Office of Scientific Research and Development (OSRD), to produce a report laying out the contributions of science to the war effort and their wider implications for future governmental funding of science. The result, published in July 1945, was Science – The Endless Frontier (Bush, 1945), a report that became the hallmark of American policy in science and technology, and the blueprint and justification for many decades of increased funding of American science.

The Bush report is associated with the linear model of innovation, which postulates that the knowledge creation and application process starts with (government-funded) basic research, which then leads to applied research and development, and culminates in production, diffusion and associated societal benefit. Even if this sequential linkage was added post hoc, and is only partially and imperfectly reflected in Bush’s actual report (see Edgerton, 2004), it nevertheless acquired an iconic status as the origin and source of a dominant science policy narrative in which pure curiosity-driven science (knowledge for its own sake) was seen as both opposed to and superior to applied science, effectively operating as the seed from which applied research emerges, the economy grows and society prospers (Godin, 2006). As Jasanoff (2003) argues, the metaphor that gripped the policy imagination was the pipeline: ‘With technological innovation commanding huge rewards in the marketplace, market considerations were deemed sufficient to drive science through the pipeline of research and development into commercialisation’ (Jasanoff, 2003: 228). This logic was given further impetus by the diffusion of innovation literature, notably in E.M. Rogers’ classic text (1962), which again adopted a linear and determinist model of science-based innovation diffusing into society with beneficial consequences.

Central to the post-WW2 science policy narrative was the concept of the social contract: in exchange for the provision of funds, scientists—with sufficient autonomy and minimal interference—would provide authoritative and practical knowledge that would be seamlessly turned into development and commercialisation. The linear model understands science and policy as two separate spheres and activities. The responsibility of scientists is first and foremost to conduct good science, typically seen as guaranteed by scientists and scientific institutions upholding and promoting the norms of communalism, universalism, disinterestedness and organised scepticism (Merton, 1973). The ideal of science was represented as ‘The Republic of Science’ (Polanyi, 1962): science as separate from society and as a privileged site of knowledge production. The cardinal responsibility of science according to this model was to safeguard the integrity and autonomy of science, not least through practices of peer review as the mechanism that guarantees the authority of scientific claims to truth, and thus to ensure its separation from the sphere of policy and politics.

This division of powers served the interests of both actors: for scientists, a steady and often growing income stream as well as considerable autonomy; for politicians and policy-makers, a narrative in which they could claim that their policies were grounded in hard and objective evidence (‘sound science’) and not in subjective values or ideology. This division was also written into institutional arrangements for science policy. The Haldane Principle, for instance, that decisions about what and how to spend research funds should be made by researchers rather than politicians, was written into national science funding bodies in the UK as far back as 1918, operating especially after the Second World War as a powerful narrative for self-regulation and for safeguarding the autonomy of science.

So far, we have described the linear model of science and technology and the assumptions that underpin its governance, including its optimistic and deterministic view of the relationship between pure science and social progress. Yet, as the twentieth century progressed, this model came under increasing strain as a basis for robust governance in the face of real-world harms deriving from scientific and technological innovation. Traditional notions of responsibility in science centred on safeguarding scientific integrity, whereas responsibility in scientific governance came to include responsibility for impacts that were later found to be harmful to human health or the environment. The initial governance response was to acknowledge that (even well-conducted) science and technology could generate harms, but that these could be evaluated in advance, and within the bounds of scientific rationality, through practices of risk assessment. Following a report from the US National Research Council (1983), which systematised the process of risk assessment for government agencies through the adoption of a formalised analytical framework, a rigorous and linear scheme was promoted and disseminated in which each step was based on available scientific evidence and conducted in advance of the development of policy options. Risk assessment was thus a response to the problems of the linear model, but still very much within that model’s framing and worldview.

Notwithstanding the efficacy of risk assessment in mitigating the harms associated with science and technology, notably in relation to chemicals and instances of pollution, it did little to anticipate or mitigate a number of high-profile technology disasters that took place throughout the latter half of the twentieth century, and that demonstrated that science and technology could produce large-scale (and possibly systemic) ‘bads’ that evaded the technical calculus of science-based risk assessment (Perrow, 1984). High-profile disasters ranged from the Three Mile Island nuclear accident in the United States in 1979, to the Bhopal Union Carbide gas disaster in India in 1984, the Chernobyl nuclear disaster in Ukraine in 1986, the ‘mad cow’ BSE controversy in the UK and Europe throughout the late 1980s and 1990s, and the GM food and crop controversy in the 1990s and 2000s, first in Europe and then across much of the Global South. The nuclear issue in particular became a focal point throughout the 1970s and 1980s for wider concerns about technological modernity, manifested in large social movements mobilised against the potential of science-led innovation to produce cumulative, unknown and potentially cataclysmic risks. In the work of the sociologist Ulrich Beck, most famously, modernity was theorised as having entered a new phase, the risk society, in which science and technology had produced a new set of global risks that were unlimited in time and space, manufactured (rather than acts of God), potentially irreversible, incalculable, uninsurable, difficult or impossible to attribute, dependent on expert systems and institutions for their governance, and in relation to which society operated as an experiment in determining outcomes (Beck, 1992).

The saga of bovine spongiform encephalopathy (BSE), or ‘mad cow’ disease, in the UK and Europe is one such risk that was woefully inadequately governed by a reliance on formal processes of science-based risk assessment, and where the political controversy derived from the inadequate handling of a new disease in cattle under conditions of scientific uncertainty and ignorance, and in the context of Britain’s laissez-faire political culture. In this case, despite reassurances from government ministers, who claimed innocently to be following scientific advice that transmission across the species barrier was highly unlikely (the available risk assessments at the time having found no evidence that such a transmission could take place), a deadly degenerative brain disease spread from cattle to humans, escalating to such proportions as to threaten the very cohesion of the European Union (Macnaghten & Urry, 1998).

More generally, risk assessment as a formal mechanism of scientific governance came under sustained criticism (for an extension of this argument, see Jasanoff, 2016). First, it embodies a tacit presumption in favour of change, in assuming that innovations should be accepted in the absence of demonstrable harm. Second, it prioritises short-term safety considerations over long-term, cumulative and systemic impacts, including those on the environment and quality of life. Third, it privileges a priori assumptions of economic benefit, with limited space for public deliberation on those benefits and their distribution across society. Fourth, it restricts the scope of what is considered to be ‘scientific’ expertise, typically to a narrow set of disciplines, with limited scope for accessing the knowledge of ordinary citizens. And fifth, it ignores the values and deep-seated cultural presuppositions that underpin how risks are framed, including the legitimacy of alternative framings.

The Grand Challenge Model of Science for Society

While the linear model has been criticised for failing to account for the (especially systemic) risks associated with late modernity, it has also come under sustained criticism for offering an inadequate account of how the innovation system is (or should be) structured and for what ends. Throughout the latter part of the twentieth century, science and innovation became increasingly integrated and intertwined. The knowledge production system moved from the rarefied sphere of elite universities, government institutes and industry labs into new sites and places that now included think tanks, interdisciplinary research centres, spin-off companies and consultancies. Knowledge itself became less discipline-based and more bound by context and practical application. Traditional forms of quality control via peer-based systems were expanded to include new voices and actors, adding criteria related to the societal and economic impact of research. Variously framed using new intellectual concepts that included ‘Mode 2 knowledge’ (Gibbons et al., 1994), ‘post-normal science’ (Funtowicz & Ravetz, 1993), ‘strategic science’ (Irvine & Martin, 1984) and the ‘triple helix’ (Etzkowitz & Leydesdorff, 2000), a new model of knowledge production emerged in which science came to be represented as the production of socially robust or relevant knowledge, alongside and often in conflict with its traditional representation as knowledge for its own sake. Interestingly, the Mode 2 authors, in a later book, situated this transformation within accounts of societal change, particularly the Risk Society and the Knowledge Society, in which ‘society now speaks back to science’ (Nowotny et al., 2001: 50; see also Hessels & van Lente, 2008).

One institutional response to critiques of the linear model has been the development of initiatives aimed at ensuring that science priorities and agenda-setting processes respond to the key societal challenges of today and tomorrow. The ‘grand challenge’ approach to science funding best illustrates this response. Historical examples of grand challenges range from the prize offered by the British Parliament in 1714 for the calculation of longitude to President Kennedy’s challenge in the 1960s of landing a man on the Moon and returning him safely to Earth. However, it was in the 2000s that the concept developed into a central organising trope in science policy, propelled inter alia by the Gates Foundation as a way of mobilising the international community of scientists to work towards predefined global goals (Brooks et al., 2009). In European science policy, the Lund Declaration of 2009 was a critical moment, emphasising that European science and technology must seek sustainable solutions in areas such as global warming, energy, water and food, ageing societies, public health, pandemics and security.

More generally, the concept has been embedded across a wide array of funding initiatives, including most recently the European Commission’s eighth Framework Programme, Horizon 2020 (€80 billion of funding over seven years, from 2014 to 2020), which adopts a challenge-based approach reflecting both the policy priorities of the European Union and the public concerns of European citizens. Legitimated as responding to normative targets enshrined in Treaty agreements, these challenges include goals on health and wellbeing, food security, energy, climate change, inclusive societies and security. The approach assumes, in other words, that science does not necessarily, when left to its own self-regulating logics and processes, respond to the challenges that we as a society collectively face. It needs some degree of steering, or shaping, on the part of science policy institutions to ensure alignment. It is thus embedded in a discourse about the goals, outcomes and ends of research.

Over the last decade, the grand challenge concept has become deeply embedded in science policy institutions as a central organising concept that appeals to national and international funding bodies, philanthropic trusts, public and private think tanks and universities alike. It operates not only as an organising device for research calls but also as a way of organising research within research-conducting organisations, notably universities. My own institution, Wageningen University, for example, configures its core mission and responsibility in strategic documents (e.g., annual reports, strategic plans, the corporate brochure) as that of producing ‘science for impact’, principally through responding to the global societal challenges of food security and a healthy living environment (Ludwig et al., 2018).

The grand challenge concept is clearly aligned to the ‘impact’ agenda, in which researchers increasingly have to demonstrate impact (or pathways to impact) in research funding applications and evaluation exercises. These concepts help reconfigure the social contract for science such that, at least in part, the responsibility of science is to respond to the world’s most pressing societal problems, while the responsibility of science policy institutions is configured as that of ensuring that the best minds are working on those problems (Brooks et al., 2009). Perhaps not surprisingly, such initiatives have proved controversial within the scientific community, as witnessed, for example, in the backlash against plans by one of the UK research councils, the Engineering and Physical Sciences Research Council (EPSRC), to prioritise its funding for grants, studentships and fellowships according to national importance criteria (its ‘shaping capability’ initiative; see Jump, 2014).

Flink and Kaldewey (2018) add a further analytical layer. They produce a historically situated linguistic analysis of the ‘grand challenge’ science policy concept and the ways in which it has replaced the earlier figure of the scientist prevalent in the linear model of innovation. Since at least Vannevar Bush’s report, Science – The Endless Frontier (1945), the dominant figure of the scientist had been that of a lone individualist, discovering new frontiers through pioneering or ‘frontier’ research at the rock face of knowledge. However, while the ideal-type of this kind of scientist was that of ‘the risk-taking behavior of rugged competitive individualists pioneering into the unknown’ (Flink & Kaldewey, 2018: 16), the grand challenge concept configured a different kind of scientist. The grand challenge scientific endeavour remains competitive but has now become collective, even sports-like, in the ways in which teams are presented as striving to achieve a significant long-term goal, the accomplishment of which will have significant societal impact. This tends to favour the organisation of science in highly interdisciplinary and collaborative units, as has become the case in Systems Biology or Synthetic Biology. Yet, even though grand challenges, by definition, are attempts to respond to society and to the public interest, the choice and framing of the challenges themselves have tended to remain top-down decisions by funding organisations (Calvert, 2013), framed in ways that often lend themselves to ‘silver bullet’ technological solutions (Brooks et al., 2009). Nevertheless, the grand challenge concept can be seen as part of an attempt to establish a new social contract for the public funding of science, and as an important counterweight to the other dynamic that has impacted on the autonomy of science—namely, the relentless influence of economic drivers that has come to dominate research policy agendas (Nuffield Council on Bioethics, 2012).

The Co-Production Model of Science and Society

If the ‘grand challenge’ science-policy model seeks to reconfigure the social contract of science such that its core value lies, not with the pursuit of pure knowledge but in providing solutions to the world’s most pressing problems, the co-production model and approach seeks to reconfigure the social contract in another direction. While the linear model views science as the motor of societal progress, and while the grand challenge model views science as the provider of solutions for society, the model of co-production views the spheres of science and social order as mutually constitutive of each other.

Developed by Sheila Jasanoff and colleagues, and building on decades of scholarship in science and technology studies (STS), the co-production concept criticises the idea of science as producing incontrovertible fact. As Jasanoff and Simmet claim: ‘Facts that are designed to persuade publics are co-produced along with the forms of politics that people desire and practice’ (2017: 752). This co-production takes place in deciding which facts (or truth claims) to focus on (which is seen as a normative issue), in identifying whose interests the facts are used to support (given that facts are never seen as independent from values or indeed ideology), and in observing that public facts are achievements, or what Jasanoff and Simmet call ‘precious collective commodities, arrived at … through painstaking deliberation on values and slow sifting of alternative interpretations based on relevant observations and arguments’ (2017: 763).

There are three broad implications that derive from this approach. First, if the authority and durability of public facts depend, not on their status as indelible truths, but on the virtues and values that have been built into the ethos of science over time (e.g., through careful observation, transparency, open critique and reasoned argument), it follows that we need to give special attention precisely to these virtues, and to how these have been cultivated over time by institutional practice, as an important constituent of democratic governance. Or as Jasanoff and Simmet claim: ‘building strong truth regimes requires equal attention to the building of institutions and norms’ (2017: 764).

Second, if science and social order are co-produced, then it becomes incumbent on the research enterprise to examine precisely the relationship in practice between scientific knowledge production and social order as evinced in particular sites. Variously studying in depth the operation of scientific advisory bodies, technical risk assessments, public inquiries, legal processes and public controversies, science and technology studies (STS) scholars have identified both the values out of which science is conducted, including the interests it serves, and the ways in which these configurations can, over time, contribute to the formation of new meanings of life, citizenship and politics, or what more generally can be dubbed ‘social ordering’ (see, amongst many others, Jasanoff, 1990, 2004; Miller, 2004; Owens, 2015; Rose, 2006).

Third, if it is acknowledged that science and social order are co-produced, even if unwittingly through forms of practice (not least due to the continued prevalence of the fact–value distinction and the long reach of the linear model), the question arises as to what values underpin the scientific knowledge-production system (and its associated cultures), and to what extent these align with broader societal values. Indeed, to what extent have the values and priorities tacitly embedded in scientific innovation been subjected to democratic negotiation and reflection? Or, more worryingly, to what extent are dominant scientific values reflective of those of incumbent interests that may be, perhaps unwittingly, closing down possibilities for different scientific pathways linked to alternative visions of the social good (Stirling, 2008, 2014)? Responding to these questions, a line of research has emerged since the late 1990s, particularly prevalent in northern parts of Europe, aimed at early-stage public and societal participation in technoscientific processes as a means of fostering democratic processes in the development, appraisal and use of science and technology. Such initiatives, funded both by national funding bodies and by international bodies such as the European Commission, are typically aimed at improving relations between science and society and at restoring legitimacy (e.g., see European Commission, 2007). In practice, they have been developed for reasons that include the belief that they will help restore public trust in science, avoid future controversy, lead to socially robust innovation policy, and render scientific culture and praxis more socially accountable and reflexive (Irwin, 2006; Macnaghten, 2010). Initiatives aimed at public engagement in science have become a mainstay in the development of potentially controversial technology, notably in the new genetics, and have even been institutionally embedded in the machinery of government through initiatives such as the UK Sciencewise dialogues on science and technology (Macnaghten & Chilvers, 2014). In academia, they have contributed to institutional initiatives that include Harvard University’s Science and Democracy Network, and to the sub-discipline of public engagement studies (Chilvers & Kearnes, 2016).

A Framework of Responsible Research and Innovation

The responsible research and innovation (RRI) concept represents the most recent attempt to bridge the science and society divide in science policy. Actively promoted by the European Commission as a cross-cutting issue in its Horizon 2020 funding programme (2014–2020), and embedded in its sub-programme ‘Science with and for Society’ (SwafS), RRI emerged as a concept designed both to address European (grand) societal challenges and as a way to ‘make science more attractive, raise the appetite of society for innovation, and open up research and innovation activities; allowing all societal actors to work together during the whole research and innovation process in order to better align both the process and its outcomes with the values, needs and expectations of European society’ (European Commission, 2013: 1). To some extent RRI has been a mere ‘umbrella term’, operationalised through projects aimed at making progress in traditional domains of European Commission activity, nominally the so-called five keys of gender, ethics, open science, science education, and the engagement of citizens and civil society in research and innovation activities (Rip, 2016). Under this interpretation RRI is simply a continuation of initiatives aimed at bringing society into EU research policy, starting with the Framework 6 programme (2002–2006) ‘Science and Society’ and its follow-on Framework 7 programme (2007–2013) ‘Science in Society’; it has been identified as (yet another) top-down construct, introduced by policymakers and not by the research field itself (Zwart et al., 2014: 2), standing ‘far from the real identity work of scientists’ (Flink & Kaldewey, 2018: 18).

However, another—and potentially more transformative—articulation of the RRI concept is also available. Alongside colleagues Richard Owen and Jack Stilgoe, I have been involved in developing a framework of responsible innovation for the UK research councils. Our intention at the time was to develop a framework out of at least three decades of research in science and technology studies (STS), building on the co-production model as articulated above. Our starting point drew on the observation that from the mid-twentieth century onwards, as the power of science and technology to produce both benefit and harm had become clearer, it had become apparent that debates concerning responsibility in science needed to be broadened to extend to the collective and external impacts of science and technology (foreseen and unforeseen) on society.

Responsibility in science governance has historically been concerned with the ‘products’ of science and innovation, particularly impacts that are later found to be unacceptable or harmful to society or the environment. Recognition of the limitations of governance by market choice has led to the progressive introduction of post hoc—and often risk-based—regulation, such as in the regulation of chemicals, nuclear power and genetically modified organisms. This has created a well-established division of labour in which science-based regulation, framed as accountability or liability, determines the limits or boundaries of innovation, while the articulation of socially desirable objectives—or what René von Schomberg describes as the ‘right impacts’ of science and innovation—is delegated to the market (von Schomberg, 2013). For example, with genetically modified foods, the regulatory framework is concerned with an assessment of potential risks to human health and the environment rather than with whether this is the model of agriculture we collectively desire.

This consequentialist and risk-based framing of responsibility is limited, because the past and present do not provide a reasonable guide to the future, and because such a framework has little to offer to the social shaping of science towards socially desired futures (Adam & Groves, 2011; Grinbaum & Groves, 2013). With innovation, we face a dilemma of control (Collingridge, 1980), in that we lack the evidence on which to govern technologies before pathologies of path dependency, technological lock-in, ‘entrenchment’ and closure set in. Dissatisfaction with a governance framework dependent on risk-based regulation and with the market as the core mediator has moved attention away from accountability, liability and evidence towards more future-oriented dimensions of responsibility—encapsulated by concepts of care and responsiveness—that offer greater potential for reflection on uncertainties, purposes and values and for the co-creation of responsible futures.

Such a move is challenging for at least three reasons: first, because there exist few rules or guidelines to define how science and technology should be governed in relation to forward-looking and socially desirable objectives (see Hajer, 2003, on the concept of the institutional void); second, because the (positive and negative) implications of science and technology are commonly a product of complex and coupled systems of innovation that can rarely be attributed to the characteristics of individual scientists (see Beck, 1992, on the concept of ‘organised irresponsibility’); and third, because of a still-pervasive division of labour in which scientists are held responsible for the integrity of scientific knowledge and in which society is held responsible for future impacts (Douglas, 2003).

It is this broad context that guided our attempt to develop a framework of responsible innovation for the UK research councils (Owen et al., 2012; Stilgoe et al., 2013). Building on insights and an emerging literature largely drawn from STS, we started by offering a broad definition of responsible innovation, derived from the prospective notion of responsibility described above:

Responsible innovation means taking care of the future through collective stewardship of science and innovation in the present. (Stilgoe et al., 2013: 1570)

Our framework originates from a set of questions that public groups typically ask of scientists, or would like to see scientists ask of themselves. Based on a meta-analysis of cross-cutting public concerns articulated in UK Sciencewise government-sponsored public dialogues on science and technology, we identified five broad thematic concerns that structured public responses: concerns with the purposes of emerging technology, with the trustworthiness of those involved, with whether people feel a sense of inclusion and agency, with the speed and direction of innovation, and with equity, i.e., whether it will produce a fair distribution of social benefit (Macnaghten & Chilvers, 2014). This typology, which appears to be broadly reflective of public concerns across a decade or so of research and across diverse domains of emerging technology (amongst our own studies, see Grove-White et al., 1997; Macnaghten, 2004; Macnaghten & Szerszynski, 2013; Macnaghten et al., 2015; Williams et al., 2017), can be seen as a general approximation of the factors that mediate concern and that surface in fairly predictable ways when people discuss the social and ethical aspects of an emerging technology. If we take these questions to represent aspects of societal concern in research and innovation, responsible innovation can be seen as a way of embedding deliberation on them within the innovation process. From this typology we derived four dimensions of responsible innovation—anticipation, inclusion, reflexivity and responsiveness (the AIRR framework)—that provide a framework for raising, discussing and responding to such questions. These dimensions are important characteristics of a more responsible vision of innovation and can, we argue, be heuristically helpful for decision-making on how to shape science and technology in line with societal values.

Anticipation is our first dimension. Anticipation prompts researchers and organisations to develop capacities to ask ‘what if…?’ questions: to consider contingency, what is known, what is likely, and what impacts are possible and plausible. Inclusion is the second dimension, associated with the historical decline in the authority of expert, top-down policy making and with the deliberative inclusion of new voices in the governance of science and technology. Reflexivity is the third dimension, defined, at the level of institutional practice, as holding a mirror up to one’s own activities, commitments and assumptions, being aware of the limits of knowledge and being mindful that a particular framing of an issue may not be universally held. Responsiveness is the fourth dimension, requiring science policy institutions to develop capacities to focus questioning on the three dimensions listed above and to change shape or direction in response to them. This demands openness and leadership within policy cultures of science and innovation, such that social agency in technological decision-making is empowered.

To summarise, our framework for responsible innovation starts with a prospective model of responsibility, works through four dimensions, and makes explicit the need to connect with cultures and practices of science and innovation. Since its inception, the framework has been put to use by researchers, research funders and research organisations alike. Indeed, since we developed the framework in 2012, one of the UK research councils, the Engineering and Physical Sciences Research Council (EPSRC), has made an explicit policy commitment to it (EPSRC, 2013; see also Owen, 2014). Starting in 2013, using the alternative ‘anticipate–reflect–engage–act’ (AREA) formulation (see Murphy et al., 2016), the EPSRC has developed policies that set out its commitments to develop and promote responsible innovation, and its expectations both of the researchers it funds and of their research organisations.

Discussion and Conclusion

In this paper I have discussed four paradigmatic ways of governing science and technology. I began with the linear model, in which science is represented as the motor of prosperity and social progress, and in which the social contract for science is configured as one in which the state and industry provide funds for science in exchange for reliable knowledge and assurances of self-governed integrity. I then explored the dynamics and features that contributed towards a new social contract for science, in which the organisation and governance of science became explicitly oriented towards the avoidance of harms and the meeting of predefined societal goals and so-called grand challenges. A co-production model of science and society was subsequently introduced as a more adequate understanding of how science and social order are mutually constitutive of each other, and of the implications of such an approach for science and democratic governance. Finally, I set out a framework of responsible (research and) innovation as an integrated model for aligning science with and for society.

These four models should not be seen as wholly distinct or unrelated. Typically, they operate in concert, sometimes harmoniously, other times less so, in any governance process. Nevertheless, the broad move beyond the linear model of science and society must be applauded, both because science devoid of societal shaping is clearly poorly equipped to respond to the societal challenges we collectively face, and because the premises that underpin the linear model, such as the fact–value distinction, are poorly aligned with contemporary intellectual debate. Unshackled from outdated distinctions, a framework of responsible research and innovation offers opportunities, tools and possibilities to make science and its governance more responsive to the question of ‘what kind of society do we want to be’ (Finkel, 2018: 1).