
1 Introduction

In this chapter, we analyse the implementation of responsible research and innovation (RRI) in Higher Education, Funding and Research Centres (HEFRCs) from an institutional governance perspective. Governance in this context refers to ways of steering processes in a desirable direction, in this case in the direction of responsible research and innovation. We present examples of RRI governance practices in a selection of HEFRCs in Europe, which represent different modes of institutional governance. These examples are, however, mostly top-down, as bottom-up experiences are less well represented in the literature. Nevertheless, bottom-up governance is an ideal often voiced in theoretical discussions about RRI and Research and Innovation (R&I) governance.

We argue that these different modes of governance reflect different understandings of what it means to act responsibly in R&I, which correspond to two distinct conceptions of responsibility: a retrospective conception, according to which acting responsibly entails avoiding harm and correcting harm when committed, and a prospective conception, according to which acting responsibly entails contributing to doing good in the future. These two conceptions of responsibility, in turn, inform different narratives of what the main purpose of RRI governance should be: making R&I better equipped to avoid future harm and thus doing right; or aligning R&I with the needs and expectations of society at large, thus contributing to doing good.

Drawing on the examples presented, we suggest that bottom-up modes of governance seem especially well suited to integrating principles of RRI in everyday R&I practices, when RRI is understood to entail doing good. Bottom-up modes of governance tend to be open and inclusive and could thus be described as ethically desirable modes of RRI governance. However, the examples also indicate that some form of meta-governing structure is necessary to sustain bottom-up governance structures over time, which can potentially undermine their openness and inclusiveness.

2 Retrospective and Prospective R&I Governance

Two different conceptions of responsibility are reflected in different governance approaches to RRI. We refer to these as retrospective and prospective conceptions of responsibility. On a retrospective conception of responsibility, acting responsibly entails avoiding harm and correcting harm committed in the past. In this sense it takes a “backward-looking” perspective on what acting responsibly in research and innovation entails in practice. By contrast, a prospective conception of responsibility focuses attention on contributing to doing good in the future, thus taking a “forward-looking” perspective on the practical implications of acting responsibly in research and innovation [2]. In the current landscape of HEFRCs in Europe, some RRI governance practices reflect retrospective conceptions of responsibility, while others assume some version of a prospective understanding of responsibility. This is a significant conceptual distinction in the analysis of RRI governance in HEFRCs, since these two conceptions of responsibility inform distinct narratives of what the purpose of RRI governance should be: making R&I better equipped to avoid future harm and thus doing right (this narrative assumes a retrospective conception of responsibility), or aligning R&I with the needs and expectations of society at large, thus contributing to doing good (this narrative instead assumes a prospective conception of responsibility).

Retrospective notions of responsibility have traditionally translated into a governance of R&I practices concerned with avoiding harmful products or practices of science and innovation, with a consequent focus on risk governance. However, R&I governance processes “premised on formal risk-assessment, have done little to identify in advance many of the most profound [negative] impacts we have experienced through innovation” [3, 4]. Retrospective accounts of responsibility are inherently limited in guiding decisions related to the trajectories of R&I, both because of the narrow concepts of risk that they assume [3,4,5], and because of the hierarchical, top-down, regulatory forms of governance that they seem to entail, which run counter to the unpredictable, future-looking, collective enterprise of science and innovation practices. In response to R&I governance models premised on retrospective conceptions of responsibility, “a number of multi-level, non-regulatory, forms of science and innovation governance models have taken [a] forward-looking view of responsibility (…) attempt[ing] to introduce broader ethical reflection into the scientific and innovation process” [3, 4].

2.1 Research Ethics as a Governance Baseline

A basic object of the ethical governance of research and innovation is research ethics, usually through the mandatory introduction of research ethics committees that oversee that research practices and goals do not cause harm or violate rights. As often happens, this level of governance emerged in response to scandals and public outrage, and was introduced both with an eye to preventing bad things from happening again and to restoring trust in research [6].

Mechanisms to ensure research ethics may look like typical examples of a retrospective view of responsibility, and they tend to assume a narrow view of risk [7]. Yet, their mandate can be expanded to include elements of prospective responsibility. An example is provided by the Norwegian Research Ethics Act (2017), which gives Norwegian higher education and research institutions a statutory responsibility for putting research ethics into practice in their organization. Most of these institutions have ethics committees in place, mandated to handle cases having to do with fraud and other forms of misconduct in research. Norwegian national research ethics guidelines define the recognized research ethics norms in which the higher education and research institutions have a responsibility to provide training. The guidelines are specific to disciplinary areas and are managed by corresponding research ethics committees: the National Committee for Research Ethics in Science and Technology (NENT), the National Committee for Research Ethics in the Social Sciences and the Humanities (NESH), and the National Committee for Medical and Health Research Ethics (NEM) [8].

The respective guidelines place a responsibility on the institutions to include the broader societal perspective in the research ethics assessments they make; a responsibility that is already assumed in their legal obligation to provide training and education in research ethics. However, despite the broader scope of the national ethics guidelines, it is not common for higher education and research institutions in Norway to take a more proactive, prospective responsibility for research ethics, which would assume a broader societal understanding of what research ethics means. The guidelines are not only directed at the institutional level but are also intended to promote such reflection and awareness in the individual researcher. This further ambition requires a more integrated effort to be realised. So, in spite of some expansive aspirations, this research ethics approach remains in practice retrospective.

At the University of Twente we find a retrospective ethics committee system [1]. The University of Twente made research ethics assessment mandatory for all fields as of 2020. A discipline-specific system of research ethics committees has been established, consisting of three sectional committees, one for the social sciences, one for the engineering sciences, and one for the computer sciences, and a central (fourth) committee set up to monitor the three sectional committees. The ethics system is modelled on the recommendations of the SATORI project [9], which developed a standard for ethics committees. One important recommendation coming out of the SATORI project was that of establishing discipline-specific committees; another was that of securing a degree of transdisciplinarity in the composition of the committees. The committees should thus have expertise in the area being assessed, expertise from a neighbouring area, and legal expertise, and should include a member from outside the organization.

The Universitat Jaume I of Castellón (UJI) in Spain also has an internal ethics committee system [1]. Here the ethical governance approach is blended with the aspiration to promote the social responsibility of the university, which suggests a wider ambition tending towards a broad ethical governance. The former rector of UJI initiated a process to develop the university’s social responsibility policy. A focus group was established with the mandate to develop a draft ethics code. The focus group consisted of university staff, students, and other stakeholders, including companies that the university collaborates with. The group drafted an ethics code that established the ethical values of the university, with a section on integrity and responsible research practices. An ethics and social responsibility system was put in place to monitor and assess the implementation of the ethics code, including an ethics and social responsibility committee. The members of the Ethics and Social Responsibility Committee include staff, students, the general secretary of the university, the vice-rector for research, the director of the UJI equality office, the director of the deontological committee (research integrity committee), and the ombudsperson for students. All issues related to breaches of the Integrity and Responsible Research Practices Code are discussed in the committee. Moreover, the ethical assessment of research projects is carried out by a research deontological committee.

2.2 Retrospective Responsibility and Its Limits

In general, one characteristic of retrospective, or backward-looking, conceptions of responsibility is that they focus attention on one-off, time-limited acts, undertaken in the past, by identifiable agents, with adequate control and knowledge of the likely harmful consequences of the act (including unintended, yet reasonably foreseeable, harm) [10,11,12]. One problem with this focus on time-limited conduct in the past is that processes initiated in the past and still ongoing, such as research and innovation practices, and the structures within which these processes take place, such as the current, global academic incentive structure, fall outside the realm of evaluation when the question of responsibility for harm arises. Instead, backward-looking conceptions of responsibility are premised on an understanding of harmful acts as temporary deviations from a legal and social background structure that is assumed as normal [13]. The concern with time-limited harmful conduct thus also overlooks the fact that harm can be experienced, not merely as a one-off harmful incident, but as a persistent institutional reality, permeating everyday life in a structural way. To accommodate forms of harm and injustice that are structural, and thus not timebound, we need a concept of responsibility understood as generated by “deeds already underway”, to borrow a term from Hans Jonas [14], rather than as retrospectively generated by deeds already done. The target is patterns of action embedded in the cultural and material reality of a social group (cf. the notion of “structural violence” [15]). Such patterns produce harm or injustice even if no particular action can be singled out as wrong.

A second problem with retrospective conceptions of responsibility is that responsibility arises only if a harmful outcome can be linked to an identifiable wrongdoer. As Young explains, this ‘identity condition’ implies that one isolates “the one or ones liable (…) thereby distinguishing them from others, who by implication are not responsible” [13]. The identity condition is problematic also in the context of R&I practices, given the plurality of actors often involved in the knowledge production process, and the fact that there is often no interaction between the actors involved in the R&I process and those affected by its outcome. This is what Dennis Thompson calls the “problem of many hands” [16].

Lastly, retrospective conceptions of responsibility only recognise harm that could reasonably have been foreseen. With respect to R&I processes, we do not always know what (harmful) effects they will have in society. Here risks, uncertainty and conflicting evaluations mirror the circumstances that have led some risk scholars to develop adaptive, ongoing and participatory strategies of risk management, for instance Klinke and Renn [16]. The backward-looking model of responsibility is thus ill-suited both to handle the dispersed agency and uncertainties of R&I, and to inspire practices that are responsible in the sense of enabling actors to manage unpredictable risks and unintended consequences.

Committee systems remain the dominant accountability and ethics assessment mechanism for R&I projects. The question is whether this is a suitable, and sufficient, governance mechanism for the purpose of integrating RRI in R&I processes. The Norwegian ethics committee system illustrates how an ethics committee originally built on the model of a retrospective, top-down committee system can be combined with more distributed governing mechanisms aimed at setting the rules of the game and encouraging and facilitating reflection, through discipline-specific national guidelines and the creation of temporary national fora for debate on issues of general interest that raise ethical questions and dilemmas.

3 Prospective R&I Governance

A central premise underlying the concept of forward-looking obligations is that the responsibility to act so as to produce a desirable state of affairs, or to prevent bad outcomes in the future, increases proportionally with the capacity to influence others or our surroundings, be it people’s rights and freedoms, society’s basic institutions, or the environment and climate [14]. Science and technology have the potential to influence people, society and their environment in profound ways, in both a positive and a negative sense. Applying this understanding of a forward-looking conception of responsibility to R&I governance seems to entail at least two presumptions about the nature of science and the relation between science and society, both of which are debatable: (i) that potentially harmful trajectories of science and innovation can be identified and stopped or changed before new technologies are ‘locked in’ to societal practices and structures [18], and (ii) that the direction of science can be steered towards whatever society deems desirable. The objections to these presumptions, though, may in turn rest on debatable assumptions. The objection to (i) seems to rule out the possibility of what we may call a life-cycle, or adaptive, management of the unintended consequences of science and technology. Of course, we cannot assume that such management is possible, but it is an option to be tested. The objection to (ii) seems to assume that there is a clear and majoritarian public opinion about what is desirable for society. So, while this objection raises an important point about the possibility of imposing directionality on technoscientific advances, it overlooks the fact that social opinions on what is desirable are constantly evolving and are renegotiated in light of new experiences, challenges, public debates and frames. So, again, a continuing, process-oriented view of steering seems more appropriate, if possible. Keeping in mind the limits of prospective aspirations, let us have a look at some examples.

In January 2020 the Norwegian Research Council’s (NRC) new policy on open research came into effect. The policy addresses open research, RRI, and the involvement of stakeholders in R&I in a systematic, strategic way. The focus on these topics is not new to NRC; however, the policy is a first attempt at linking all these elements and integrating them into NRC’s work in a systematic way, as part of NRC’s new portfolio strategy [1]. Inclusion of stakeholders is fundamental to the way in which NRC works to realise the policy on open research. The involvement of stakeholders is an important part of NRC’s new organisational strategy, involving among other things a shift to portfolio management, a development which is in line with the move towards mission-thinking and involvement at the EU level [19]. NRC also strongly encourages stakeholder involvement at the project level.

Despite this commitment, the practice of stakeholder involvement remains difficult to realise and does not easily fit within the established working practices and constraints of R&I. A telling example comes from Digital Life Norway (DLN), a large Norwegian centre that promotes biotechnology research and innovation as well as transdisciplinarity. DLN awards a prize for the “transdisciplinary publication of the year”, open to publications authored or co-authored by researchers based in Norway. This provides a good observation point for stakeholder involvement, since such involvement is a key feature of transdisciplinarity. Yet, very few publications reflect a significant involvement of stakeholders outside academia. In 2021 only one submission satisfied this criterion, and in 2022 none, so that the prize will not be awarded.

Another case is the Science Ombud at the University of Oslo. The idea behind establishing a Science Ombud was to put in place a form of governance that could monitor a range of issues around research integrity, broadly understood, and not limited to preventing fraudulent behaviour [1]. The Science Ombud has an advisory role and functions as a low-threshold service for researchers employed at the university. The cases that the Ombud handles are often about co-authorship (40% of the cases in 2019), but the mandate also includes other issues, although its scope is not as broad as the responsibility concept of RRI. The Ombud has no formal authority, and the idea is that researchers should be able to seek out a low-level independent body within the institution to discuss and resolve what they themselves experience as ethically problematic issues. Confidentiality is an important principle in the functioning of the Ombud, both to ensure that the Ombud institution remains low-threshold, and to ensure that those who contact the Ombud do not ‘risk’ anything. The Ombud can therefore not proceed with a case without the consent of the person who reports it.

It is worthwhile elaborating on what it entails in practice to introduce a forward-looking conception of responsibility as a guiding principle for R&I governance, in contrast to a retrospective one. Arguably it requires a fundamental shift of mindset towards acknowledging “the intrinsically normative aspects of science and technology, including risk” [5]. At the core of prospective conceptions of responsibility is the idea that assigning responsibility to an agent concerns “the forward determination of what is to be done”, in order either to create a desirable outcome, or to prevent an undesirable one. The focus is not on a particular wrong committed by an identifiable agent who merits blame or punishment, but on “getting the right people and institutions to work together towards producing a desirable outcome or preventing a bad one” [10, 14].

What matters for responsibility to be generated on the forward-looking model is the combination of an outcome that is deemed valuable (be it the prevention of a harmful outcome or the facilitation of a desirable one), and institutional capacity or power to affect whether the outcome is achieved or not. With respect to R&I governance, a prospective view of responsibility entails a shift in focus from “preoccupations with ‘downstream’ risk-governance” [5] to a broader interest in the governance of profoundly political, and therefore public, concerns about what kind of society we want, and do not want, and what kind of knowledge is required to get there. This raises a very thorny issue: who should set the agenda for research and innovation? This question triggers the conflict between researchers’ and innovators’ freedom and societal control over the objects and goals of their research. Should researchers and innovators retain the autonomy of judgement that is often assumed to pertain to professionals with great expertise, or, since society pays the bill and bears the risks of research, does the principle that “who pays the piper calls the tune” legitimately hold? Pressing this principle faces the additional problem that it is very difficult to steer, from the outside, activities based on highly specialised knowledge. So, we need to look at the resources of governance.

4 Perspectives on Governance of R&I

Governance can be conceptualized as a distributed mode of governing involving other actors besides policy makers and top management. This allows “politics [to be] shaped through several and diverse initiatives and authorities” coming from … “networks and partnerships consisting of a range of public and private actors” ([20], our translation). This conceptualization of governance emphasizes the bottom-up dynamic of governance and points to the fact that while “governance arrangements may be designed to serve a purpose, [they] can also emerge and become forceful when institutionalized” [21]. As Rip points out, there is an important analytical distinction to be made between the above conceptualization of governance, understood as constituted by “bottom-up actions, strategies and interactions”, on the one hand, and governance understood as a mode of governing that “opens […] up an earlier centralized arrangement and make[s] it more distributed”, on the other [21].

Landeweerd and colleagues [22] conceptualize governance in the R&I sector as “the set of processes by which it is taken that stewardship [i.e. management] over (…) science and technology practices (research, innovation, etc.) ought to be organized in continuous calibration with those practices.” This continuous calibration, or adjustment, must necessarily entail dialogue with those enacting science and technology practices, thereby allowing a range of actors, including “policy makers, researchers, industry and civil society groups and nongovernmental actors”, to partake in the shaping of those practices. In this way, decision-making processes are embedded within practice itself, rather than centralizing the authority of decision at the policy-maker level [22]. Landeweerd and colleagues’ definition of governance is an example of what Rip refers to as governance whereby previously centralized arrangements are made more distributed, in contrast to governance as bottom-up actions and interactions that may in turn become institutionalized. Importantly, the distributed authority that governance entails should not be confused with earlier self-regulatory governing regimes characterized by scientists governing themselves internally, based on codes of conduct [23].

The concept of governance expresses a shift in the discourse on how science should be regulated, from internal self-regulation by scientists based on codes of conduct, to external regulation, yet with the ambition of allowing the actors enacting science and technology a greater degree of autonomy and a voice in how the regulation is exercised. Governance is a non-hierarchical mode of governing, in the sense that it entails a move away from attempts at steering research and innovation towards predefined aims (expressed for instance in thematic funding programs), or by stable means (such as economic incentives and predefined indicators of performance). Compared to old regulatory models of government, which articulate hierarchical co-ordination mechanisms based on [centralized] authority, the concept of governance expresses a mode of external regulation “that is more decentralized and open-ended” [3, 4]. Indeed, in contrast to government, “governance is distributed almost by definition” [21].

The RRI literature describes various forms of steering research and innovation (R&I) in the direction of responsibility in a de-centralized, open-ended way. Kuhlmann and colleagues focus on anticipatory or tentative governance models [24], Rip and colleagues on “real-time and other forms of technology assessment” [25], Wynne on “upstream engagement” [26], and Van den Hoven and colleagues on “value-sensitive design” [27]. Others use the terms network governance and interactive governance to capture the essence of governance [28].

Guston’s description of anticipatory governance practices at the Center for Nanotechnology in Society at Arizona State University (CNS-ASU) may serve as an example of what a multi-level, non-regulatory approach to steering R&I processes in the direction of responsibility entails in practice, with respect to governance tools [29]: “CNS-ASU unifies research programs… across three critical, component activities: foresight (of plausible future scenarios), integration (of social science and humanities research with nano-scale science and engineering), and engagement (of publics in deliberations). CNS-ASU also performs educational and training activities as well as public outreach and informal science education”. Governance in the CNS-ASU case focuses on integrating reflexivity in research and innovation activities and coordinating meeting places between scientists from the natural and social sciences and lay citizens. It aims at influencing actors in networks not by top-down steering, but by coordinating and facilitating cooperation, leaving concrete aims of the R&I activity to the networks, and allowing for probing and failing in the process [24].

Echoing the case described by Guston, Strand and colleagues observe that “[t]he question of how to govern (…) R&I networks from the perspective of funding bodies and/or government (…) is rapidly transforming from policy perspectives based on central control and accountability to a perspective where coordination and stimulation are key concepts” [30]. Importantly though, governance is not purely about coordinating and facilitating, but may involve a mix of soft and hard(er) governing mechanisms. Hence, as Stilgoe et al. point out, the governance mechanisms of facilitation, coordination and stimulation are commonly complemented with more traditional “policy instruments such as normative codes of conduct, standards, certifications, and accreditations” [3, 4]. That said, the prerogative of de-centralizing authority contained in the concept of governance means that governance in the area of R&I denotes, as a minimum, the act of “open[ing] up science and innovation” [31] to a wider range of inputs. Some would argue that this opening up entails creating new spaces of “public dialogue” [3, 4], which in turn seems to point to governance mechanisms that encourage and enable networking, broad inclusion and deliberation.

5 Why Involve Citizens in R&I Governance?

If to be responsible in R&I means to meet this ideal of a more representative co-construction, then responsibility entails democratizing research and innovation. Those affected by new technologies in the future need to be involved in debating the shaping of that future, notably by participating in the framing of the problems and questions to be researched [32]. The focus here is on the process, where democratic procedures are thought to contribute, among other things, to “the awareness of a more local, historically and socially contingent knowledge production”, and in this sense to a more reflexive, “socially robust”, knowledge and technology [33, 34]. Inclusion is an end, and not just a means to achieve a given end.

Importantly, as Randles and colleagues emphasize, the demand for inclusion “is not just about inclusivity of a wider and more diverse range of perspectives, but that inclusion follows a co-construction ambition (…) [where] wider interests participate in the framing of research, innovation, and responsibility ‘problems’; it is about how the processes of inclusion are constructed” [32]. A governance structure that aims at promoting and facilitating “upstream engagement” echoes the assumption that an inclusive, deliberative approach to science and innovation practices is an efficient mechanism for making R&I more reflexive, and, as a result, more anticipatory, and thus responsible.

The belief in the efficiency of upstream engagement as a mechanism for achieving more reflexive R&I practices has been justified with reference to the observation that “insight in the diversity of those participating in social-political interactions can only be gained by involving them in the governing process, considering them necessary sources of information” [35]. In a similar vein, Sykes and Macnaghten suggest that “choices concerning the nature and trajectory of [science and] innovation can be co-produced with publics in ways that authentically embody diverse sources of social knowledge, values and meanings” [36]. It has also been argued that research and innovation must engage with the public to serve the public [37, 38], and that “dialogue is the right thing to do for reasons of democracy, equity and justice” [36]. Others, however, have criticized the belief in public participation as an efficient mechanism for making R&I more reflexive, arguing that there is a lack of empirical evidence supporting its assumed quality and impact [39].

As pointed out by Landeweerd and colleagues above [22], responsibility in R&I is a matter of aligning science with the needs and expectations of society at large; that is, the goal is to create technologies that are not only not harmful, but also good, in the sense that they can be said to be socially, ethically, and environmentally desirable, and therefore also an expression of social priorities and informed preferences. If the main purpose of an R&I governance system is to ensure broad involvement in R&I processes, a relevant governance mechanism would be that of constructing good processes for involvement or setting up meeting places fit for that purpose; if, instead, the main purpose is to ensure that R&I contribute to solving the grand challenges of our time, a main governance mechanism may rather be that of facilitating transdisciplinary collaboration, where involvement of lay citizens could be one element, but not necessarily so. Note that here we assume that the grand challenges are identified by experts. If the grand challenges were identified through public involvement, then this opposition would disappear.

6 Fine-Tuning Citizen Involvement in R&I

Public engagement governance tools have been criticized, among other reasons, for framing participation exercises in ways that are useful to particular interests [40], for downplaying the low political status of the outputs of these exercises, and for serving as an “efficient tool of de-politicizing science and technology, in much the same way as ethics expert reviews” [22]. An ethics of involvement thus concerns not just the question of who should be involved in R&I processes and why, but also the question of how those involved should be involved. This, in turn, raises further questions: how those involved can participate on an equal footing with researchers, and how their contribution should be weighed against that of researchers. These questions relate to the critique of public engagement exercises concerning the low political status of their outputs. Furthermore, they raise the problem of how to weigh ethico-political and epistemological considerations against each other, as well as how to protect the integrity of science. Science and technical expertise can be corrupted in different ways. They can be used to mask political choices under the pretext of techno-scientific requirements, but they can also be pushed to accept assumptions that do not meet their epistemic standards and to incorporate value assumptions that are controversial and contested.

Landeweerd and colleagues [22] criticize the public participation model for taking a top-down regulatory form when put into practice, and for sharing the pitfalls of either frustrating the voice of “societal views and opinions or becom[ing] a scapegoat for pre-existing agendas”. Landeweerd and colleagues argue that RRI as a mode of governance should link the governance of R&I to what von Schomberg has called “normative anchor points”, such as sustainable development and social progress [41]. This move implies that the governance of R&I should no longer be restricted to “the definition and implementation of regulation in the form of negative constraints for science and technology but also of positive aims in a societal setting” [22], thereby broadening the governance of science “to include topics and issues addressing community values and collective behavior” [22].

Moreover, the whole process of science, and not just its products, should be subject to transdisciplinary dialogue, meaning deliberation across disciplinary divides as well as with a variety of stakeholders, including the non-expert public. Acceptability and desirability assessments should thus take place from the outset of R&I processes, when problems are framed, rather than at the stage when a project is defined or a product is ready to be introduced to the market. These assessments should take place at various stages throughout the process, and should involve a broad range of stakeholders, rather than being confined to scientific and ethical expertise.

RRI as a governance tool can be understood to move beyond the participatory governance approach “that merely emphasizes the inclusion of different actors”, to designate “the type of engagement that actors should exhibit in the process of doing research and innovation” in a responsible way [42]. The type of engagement that doing RRI entails can be summed up in the RRI dimensions articulated by Stilgoe and colleagues [3, 4]: anticipatory, reflexive, inclusive and responsive. Taken together, these criteria envision a continuous model of public engagement throughout the life-cycle of R&I. On Landeweerd and colleagues’ account, RRI as a mode of governing entails opening up science and innovation in a way that allows it to be “shaped through several and diverse initiatives and authorities” through “a range of public and private actors” [20] (our translation). The move towards a governance of R&I activities can thus be understood as a response to RRI’s normative commitment to opening up the shaping of science and innovation to society; to reduce, and even collapse, the society-science divide that informs, and is upheld by, the self-governing, technocratic and ethics expertise modes of governing R&I.

7 Meta-governance of R&I

We follow up on this by discussing different conceptions of ethical governance in HEFRCs. Our discussion takes us from top-down governing to bottom-up ideals of governance and their tensions, and further to the concept of meta-governance: facilitating the self-governance of networks through targeted procedural principles. These principles set the rules of the game and provide a common direction to R&I activities. Setting the rules of the game, however, is not a neutral intervention: it provides a frame and limits to self-governance. As the political theory of constitutionalism shows, procedures, frames and limits are ambivalent tools: they enable and they constrain, they confer power and they take away power. This is true for government as it is for governance. Meta-governance sounds like a less intrusive concept than governance. But meta-governance is the governance of self-governance. As soon as we spell this out, the tension between intrusion and non-intrusion in the self-governance process becomes visible. We draw lessons and discuss the essential tensions emerging from the RRI literature on governance and meta-governance that are relevant for informing the ETHNA System and similar RRI initiatives that aim to be open and inclusive.

The concept of RRI contains a dimension that designates responsibility as process, as well as a dimension that connects responsibility to particular outcomes [38]. Von Schomberg stresses that the process and product dimensions of RRI are interrelated. The innovation process should thus be “responsive, adaptive, and integrated”, and products developed through the innovation process should “be evaluated and designed with a view to [the] normative anchor points [of environmental protection] (…) human health, sustainability, and societal desirability” [38].

Owen and colleagues [36] argue that a framework for what they refer to as “responsible innovation” must include consideration not only of the products of research and innovation, but more profoundly of the purposes and underlying motivations of R&I, by which they mean “not just what we do not want science and innovation to do, but what we do want them to do”. This involves reflecting on “what sort of future(s) we want science and technology to bring into the world, what futures we care about, what challenges we want to meet, what values these are anchored in” [36]. A core question here is “how can the ‘right impacts’ be democratically defined?” [36]. One possible answer to that question is by constructing a procedural framework that ensures fair deliberation on the right impacts.

Randles et al. [32] argue that the inherent normativity of RRI raises the question of “how to deal with the inevitable tensions, conflicts and related power games that arise when a heterogeneous, pluralistic actor landscape with diverging interests is confronted by norms and values intended to change behaviour”. Given the complexity of the R&I networks that RRI as a governance mechanism aims to facilitate, accommodate and strengthen (be it as a normative claim or a pragmatic move), the question is how best to deal with the inevitable conflicts and tensions that will arise in any “collective search for and foundation of normative direction” [32]. Randles and colleagues suggest that rather than contributing to this collective search for normative foundation, one should construct governance mechanisms “able to address contestation and facilitate the capacities and capabilities of the relevant actors to engage in constructive negotiations”, allowing the actors involved in R&I networks to negotiate the normative substance of the R&I activity themselves [32].

In a somewhat similar vein, Landeweerd and colleagues argue that “acknowledging complexity means that governance should be less about defining clear-cut solutions and more about making explicit the political issues that are at stake in science and technology. In this sense governance becomes a process in which the political nature of science and technology is made explicit, where concerned actors express that there is de facto not one, single answer (…) This means focusing less on decision-making and more on identifying the shared values and interests we have in the issues on the table; [the focus should be] on collaboration and dialogue, and on empowering participants” [22].

The RRI as governance approach on this procedural account “do[es] not focus on what RRI is (…) but on the processes and mechanisms by which it is thought to be realized” [43]; it is about providing an institutional framework that facilitates collective processes of cooperation, deliberation and negotiation, through a mixture of governance mechanisms. These include overarching principles for legitimate procedures and codes of conduct setting the rules of the game, the establishment of spaces for debate and negotiation, and policy instruments “helping to achieve legitimate agreements” [43]. Owen and colleagues [36] propose that a prospective conception of responsibility suggests an evaluative framework for what kinds of processes qualify as legitimate in the governance of R&I, given the aim of steering R&I in the direction of responsible practices.

The ETHNA project [44] is a recent contribution to the RRI discourse on the governance of research and innovation (R&I). The proposed system of R&I governance in ETHNA includes four tools: a code of ethics and good practices in R&I, an ethics committee on R&I, an Ethics Line, and indicators to monitor progress and performance [45]. The philosophical foundation of the ETHNA system, Habermas’s theory of communicative action [46], presumes a procedural approach to governing research and innovation. The overarching aim is to steer R&I processes towards responsibility understood in a prospective, or forward-looking, way. Governance theorists tend to agree that in order to enhance networks’ alignment with and contribution to a public good there is a need for “a system of meta-governance to stabilize key players’ orientations, expectations, and rules of conduct” [47,48,49,50].

As Jessop explains, “[m]eta-governance [is] the ‘organization of self-organization’. It involves (…) the design of institutions and generation of visions which can facilitate not only self-organization in different fields but also the relative coherence of the diverse objectives, spatial and temporal horizons, actions, and outcomes of various self-organizing arrangements (…) [Organizations] have a major role here as the primary organizer of the dialogue among (policy) communities, as an institutional ensemble charged with ensuring some coherence among all subsystems, as the source of a regulatory order in and through which they can pursue their aims” [47]. The limit of such statements is their lack of specificity. One can make big claims about the virtues of meta-governance, but unless meta-governance is given a more specific content and is tested in practice, it runs the risk of being a purely verbal, rhetorical solution. On the other hand, if meta-governance is specified into strict, pre-defined procedures and methods, it runs the risk of being either context-insensitive (and hence top-down) or not feasible in real-life settings (and hence too abstract and ineffective). The ETHNA concept can be seen as an attempt to produce and test a prototype model of meta-governance.

The four principles of Owen and colleagues can provide a common RRI vision, and a common understanding of the rules of the game, in a given organization. As Sørensen argues, a meta-governance structure is needed to ensure that self-governing networks follow the rules of the game. If R&I networks are to contribute to solving societal grand challenges in a just and effective manner “they must be meta-governed with that purpose in mind”, to paraphrase Sørensen [49].

The concept of a meta-governance structure succinctly captures the function that Owen and colleagues’ four procedural principles can have in the governance of R&I in the direction of RRI, namely that of setting the ‘rules of the game’ and providing a common direction to R&I activities. In this sense the principles can be understood as constitutive of the regulatory order of R&I activities. The ETHNA system and similar systems of ethical governance of R&I can involve citizens based on a meta-structure in this sense. The four principles of Owen and colleagues could for instance inform the design and use of the four tools of the ETHNA System to involve citizens in the governance of R&I in a good way.

In the evaluation of the ETHNA system as an attempt to give concrete shape to the concept of meta-governance, it will be important to test to what extent the ETHNA tools manage to negotiate the dialectic between, on the one hand, inviting participation and empowering bottom-up initiatives, and, on the other hand, offering a framework that ensures that such involvement and initiatives meet the values and normative principles of RRI.