Science policies in the US, Europe and elsewhere have in recent years called for ‘responsible innovation’ in science and technology, implying that social and ethical considerations should be integrated with research and development (R&D) processes (21st Century Nanotechnology Research and Development Act 2003; European Commission 2004, 2007; Netherlands Organisation for Scientific Research 2008). Political concern for the societal impact of science and technology is in itself nothing new (cf. Roosevelt 1936). What distinguishes recent policies is a widespread interest in socio-technical integration at the ‘midstream’: ‘co-operative’ or ‘interdisciplinary’ research that targets early stage R&D decisions, as opposed to ‘upstream’ funding or ‘downstream’ regulatory decisions (Fisher et al. 2006). The European Commission for instance aims to: “encourage actors in their own disciplines and fields to participate in developing Science in Society perspectives from the very beginning of the conception of their activities” (European Commission 2007, p. 6).

While these mandates mark a political interest in interdisciplinary research efforts to integrate social and ethical concerns at early stages of R&D, the appropriate means by which such integration is to occur is still open to experimentation. The recently developed framework of midstream modulation (MM) opens one potential avenue for interdisciplinary collaboration in the research laboratory. Two ‘laboratory engagement studies’ (Fisher 2007) have applied this framework to address the question of social responsibility in research practices, focusing on researchers’ critical reflections on the broader socio-ethical context of their work. These studies sought to gauge to what extent MM could help render more visible the broader context of laboratory research, and whether research participants considered critical reflection on this broader context to be relevant.

Engaging Researchers with the Socio-Ethical Context of Their Work

Contrary to the ‘neutrality view’ of social responsibility—the notion that the social responsibility of researchers is exhausted by the disinterested pursuit of scientific knowledge—scholars have argued that the social responsibility of researchers should include critical reflection on the socio-ethical context of their work (Verhoog 1980). This normative stance reflects recent observations in ethical and normative scholarship (cf. Douglas 2009), including engineering ethics and the ethics of science and technology. Several engineering ethicists have argued for the early assessment of moral issues in technological design by direct involvement of scientists and engineers. Van de Poel and Van Gorp, for example, have argued that “designing engineers have a moral duty to reflect on the ethically relevant choices they make during the design process” (2006, p. 335). While laboratory science differs in many ways from engineering, similar calls have been voiced with respect to laboratory research. According to Ziman, “the transformation of science into a new type of social institution” requires that the ethical dimensions of research become part of the ‘ethos’ of science (1998, p. 1813). Accordingly, various scholars have suggested new multidisciplinary engagements in light of the radical ethical challenges posed by new and emerging science and technology (Herkert 2009, personal communication; Khushf 2006; Moor 2005; Schuurbiers et al. 2009b).

If ethical and normative scholarship has established a moral imperative for, and a general vision towards, integrating such reflection into research, it has been less clear about how to implement this vision. Theoretically established claims that scientists and engineers should reflect on the normative dimensions of their work do not in themselves enforce or encourage such reflection. Indeed, policy calls for ethical reflection may have at best a tangential effect on research practices because researchers generally perceive the broader socio-ethical context of research as peripheral to their work (Guston 2000; Rappert 2007; Schuurbiers et al. 2009a). The question of implementation can thus stymie broad normative commitments to ethical reflection in research practice. The studies presented here sought to tackle this challenge by supplementing the descriptive techniques of MM with the explicit normative commitment of an ‘embedded ethicist’. While MM is attuned primarily to raising ‘reflexive awareness’ among R&D practitioners (Fisher et al. 2006), could it also offer possibilities for defining a context-sensitive form of ethics, one that uses ethnographic methods to open up the ‘black box of science and technology’ to normative inquiry (Van de Poel and Verbeek 2006)?

Midstream Modulation

Midstream modulation is a framework for guiding intervention-oriented activities in the laboratory that aims to elucidate and enhance the ‘responsive capacity’ of laboratories to the broader societal dimensions of their work (Fisher et al. 2006). Developed by Erik Fisher during a three-year laboratory engagement study, MM has been applied in a range of laboratories around the world as a form of ‘socio-technical integration research’, or STIR (Fisher and Guston 2008). MM extends more traditional laboratory ethnographies by augmenting participant observation methods with distinct engagement tools that allow for feedback, discussion and exploration of research decisions in light of their societal and ethical dimensions. An ‘embedded’ social or human scientist interacts with laboratory practitioners by closely following and documenting their research, attending laboratory meetings, holding regular interviews and collaboratively articulating decisions as they occur through the use of a protocol that maps the evolution of research and helps feed back observation and analysis into the laboratory context itself (Fisher 2007). Regular use of the protocol allows for collaborative exploration of the nature of research decisions, with the ultimate aim of shaping technological trajectories by rethinking the processes that help characterize them (Fisher et al. 2006).

Since the general possibility and utility of MM were tested in an earlier pilot study (Fisher 2007), the studies presented here aimed to explore the extent to which MM could be applied to enhance lab-based critical reflections on the broader socio-ethical context of research. As such, they attempted to bring together the normative approaches of the ethics of science and technology with the descriptive richness of science and technology studies (STS) (Radder 1998; Van de Poel and Verbeek 2006; Zuiderent-Jerak and Jensen 2007). These research studies asked two questions: (1) How can broader social and ethical dimensions of research be rendered visible in the laboratory? and (2) Do laboratory practitioners perceive critical reflection on the broader socio-ethical context of their work to be relevant?

First- and Second-Order Reflective Learning

To assess the research findings in light of these questions, I distinguish between first- and second-order reflective learning (Van de Poel and Zwart 2009; cf. Sclove 1995; Wynne 1995; Schot and Rip 1997; Grin and van der Graaf 1996). First-order reflective learning is an iterative process by which a professional experimentally finds solutions to problems using several lines of inquiry. This process “takes place within the boundaries of a value system and background theories” (Van de Poel and Zwart 2009, p. 7). First-order reflective learning thus concerns “improvement of the technology and the improved achievement of one’s own interests in the network.” Second-order reflective learning, on the other hand, “requires a person to reflect on his or her background theories and value system” (Van de Poel and Zwart 2009, p. 7). In second-order learning, value systems become the object of learning while in first-order learning these are taken for granted.

This distinction can be applied to the social responsibility of researchers: first-order reflective learning is reflection ‘within’ the research system. Van de Poel and Zwart note, “In first-order reflective learning, moral issues are dealt with within the bounds of the background theories and are approached from within the value system of the actor” (Van de Poel and Zwart 2009, p. 7). In terms of responsibility, such forms of reflection involve compliance with one’s internal responsibilities towards the research community, such as the responsible conduct of research and environmental health and safety. Second-order reflective learning involves reflection ‘on’ the research system, including the value-based socio-ethical premises that drive research, the methodological norms of the research culture, and the epistemological and ontological assumptions upon which science is founded (Verhoog 1980): the background theories and values of the research system itself become the object of learning.

The value of MM with respect to the challenge for the ethics of science and technology lies in its ability to support second-order reflective learning. In addition to several instances of first-order learning that occurred as a result of the interdisciplinary interactions, MM served to enhance critical reflection on the socio-ethical context of lab work. Note that the studies did not assume that laboratory practitioners have a general ‘reflexive deficit’, or that scholars from the humanities and social sciences are somehow more reflexive. Rather, they sought to test the hypothesis that social scientific and humanistic practitioner knowledge could complement, through interdisciplinary collaboration, natural scientific practitioner knowledge.

Midstream Modulation in Delft and Tempe

The STIR studies described here consisted of two consecutive laboratory engagement studies: in the Department of Biotechnology at Delft University of Technology, The Netherlands (Fall 2008) and in the School of Life Sciences at Arizona State University, Tempe, USA (Spring 2009). A total of eight laboratory researchers participated in the studies. I had regular interactions during a period of 12 weeks with four of these researchers. The other four participants acted as ‘controls’, doing only the pre- and post-interviews at the beginning and end of the study (see Table 1). The participants were all PhD students in molecular biology. Researchers in the Delft Department of Biotechnology focused on the use of micro-organisms for industrial production of chemicals from renewable resources and as diagnostic systems, while those in the Tempe Photosynthesis Group applied genomic and molecular biological techniques to elucidate physiological processes in cyanobacteria with a view to bioenergy generation.

Table 1 Research participants

Data Collection

Following the MM pilot study (Fisher and Mahajan 2006), interactions with research participants consisted of pre- and post-interviews, participant observation, regular application of the protocol and collaborative drafting of visual representations of the research process. The pre- and post-interviews enquired into the research objectives, decision-making structures, implicit and explicit references to societal goals in the project description and changes in participants’ awareness of and attitude towards ethical and societal dimensions of the research. The pre-interviews marked the beginning of a period of participant observation in which I followed the ‘high interaction’ participants, spending 8–12 hours per week in the lab and participating in regular lab meetings whenever possible.

During the research phase, the STIR protocol was applied (Fisher and Mahajan 2006; Fisher 2007; Schuurbiers and Fisher 2009). Reconstructing decisions by way of the protocol allows for reflection on how the interplay of various decision components leads to decision outcomes, constituting a collaborative process in which both observed and reported information is reflected back to the practitioner over time. The embedded scholar thus becomes “part of the convergence of goals, strategies, and socio-material configurations” (Fisher and Mahajan 2010). Given the normative background that motivated these studies, my engagements attempted, in addition to bringing out the latent, ‘de facto’ considerations that MM targets, to examine how issues in the ethics of science and technology as such could be brought to bear on the research process, with the goal of ‘deliberately’ expanding what researchers took into account (Fisher et al. 2006). Since the goal of STIR was to explore the extent to which interdisciplinary interactions may serve to bring out a range of potential latent and implicit broader issues, I tried not to determine in advance which issues were to be considered relevant. Indeed, a wide range of issues emerged as a result of the interactions—and were classified only in retrospect (see Table 2).

Table 2 Ethically relevant topics discussed during the lab studies

Schematic overviews of the research progress indicate the links between the interrelated series of decision processes mapped over the twelve-week period (e.g., see Fig. 1). As with the protocols, the initial drafts of these overviews were based on earlier conversations, discussed regularly with participants, and adapted on the basis of the feedback provided. New drafts were discussed at the following meeting, and the iterative process was repeated. These overviews, and the regular discussion of them, confirmed my understanding of the unfolding research project, built my ‘interactional expertise’ (Collins and Evans 2002), and identified relationships between the research and the broader discussions held during the protocol meetings.

Fig. 1 Drafting a research overview. Feedback from the research participant on the initial draft led to a subsequent draft, ultimately leading to a shared understanding of the research processes and the considerations invoked

Objects of Reflection

The iterative process of observation and feedback by means of the protocol and research overviews served to render more visible, to myself and my collaborators, normative issues directly related to the research at hand. Observation and feedback predominantly focused on research goals (knockout or overexpression of protein production pathways followed by phenotypic characterization) and molecular biological techniques (plasmid insertion, the polymerase chain reaction [PCR], separation gels, high performance liquid chromatography [HPLC], and so forth). Still, reconstructing ‘technical’ decisions by way of the protocol quite naturally brought out ‘microethics’—normative issues concerning “individuals and internal relations of the engineering profession” (Herkert 2005, p. 373). Unpacking a decision not to repeat a gel run, for instance, could bring out financial and time considerations, but also more overtly normative issues such as concerns about the expectations of a supervisor or the epistemic norms of the research community (verifiability, impartiality, scrupulousness). Asking why research participants took protective measures against harmful effects of carcinogens brought out personal health and safety and environmental considerations, but could also invite a research participant to comment on how colleagues ought to behave, or lead into a discussion about the appropriateness of safety regulations.

In addition to the kinds of microethical discussions—lab practices, responsible conduct of research and environmental health and safety concerns—emanating directly from the laboratory work, the feedback processes also occasioned discussion of macroethical issues, normative issues that apply “to the collective social responsibility of the profession and to societal decisions about technology” (Herkert 2005, p. 373). Enquiring into the impact of a confidentiality agreement on the freedom to publish research results could lead us to examine intellectual property, confidentiality and the influence of private investors on research. A question on the relationship between expectations raised in a research proposal and the actual work done could serve to explore the role of promises and expectations in research, science-policy interfaces and hype-disillusionment cycles in research. Ultimately, repeated questions like “How do you know that the results you have just obtained are actually a result of your transformations?” led to discussions on philosophical topics like reductionism and the problem of the underdetermination of scientific data.

Table 2 categorizes the range of topics discussed and provides indicative questions that initiated such discussions, showing how implicit value judgments were rendered explicit by asking ‘broader’ questions. Not all of these topics were addressed in each of the interactions, given that their discussion was dependent on the nature and stage of the research projects as well as the particular experiments performed at the time of study.

These findings suggest that researchers frequently deal with normative and social issues but without necessarily labeling them as such, as the notion of de facto modulation (Fisher and Mahajan 2006) posits. Researchers are not accustomed to viewing their decisions from a normative perspective or discussing the normative aspects of decisions explicitly. Such broader issues were brought into focus by routinely asking different kinds of questions than those usually encountered in the midst of laboratory research: questions about the normative dimensions of lab practices, about researchers’ personal moral concerns, about the possible longer term ethical, legal and social implications of research, and so forth (see also Table 2). Thus, the methods and techniques of MM can help render ethical and societal dimensions of research more visible to practitioners within the context of the laboratory.

In addition to these kinds of discussions brought about by applying MM methods and techniques, several kinds of learning occurred as a result of the interactions. This speaks to the question of whether research participants perceived critical reflection on the broader socio-ethical context of their work to be relevant.

Reflection ‘Within’ the System

In several ways the iterative observation and feedback processes occasioned instances of first-order reflective learning, i.e., learning related to technological improvement and the improved achievement of the researcher’s own interests. The regular occurrence of ‘efficiency’ discussions, probing for possibly overlooked considerations or alternatives of a technical nature, on several occasions led to improvement of the technology or the improved achievement of the research participant’s interests in the situation in which he or she was working. For instance, after observing R1A repeatedly preparing small amounts of stock solution for a gel, I asked whether making a bigger batch could save time. Efficiency discussions were a matter of trial and error: participants appreciated my effort, but had often thought about possible alternatives already. In other cases, my questions suggested new alternatives. Applying the protocol to a particular experiment that R2A was performing, we determined that there was an opportunity to identify a specific chemical compound involved in cell-to-cell communication. R2A was searching for the compound in a bottom-up fashion, by measuring cell reactivity to different candidate compounds. When I proposed a top-down experiment, determining the presence of the compound in a sample where the anticipated cell communication was already occurring, R2A replied:

My supervisor decided to do it this way. Probably the current experiment was easiest …. But that might be the way to go, now that this doesn’t work.

Such efficiency discussions thus served a threefold purpose: they elucidated the details of the experiments; probed whether an outsider’s perspective could occasion new research opportunities; and built trust, enhancing a sense of co-labor. When I asked R1D at some point whether our interactions led him to perceive new research opportunities, he said:

[It happened] just now. Well, I have to look back, I have to think about what I’ve done every now and then, to tell you what I did, so to say. So that forces me to some kind of realization …. At the same time I’ve been working on a presentation for a work meeting. At that moment I also realize that knocking out those genes could well have more consequences than we think …. And then I started reading back, like what is the capacity of that transporter, and then I came across a calculating error …. So, on the one hand, you force me to think, and on the other hand a work meeting forces me to think. So … it comes from both sides so to say.

These examples indicate that regular application of the protocol facilitated first-order learning, although it is difficult to pinpoint precisely what triggers the learning process. R1D found his calculating error as a result of being “forced to some kind of realization.” Perhaps my questions instigated this realization process, or perhaps it emerged from thought processes developing in the researchers’ minds as they explained their work to me. In any case, the collaborative process stimulated mutual learning. There were other instances of this kind of learning, such as when I was discussing one of the draft research overviews with R2A. Looking at the number of research lines he was simultaneously pursuing, he realized how much he had taken on, leading him to the conclusion that he needed to make decisions about which research lines to pursue and which ones to drop:

… it’s a good following of the process …. I think you can pretty much see how the thinking evolves, right? I mean, the first insertion, that was my supervisor’s idea, and then I came up with other stuff, and we get to the point where I’m thinking about stuff that is not even cyanobacteria genes, but something else.

When I enquired later about the relevance of our discussion, he commented that he had never given research planning much thought, but saw the value of it now:

For me that was the most important point, that I see how much I have to do, or have done, or how sometimes stuff gets entangled with other stuff if you never realize that things are related. Then you end up with a contest, and entrepreneurship, and things which you never thought about, and then … It’s also fun to see how you have four lanes, or forks, and then one of them stops, because you’re trying to advance the other one, and try to keep all of them running at the same time.

Apart from efficiency discussions, considerations of a more explicitly normative nature in some cases led to changes in lab practice. For instance, several research participants who wore two plastic gloves to prevent getting acrylamide on their skin would subsequently open a cupboard without first removing one of the gloves. When invited to present my findings to the research group at the final lab meeting I attended, I noted this lack of compliance with environmental health and safety regulations, feeding back my observation. The example sparked a heated debate. Some researchers in the group felt strongly about complying with such regulations, particularly with regard to wearing lab coats, even though no one ever seemed to wear them. A few days later, I received unsolicited news that several lab members had now started wearing lab coats again:

It [often] happened … that when I was handling ethidium bromide gels, some drops reached my clothes, or … unprotected areas of my hands …. Meanwhile [my] lab coat was clean and ironed on my chair …. I was thinking that one day I should … wear mine, even though I’ll raise some eyebrows …. Then came your presentation … and I remembered how I used to take care of my safety and my clothes …. Monday, after the seminar, on my way to the lab, I noticed that [S] [was wearing his] lab coat—he was spraying nitrogen on some concentrated samples and needed to protect his clothes. I said to myself, “[Now is] the moment. If I [do it now there] will be two [of us] wearing … lab coats”. I … [wore] it for the rest of the [time].

Apparently, the presence of an outsider in the lab enabled a change in laboratory practice, as a result of rendering explicit and discussing the latent moral considerations of lab practitioners, particularly the ‘recognition’ (quite literally) of personal safety and well-being as a moral value. As this behavioral change illustrates, laboratory-based, collaborative work that was structured by MM was able to accomplish what regulations up to that point could not. Along with the other examples cited, it also confirms that MM can encourage first-order reflective learning by elucidating and enhancing laboratory decisions, whether aimed at improving the technology (a more efficient experimental setup, less time-consuming procedures) or achieving one’s own interests (better research planning, compliance with existing regulations).

Such reflection ‘within’ the system of course has value, but more encompassing reflection and learning, of the kind called for in the ethics of science and technology, would go beyond issues of compliance and improvement and would enhance the capacity of scientists and engineers to reflect on the broader socio-ethical context of their work and the reasons for the regulations in the first place. It would require ‘broad and deep’ learning (Schot and Rip 1997, p. 257), including second-order reflection on the background theories and value systems of the research context in which researchers operate.

Reflection ‘on’ the System

In addition to microethical considerations, broader social and ethical dimensions of research were also regularly discussed during protocol meetings. One example of second-order reflective learning relates to the moral dimensions of genetic engineering. R1D at one point considered integrating a heterologous gene into the micro-organism with which he was working. He faced a choice between integrating a human gene and a mouse gene, both of which had the required characteristics. Discussing the choice with his supervisors, he invoked a range of technical considerations such as substrate specificity, affinity, capacity, availability of a plasmid and scientific novelty. The question of whether integrating a human gene would be morally acceptable was not discussed. Still, R1D expressed his moral reservations during one of the protocol meetings:

  • R1D: I’m cloning a mouse gene, because … I decided like I’m not going to do a human gene. At least, there was a choice between human and mouse, well, then I’ll go for mouse, that’s a bit … safer.

I subsequently probed R1D for the moral arguments he might have:

  • Me: Why would that matter? A gene is a gene, right? A sequence of base pairs that you can reproduce synthetically.

  • R1D: It’s an image-thing. Practically, pieces of DNA from one organism work better than others, and synthetic genes don’t always work optimally, probably because of interaction with the genome. Where it comes from is important, it’s a bit … ethical. The DNA is still from that person. You put a piece of human in a micro-organism. I would have less difficulty if we would synthesize the DNA based on the sequence of a human fragment of DNA.

R1D’s response included some morally relevant dimensions. Beyond the practical consideration that “pieces of DNA from one organism work better than others, and synthetic genes don’t always work optimally”, he showed awareness of possible issues in relation to public concern by saying that “it’s an image thing”. He also expressed a moral value with respect to the integrity of the human genome: “You put a piece of human in a micro-organism.” His response led us to explore each of these dimensions further. The ‘practical consideration’ prompted discussion about reductionism: if genes are nothing more than strings of nucleotides, then why would synthetic genes not work optimally? In addition to further practical considerations (synthetically produced genes may have overlooked point mutations for example), we considered the background assumptions behind genetic engineering (the assumption that genes express proteins may turn out to be more complicated than expected due to unknown gene–gene interactions in the living system). The potential for public concern led to a discussion on how to address public concerns about genetic modification. From the possible moral values involved in the acceptability of using genomic material of human origin came discussion of deontological and utilitarian views in ethical decision making and the question of normative pluralism. Evaluating the relevance of these discussions at a later stage, R1D commented:

  • R1D: I had given it some thought subconsciously, but I never really gave it careful thought …. Ethics can be very boring, until you reach dangerous territory, and then it becomes fun.

This response suggests that the perceived relevance of ethical issues for researchers increases when discussed in relation to concrete situations and, furthermore, that their discussion in close proximity to the research activities that occasioned them may expand the kinds of considerations that researchers invoke when making morally relevant decisions. These are moments when the embedded ethicist can introduce broader perspectives and invoke theories from other ways of knowing while maintaining a direct bearing on the research at hand. There were numerous occasions for bringing a broader normative perspective to bear on the work done in the laboratory during the lab studies, for example on the regulation of research on genetically modified organisms, intellectual property and the ethics of promising.

Another example of second-order learning occurred when discussing synthetic biology. While regularly ordering synthetic genes from chemical suppliers, research participants did not see their own work as being related to synthetic biology, nor to the ongoing debates on synthetic biology in ethics and the social sciences. Upon learning that R1D had ordered a synthetic gene, I asked:

  • Me: Would you call this synthetic biology?

  • R1D: That depends. What is synthetic biology? Much of what is now called synthetic biology resembles what we do: putting a piece of synthetic DNA in a host. But I think synthetic biology is making all components synthetically …. Really to develop a cell from scratch might take another twenty years.

R1D did not consider normative questions on the desirability of building cells from scratch to be relevant because of the practical complexities involved and the long time span before that vision might become a reality, whereupon I invited him to take a historical perspective. I referred to the progress made in molecular biology in recent decades, and how we probably would not have predicted 20 years ago that ordering a synthetic gene would be a standard procedure by 2010. I then asked him to reflect on recent developments from this broader perspective, in which 20 years is just around the corner.

  • R1D: Then you would need to think about the use, or the goal. If you can build a cell, then you can build other things as well. We shouldn’t go in the direction of synthetic higher organisms. There’s always a risk that others move in the wrong direction. You shouldn’t be using it for other purposes. It’s like a knife: you can use it for good or for bad …. That’s why we should maybe think about these things. Then there has to be extra regulation.

Adopting the longer-term perspective that ethicists and social scientists may take when reflecting on new developments such as synthetic biology, R1D started to think about his research in a markedly different way. By contemplating the long-term impacts of his work, he began to reflect on the broader purpose and potential outcomes of the developments of which his own work was a part, acknowledging the relevance of broader reflection.

A third example of second-order learning concerns the social relevance of research. Questions concerning the future use of research outcomes were regularly discussed in each of the studies. Responses from all eight of the research participants to the two questions on social relevance featured in the pre-interviews shared a similar ambiguity. All participants responded positively to the first question: does society benefit from research?

  • C1A: One of the main goals is that society benefits, from any research. It’s not just a fun thing we’re doing here.

  • R2A: I wouldn’t see what would be the point otherwise. If it would not help the rest, if that’s the reason, then usually … Society should benefit; what would be the point otherwise?

While convinced of the general societal benefits flowing from scientific research, participants had more difficulty in predicting the possible benefits of their own research projects in response to the more concrete follow-up question: does society benefit from your research?

  • C1A: I hope so. It’s not my immediate goal; I haven’t thought much about it. What I’m doing is basic research; this is probably a little bit far away from … What I’m doing is too far away.

  • R1A: Honestly, I don’t see any significant contribution, no. Maybe there is very slightly, slightly, indirectly, related to contributing ideas, maybe there is some technology … But otherwise, the result, for us researchers, we’re excited but for other people, who cares?

Wanting to pursue this perceived discrepancy between the general benefits of research and the specific benefits of individual research projects, I revisited the question of social relevance throughout each of the studies. Research participants responded in a similar fashion: a general picture emerged in which the ultimate benefits of research cannot and should not be accurately predicted. Participants gave several historical examples of knowledge flowing from basic research that only much later turned out to have practical use, such as the light bulb, penicillin or X-radiation, and concluded that unrestrained basic academic research is ultimately more likely to increase the possibility of socially relevant applications than a direct demand for social relevance would be. Increasing calls for social relevance were therefore seen to pose a danger to scientific progress, and ultimately to societal progress, by stifling the innovative power of research:

  • R1D: If you invest more in society-improvement, then the learning curve of science will become less steep. So … in the end it’s less good for science … And in the end maybe also for society … in the long term.

Interestingly, most of the research projects under study relied predominantly on funding from private organizations and were strongly driven by the need to deliver practical applications. When I questioned the amount of freedom involved in privately funded research, research participants readily acknowledged that their freedom is limited because of the expectations of the private investor. They saw this as the inevitable result of decreases in government funding: the only way for a research group to survive is by strengthening links with private industry. But while acknowledging that this shift in funding mechanisms limited their academic freedom, they continued to invoke the principle of unrestrained academic research to argue against calls for social relevance. Their background assumptions and value systems were in tension with recent changes in funding mechanisms.

I subsequently tried to challenge their assumptions by first assuming them: supposing that one cannot predict the societal benefits flowing from research, and that academic research should therefore be unrestrained, how should a private investor determine which types of research to fund, given that funding sources are necessarily limited?

  • Me: The question is: how do you make the decisions whether I should fund genetic modification of cyanobacteria, or whether I should maybe fund your colleagues who do evolutionary growth of cyanobacteria?

  • R2A: That’s why the, well the way that I thought is that politicians are the voice of the people, and those are the ones that automatically decide who gets the money, because they should have, they should know, what people want. So if people want cleaner fuels, then they give money to cleaner fuel. If people wanted better dogs, then they would find someone else. I think it’s driven like that.

To press the question, I would ask how the research participants would decide which research to authorize if they were policy makers. R2A took recourse to a process of democratic decision making:

  • R2A: Right, I guess the policy has to be made, [based] on the average of what people think …. [T]he policy [should not] be made on the thinking of one person only, but on what most people think.

  • Me: But how about if big masses of people, like in Europe, say we don’t want any genetic modification? Would you say, well, that’s the majority vote, I’ll just quit my job and find another?

  • R2A: Probably not like that. But … I tend to be objective on those sorts of issues, so … Someone who can prove to me that that was the best decision, I would follow it. If someone would have a good argument I probably would … not quit my job, but find a different approach. I guess, I don’t know.

Such discussions thus problematized the unquestioned assumption that the demand for societal relevance hampers societal benefit. Research participants recognized that some kind of demarcation criterion was needed to determine which research to fund, only to realize that this would involve measuring the value of knowledge as a function of some kind of external relevance, contradicting their original assumption that the utility of research cannot be predicted.

The MM feedback mechanisms allowed for attending to broader questions as they impinge on the daily work of researchers, and for pointing to possible tensions and ambiguities in research participants’ responses. The value of these ‘second-order’ discussions lies not so much in having motivated directly observable changes in practice as in the fact that participants engaged in critical reflection on the broader socio-ethical context of their work. Participants observed the ambiguity in their initial responses, realized that some criterion of relevance is needed ‘in the real world’ to determine what projects to authorize, and showed interest in reflecting on it in more nuanced ways:

  • R1D: Yeah, you pull … away from the science a little, you put [the science] in a somewhat different perspective, more like … You look at science as a society so to say, where all kinds of things happen.

  • R2D: What I think is useful is that one can indeed think about what kind of societal interest is involved when someone does this kind of research …. I think it’s really interesting that people will start thinking about the use much more.

These findings suggest that participants began to reflect in new ways on the underlying background theories and value systems operative in research. As unquestioned assumptions were challenged, possible future applications of the research were discussed, and different visions of the role of science in society were shared, the socio-ethical context came to life within the research setting, something that participants indicated they had not experienced before, either through their ‘ethics and society’ curriculum or through ethically oriented funding requirements.

Research participants indicated that the ongoing discussions during and alongside the actual conduct of research did not hamper the research process but instead added value to it in several ways. In the words of R1D, ‘stepping into the helicopter’ could serve to guide research planning, identify overlooked opportunities, relate lab research to its broader policy contexts, and uncover latent normative issues. When, during the post-interview, I asked R1D whether he thought the study was useful to him, he replied:

… everybody should perhaps reserve free space in their agendas every now and then, stop all experiments … and think …. Maybe you could … Should one integrate this in each and every PhD project? That someone from outside the faculty comes along, and you need to account for your actions towards that person. And the guy sitting in front of you would only have to ask: why? Why this? Why that? Couldn’t you do that differently? And how does it work?

Discussion

These experiences suggest that the broader socio-ethical dimensions of research were rendered more visible within the research context and that research participants perceived such broader reflection to be relevant. MM served to encourage researchers to address the socio-ethical context of their work through collaboration and in real time. The lab studies aligned with the objective of real-time technology assessment to “provide an explicit mechanism for observing, critiquing, and influencing social values as they become embedded in innovations” (Guston and Sarewitz 2002, p. 94) while adopting the overtly normative standpoint that researchers should engage in critical reflection. Like the MM/STIR pilot study (Fisher 2007), these studies helped bring out latent ethical and societal dimensions of research, rendering explicit considerations that had hitherto remained implicit, at a time when they could influence researchers’ decision-making. Unlike the pilot study, they also aimed to introduce relevant socio-ethical knowledge and perspectives, and to initiate discussion of specific moral questions as they arise in the laboratory context. As Ibo Van de Poel and Peter-Paul Verbeek note:

Synergy between engineering ethics and STS … could result in an empirical and reflexive research, which is empirically informed and critically contextualizes the moral questions it is asking but at the same time does not shy away from the effort to actually answer them. (Van de Poel and Verbeek 2006, p. 234)

The approach I adopted in these studies is not morally agnostic. It invokes the procedural norm that researchers have a moral obligation to critically reflect on their research. Yet a commitment to such ‘deliberative modulation’ does not require the embedded humanist to enter the laboratory with a predetermined set of substantive norms; as the laboratory engagement experiences made abundantly clear, the content of critical reflection can only emerge as a result of situated interactions over time. Such collaborative, situated critical reflection combines different ways of thinking and knowing: those of the laboratory researcher and those of the embedded social researcher (Gorman et al. 2009). It gave socio-ethical issues a sense of urgency, concreteness and relevance for research participants that differs essentially from reading about such issues in a textbook, for example. It also supports early detection of, and warning about, the ethical valence of research outcomes that may otherwise go unnoticed. Additionally, MM can take a more focused (and less speculative) approach towards ethical reflection that could lead to more meaningful interactions between scientists and ethicists (cf. Nordmann and Rip 2009). Note, however, that the sample size of these lab studies cautions against overgeneralizing: the results need to be compared with other findings to confirm or refute these observations.

The perceived value of second-order reflective learning proceeds by way of the perceived value of first-order learning, of improving the achievement of one’s own interests. Over the course of each study, initial reticence on the part of research participants turned into enthusiasm for discussing both the progress and the broader aspects of their research. Given that ‘rethinking’ knowledge production in research systems depends on the willingness of research communities to rethink their own practices, such collaborative approaches could be more effective than external forms of critique.

Of course, this dependency on research participants’ willingness to engage implies certain limitations too. While the ‘voluntaristic’ approach towards collaborative engagement can enhance researchers’ critical reflection, it also builds an asymmetrical relation between the researchers and the embedded scholar. As a guest in the research group, the latter is dependent on the acceptance and endorsement of the hosts, and critical views cannot be allowed to disrupt good relationships. This may not be a problem if the collaboration is seen by research participants to be conducive to first-order learning, but could become a problem when there is strong normative disagreement. In those cases, the embedded ethicist has no ‘jurisdiction’ (Anthony Stavrianakis, personal communication). The need to respect operative conditions and dynamics within the laboratory inevitably limits the range of possible critiques. Furthermore, the collaborations are constrained by their social and institutional environment. Existing, internal responsibilities often take precedence over a researcher’s broader social responsibilities.

That said, MM has been found to enhance the critical reflection of research participants on the socio-ethical context of their work. Such reflection is arguably needed if other social and ethical programs—upstream engagement, technology assessment, codes of conduct, etc.—are to be successful. The reflective learning documented here provides modest indications of Webster’s vision of STS, that is,

helping to set the terms on which science might be accorded a socially warranted status that in important ways is distinct from, critical of and supersedes the conventional (scientistic) sense in which science has been legitimated (Webster 2007, p. 460).

This vision must be tempered by the danger of the STS practitioner becoming an “integral co-productionist element of the very structures of power and culture which might be just what STS should be challenging” (Wynne 2007, p. 494). This is the real challenge for the embedded researcher: becoming part of the convergence of goals, strategies and configurations of the laboratory insofar as it provides access to different registers of justification (Arie Rip, personal communication), while not losing sight of the original intentions behind one’s entrance into the laboratory. Walking the fine line between co-labor and critique may allow different voices to be heard at the heart of the R&D enterprise, tapping potentials for learning and change that could prove significant.

Conclusion

The laboratory engagement studies described here provide an indication of the potential for interdisciplinary collaborations to enhance the critical reflection of scientists and engineers, albeit with a relatively small sample. They demonstrate that broader socio-ethical dimensions can be productively engaged during laboratory research. Midstream modulation was found to engender fruitful and meaningful collaborations between social and natural scientists, encouraging second-order reflective learning while respecting the lived morality of research practitioners. Not only did it help make broader socio-ethical issues more visible in the lab, it also encouraged research participants to critically reflect on these broader issues. Contrary to their initial claims, participants came to acknowledge that broader socio-ethical dimensions permeated their research. Importantly, first-order learning seems to be a prerequisite for the possibility of second-order learning: research participants’ willingness to engage in critical reflection on the broader socio-ethical context of research was seen to be dependent on their perception that the collaboration also improved the achievement of their own (research) interests.

The ongoing observation-based feedback of the midstream modulation framework and STIR protocol allowed the laboratory researchers and embedded ethicist to build collaborative capacities and establish conditions for productive reflection on ethical and social considerations. While what counts as an ethical issue is to some extent a matter of negotiation between the individual collaborators, the procedural norm of reflective learning can guide both practitioners as they deliberatively integrate socio-ethical assessment with ongoing and future research directions.