In the absence of an a priori definition or a universally accepted description of science, how do philosophers imagine it as an institution? Though collective imagination has been more thoroughly studied in the context of other practices, philosophy seems also to rely on these flexible mental-textual idealizations, which provide a suitably bounded object for scholarly inquiry. While the details may differ from scholar to scholar, the imagined institution of science provides a cognitively expedient alternative to working with strict definitions of science, at one extreme, or messy examples from actual scientific practice, at the other. Perhaps the quickest way to see such idealization in action is through the ‘science and values’ debate in philosophy of science. The attempts therein to delineate the proper role of non-epistemic considerations within ideal scientific practice almost immediately invoke a particular imaginary of what science is or should be like, blurring the lines between definition, description, and prescription.
This feature of the debate was already noticed by Elliott (2011), who observes that some philosophers constrain the roles of values in science specifically in order to promote their own normative ideal of science. Other authors, he notes, are more interested in clarifying logical or conceptual relationships between abstract ideas (i.e. science and values) than in prescribing how science should be structured. But as Elliott asserts, these differing purposes are a source of ongoing ambiguity in the debate over the role of values. Here we find some of the confusion noted by Merton: how and why do we formulate an imaginary of science, with more or less connection to the concrete practices for which it stands in? Let’s unpack this further with a specific case: Heather Douglas’s widely-cited work on responsibility in Science, Policy, and the Value-Free Ideal.
Douglas’s idealization of science and its antecedents
Douglas (2009) continues a mid-20th century debate, an unfinished conversation over whether scientists deserve “autonomy” in their pursuit of knowledge. As Douglas describes it, one side emphasizes the ways in which science is and must be free of social or political values. These arguments for autonomy, relying on Michael Polanyi’s “tacit knowledge” and scientific realism, represent several compelling possibilities. But value-freedom, even as an ideal, is not quite so simple. Douglas reminds us of the other side of the debate. Responses from Rudner (1953) and Churchman (1948) reveal that we cannot avoid judging the quality of a particular hypothesis or the risk we take on by accepting it. It is important to notice, here, how even these early proponents of value-ladenness constrain their object of inquiry. Rudner, for example, in “The Scientist Qua Scientist Makes Value Judgments”, bases his argument not on a full account of science but on the somewhat pithy observation that hypothesis testing is part of any “satisfactory” account of the scientific method. As a result, even when considering an example of science at perhaps its most violent and politically entangled – the Manhattan Project – Rudner asks the reader to consider the values inherent in accepting a specific hypothesis, that operating the Chicago Pile would not trigger the combustion of the atmosphere. Ominous indeed, but it is not at all apparent that hypothesis testing captures the most important philosophical features of this moment in science.
By identifying her own contributions as a continuation of such historical arguments between Rudner and his contemporaries, Douglas inherits their limited idealization of science, even as she attempts to expand it in Science, Policy, and the Value-Free Ideal. Scientists, in her account, have “role responsibilities,” unique to the position of a scientist. And they also have “general” moral responsibilities; they are responsible for their behavior in the way all social beings are. She acknowledges that, sometimes, our roles can take priority and remove our general moral liability. Lawyers, she suggests, occupy a role that shifts responsibility onto the legal system, allowing them to disregard some general moral responsibility as they defend a guilty client. So we might think that the role of scientist is like that of a lawyer, such that general moral responsibility for mistakes and oversight is set aside. Douglas thinks not.
The problem, she suggests, is that usually there is no “rigid” institutional structure within science that can take morally significant decisions out of the hands of scientists (2009, p. 73). Unlike the context of law, there is no adversarial system to ensure diverse or antagonistic perspectives are marshaled. There is no judge who at the end of the day has the power to decide whether or not a hypothesis deserves further testing, weighing its probability against the very real costs of a false positive. This is a decision that only the scientist and her immediate colleagues can make. Douglas therefore argues that scientists have an obligation to consider the cost of error in their professional claims; they are not excused from general moral responsibilities. In this way she attacks the inconsistency in the traditional idealization of science as the autonomous pursuit of knowledge and cuts through the “confusion” noticed by Merton in 1938: because scientific practice is sufficiently unlike legal practice, its practitioners have to give up some autonomy.
This negative comparison with the value-free ideal, however, leaves Douglas’s own idealization of science somewhat implicit. Douglas’s more constructive analysis can be found in “The Moral Terrain of Science” (2014), where she lays out a revised vision of science; it is a practice that is situated in and valued by society. She uses this fact to prescribe scientists’ responsibilities beyond internal communal norms. There are, accordingly, three “bases” of responsibility to which scientists are accountable: “The first is to good reasoning practices; the second to the epistemic community of science; and the third to the broader society in which science functions and is valued.” If we want to normatively evaluate a particular research project or field, we can run through each basis. On the first basis: did the research produce “reliable empirical knowledge”? According to Douglas, science exists because society values it as a source of “reliable empirical knowledge,” and is therefore beholden to that epistemic standard (2014, p. 95). On the second basis: did the researchers support and enable their epistemic peers? As members of an interdependent community, scientists cannot act as if they work alone. Finally, on the third basis: Can we say that a given area of research embodies and advances the values attributed to science by broader society? Since science does not occur in a vacuum, research and its reasonably foreseeable effects should not conflict with societal values.
As Douglas points out, this characterization of science (to the extent it is taken up) prevents anyone from problematically bracketing questions of knowledge from questions of morality; scientists can’t excuse themselves from moral responsibility (i.e. basis 3) for the sake of empirical knowledge (i.e. basis 1). A neuroscientist, for example, cannot cite her dedication to good science simply to evade normative evaluation of her contribution to technological development, whether for military lie-detectors or novel treatments for depression. A philosopher of science is likewise encouraged to develop a sensitivity to the whole range of obligations that scientists might have, whether epistemic or ethical or in-between. In this way, Douglas resolves the tension in the traditional value-free idealization of science not by denying it outright but rather by modifying it in favor of social responsibility. In her picture, science is still presumed to be a discrete practice, isolated from broader society by a set of internal epistemic norms, but with a few added (“general”) moral responsibilities.
This strategy seems argumentatively effective against philosopher peers who believe science must be “value-free”, and it pushes philosophers of science to consider jointly epistemic and ethical prescriptions for scientific practice. However, Douglas’s responsibility attributions still draw on a particular imaginary of science – in some sense, a philosopher’s imaginary – rather than that of an actual existing practice. Her account of science seems to continue Kuhn’s legacy of considering some social elements of practice while maintaining older, traditional imaginaries of science as a self-contained epistemic community that operates according to internal norms. This choice is most evident in her chosen definition of science: “an iterative, ampliative process of developing explanations of empirical phenomena, using the explanations to produce predictions or further implications, and testing those predictions. In light of the new evidence from the tests, explanations are refined, altered, or further utilized” (Douglas 2014).
It is not clear whether the scientific community, presumably as much an institution or a loose set of organizations as a process, would be correctly circumscribed by such a definition. If we took the process she describes as the definitive norm for scientists, then the “scientific” community might exclude scientists working in the context of biomedicine as well as researchers pursuing what STS scholars highlight as “technoscience” (the subject of the next section). Such an idealization of science also does not account for clinicians, DARPA, industry, the market, bioethicists, scientist-engineers, and other actors that are not self-evidently inside or outside science thus defined. Their responsibilities to their practices and to society are thus left unexamined.
What’s wrong with imagining science as a bounded epistemic practice?
For a philosopher, the choice of what to include in an idealization often depends on one’s intellectual project and aims. Nonetheless, even within the narrow philosophical debate about values in science, there have been some suggestions that the framing of science therein leads to serious problems. These critical responses are not posed explicitly in terms of imaginaries or idealizations, and so miss the corrective insights such a theoretical perspective could provide. But they do illustrate the negative effects of applying the scholarly idealization (exemplified by Douglas) to cases drawn from real-world knowledge practices. Biddle and Kukla (2017), for example, reject the “insistently discursive picture of epistemic activity in the inductive risk literature—one that in effect reduces much of it to propositional inference” (p. 217). They observe among some philosophers a tendency to focus on individual reasoning, treated as a deliberate balancing of available evidence and values. As a result of this tendency, institutional norms, political ideologies, contextual understandings of evidence, and a whole range of epistemically-relevant phenomena are hidden from further analysis. Looking for an alternative way forward, Biddle and Kukla conclude that we must cultivate a more expansive “geography of epistemic risk” that can capture the full range of phenomena previously identified as inductive and subject them to a more contextually-sensitive form of philosophical inquiry.
To similar effect, Elliott and McKaughan (2014) lament that many philosophers debating values in science simply assume that epistemic values take precedence over non-epistemic values in theory or model assessment. They notice that this assumption contradicts a core feature of scientific practice: the variety of practical domains in which scientists work frequently requires them to weigh epistemic values against non-epistemic values, sometimes in a much more “direct” way than philosophers want. As an example, Elliott and McKaughan present the case of risk assessment in government regulation of toxic chemicals. Here, a primary epistemic challenge is to answer questions like ‘does this substance cause cancer in humans?’, but with an important constraint: each day that goes by without the answer is a delay in preventing harmful exposure. If one intends to minimize the social cost of neglecting suspected carcinogens, then one can reasonably choose an expedited risk assessment method over a maximally rigorous or accurate one.
Some might worry, of course, that including this sort of reasoning in our normative accounts of science would justify more objectionable roles for non-epistemic values in science. And in response, we may choose to discount risk assessment as bad science or as non-science in order to save our idealization. But this response should not be taken lightly. As Elliott and McKaughan point out, science is actually conducted to attain a wide range of goals, and such regulatory and other “hybrid” sciences are arguably (as I will describe in the next section) definitive of science since at least the 1950s. While discounting them as improper science may keep them out of the pages of Philosophy of Science, they will continue to exist in the world. The dominant epistemological modes of climate science, for instance, will continue to guide policy-making, even when their methods and goals are not shared by those most impacted by climate change (Jasanoff 2010). There are thus a range of serious ramifications to doubling down on the philosopher’s imagination of science as an epistemically-bounded practice, of which I will highlight two.
First, an imaginary with little connection to actual practice may have limited utility as a prescriptive or regulative tool. If a philosopher—working on science or any other human practice—uses a non-existent or unattainable idealization to fix the practice’s value and its community structure, the resulting prescriptions may be inapplicable or practically useless. In the case of researcher responsibility, Douglas’s prescribed obligations will have little or no force for technoscientific researchers, who may not recognize the imaginary being presupposed. “Who said we’re all in the business of explaining?” a physicist might ask while working in the context of nanotechnology. So too might a developmental biologist when patenting tissue engineering techniques for their biomedical technology company. As already noted by Biddle and Kukla (2017), the reduction of science to hypotheses and individual psychologies leaves us unable to address the full range and depth of knowledge practices. Societal effects of regulatory science, policy-relevant climate research, and many other philosophically-relevant phenomena will simply go unnoticed, or even be ignored, by professional philosophers. Scholarly sophistication is of little value when the detailed idealizations under negotiation are exclusive to disciplinary philosophy.
Furthermore, as demonstrated by research on “sociotechnical imaginaries”, widely-shared idealizations of science do not just assume how science should be organizationally configured; they also pair that configuration with some desirable future that practitioners are working towards, whether that consists of avoiding climate change or creating healthy self-disciplining citizens. The normative orientation of philosophy would recommend careful attention to such value-laden implications. Even if philosophers of science admit the loftiness of their idealization of science (maybe it is just a distant hope), there remains the task of justifying that desired state of affairs to persons who might not share their vision for science in society. I may, for instance, believe that science includes empirical research conducted for policy purposes and that philosophers should have something to say about the goals driving it (from choice of problems to theory appraisal). Considering alternative visions of science (e.g. technoscience) will help make these two criticisms more concrete.
4. Alternative imaginaries of science available beyond philosophy
We should primarily understand Douglas as responding to a particular debate about value-free science. In that sense, she presents a necessary correction to the imaginary of science as value-free by formulating an opposing imaginary. But if her mapping of the “moral terrain” is to provide a new responsibilist framework for philosophy of science, there are some additional obstacles to consider. In Douglas’s main arguments, she often imagines science as a single identifiable practice, independent from university professorships, federal administration, and biotech start-ups. Recall, for example, that her idealized scientific community is circumscribed by its pursuit of “reliable empirical knowledge.” Douglas also distinguishes between “the value of knowledge” and “social value,” which (without some qualification) neglects her overarching insight that scientists frequently make policy and, in general, influence social order. More recently, she has suggested that the “most important thing to know” about science is its “critical and inductive nature” (Douglas 2017). Unfortunately, these epistemological definitions—reminiscent of basis 1 in “Moral Terrain”—orient Douglas’s account towards science that solves internal epistemic puzzles. This is disappointing given that Douglas clearly acknowledges the extent to which even sociologically-oriented philosophy of science, under the influence of Kuhn, has artificially bounded science (2009, p. 61). On one reading, Kuhn uses The Structure of Scientific Revolutions to hide a vision of autonomous science within socially-rich imagery, to create a “social” picture of science that actually neglects society. Steve Fuller (1992), for instance, sees this as a consequential and anti-democratic misstep within philosophy of science.
Taken to the extreme, this ‘Kuhnian’ imaginary of science presupposes a clearly bounded scientific community, a consistent empirico-methodological core, and a society which surrounds it but is clearly separable.
Do all philosophers of science uncritically promote this view of science as an isolated practice? As illustrated above by the responses to Douglas within philosophy of science, her idealization of science has not been taken up uniformly or without criticism across the scholarly community. And more systematic empirical analysis of our own philosophical habits and texts would be needed to determine whether one or more dominant imaginaries of science underpin work in philosophy of science. Nevertheless, the limitations associated with this particular idealization should trigger a moment of reflection about what inspires or supports our own imaginaries. It is notable that Douglas herself has recently called for greater attention to the “loom” – that is, the institutional structures and norms – that enables and constrains the “tapestry” of science (2018). In the spirit of fulfilling this vision, it is instructive to look beyond disciplinary philosophy and compare philosophers’ implicit visions of science with those in other disciplines.
“Mode 2” science and the technoscientific imaginary
In contrast to much philosophy of science, literature in STS typically portrays the boundedness of science as an achievement, the result of deliberate social and discursive work by practitioners. What is “inside” or “outside” of science, “applied” or “pure”, is not given in advance, but actively contested and negotiated. Gieryn (1983) describes such “boundary work” as a practical problem for scientists, impacting the allocation of funding, respect, and epistemic credibility, among other things. Researchers can, for example, build up a distinction between pure and applied science, such that they are not responsible for failed technological applications or for the unjust biomedicalization of “atypical” bodies and lifestyles. Or, as is more common today, they can undermine the distinction such that science can ally itself with industry and remodel society in the name of “innovation” or “disruption.” Latour (1991) takes the contingency of boundaries even further to suggest that the modern imaginary of Science stands or falls with our faith in a transcendent Nature that can be purified of culture or politics. Scientists then position themselves as the exclusive representatives of objects and natural things, portraying everyone else as operating in the messy world of human interests or passions. We need not, according to Latour, take this dichotomy for granted; through the lens of networks, the spheres of culture and nature are not so cleanly separated. Overall, these critical examinations of science challenge us to ask how an idealization is sustained and to inquire who benefits and who is harmed by a given idealization of science.
More to the point, the specific character of scientific boundary-drawing described in recent years seems to stray from Douglas’s idealization of science, providing an instructive counter-imaginary. Nordmann (2012), for example, stresses that much of contemporary science (i.e. “technoscience”) is oriented towards “knowledge of control” and capacity-building, both of which evade the attention of narrower philosophical discussions of scientific knowledge. Bensaude-Vincent et al. (2011) argue further that scientists can now study and present objects explicitly in terms of their potential value to humans, non-scientists included. The authors suggest that much of today’s most heralded research falls into this category of “technoscience”, despite the fact that it clashes with the more “pure” vision of science proposed by philosophers like Francis Bacon and his intellectual successors. At a higher level of analysis, these phenomena have also been described as part of the emergent “triple helix” of science, industry, and government collaboration (Etzkowitz and Leydesdorff 1998) or the regime of “Mode 2” science (Gibbons et al. 1994). Each of these empirically-based understandings of science shows it to be more porous, less obviously bounded, than philosophers like to think.
Societal consequences of the technoscientific imaginary
Most significant, at least for the relationship between imaginaries and normative understandings of science, is the way in which these descriptions of “Mode 2” science and technoscience are not merely academic idealizations but are promoted and enacted in society. Some technoscientific researchers, for instance, deny their isolation from society by invoking a porous imaginary of science, an unintentional echo of STS. Previous empirical studies have shown that researchers will often describe themselves as cogs in a machine, lacking true control—and thus responsibility—over the direction of their projects (e.g. Swierstra and Jelsma 2006). Others have reported that technoscientific researchers will even set aside personal uncertainty about the social trajectory of science in order to match the values of funding agencies or other knowledge users (Brown and Michael 2003). In these cases, describing science as deeply entangled in society has the same effect as describing it as wholly autonomous; it becomes difficult to hold researchers accountable for the values they espouse or the technologies they enable. Boundary work can thus function just as well by removing distinctions as by building them up.
Set aside for a moment the issue of which imaginary is correct or even most practical. The consequences of adopting a technoscientific imaginary are significant for our reasoning about science. As envisioned within technoscience, the influence of institutional structure, funding priorities, and disciplinary identities can be used to pose a very simple critique of Douglas’s prescriptive claims. An individual cannot be held responsible without some combination of freedom, causal sufficiency, and factual understanding of possible consequences. And technoscientific actors might deny having any of the three conditions! If, for example, funding agencies only further unethical values, then the individual researcher might cite this as a severe constraint on their agency. Similarly, if the disciplinary identity of the neuroscientist is subsumed by pressure to become an excellent entrepreneur, then failing to fulfill “scientific responsibility” may be unavoidable for some. Scrambling for intellectual property might be expected, despite the negative effects on an open community of inquiry. One could thus read the “Mode 2” imaginary as disarming Douglas’s attributions of scientific responsibility.
However, the problem with simply or conveniently presupposing Douglas’s idealization of science is more general and more pervasive than a first-order disagreement about scientists’ agency or lack thereof. Her account of responsibility depends on a certain way of thinking about science and the way that it is ordered. Like much of philosophy of science, this account is partially empirical, relying on the author’s familiarity with real-world practice, and partially a priori, holding some features as fixed or as definitive. Douglas’s idealization, in particular, builds on a long and well-regarded history of philosophical debate about science and can only improve as details from actual sciences are filled in, as the “moral terrain” is given some much-needed topographical detail. But juxtaposing the philosopher’s imaginary of science with the STS literature on technoscience reveals the fundamental idiosyncrasy of philosophical imagination. Our go-to idealizations of science or quick definitions should be more controversial than they are. While there is nothing inherently wrong with being idiosyncratic, philosophy of science should confront its choice of imaginaries.
5. Responsible imagining in philosophy of science and elsewhere
Science is an isolated community of epistemic practice. Science is a pro-active force for societal change and progress. Science is technoscience. How do philosophers of science (among others) deal with these competing statements? As illustrated by Douglas and the debate over values in science, the creative flexibility of imagination has supplanted the analytic foundation of definition. Even when a philosopher provides a nominal definition, the arguments that follow often play fast and loose with idealizations and actualities, switching seamlessly between features that we want science to have and features that it has in spite of us. Though Laudan (1983) has banished words like “unscientific” to the world of politics or Edinburgh SSK, many philosophers rely on an ability to introduce their idealized vision of science into discussions of practical import and real-world problems. Douglas does so in order to start a conversation about responsibility. Helen Longino’s (2002) critical contextual empiricism, to take another example, turns her commitment to liberal society into an account of good science. In this sense, philosophers of science are not so different from everyone else. As described by Jasanoff and Kim (2015), sociotechnical imaginaries help scientists, policy-makers, and publics think about and organize science in society, even as its exact definition or boundary is contested in day-to-day life. The difference, here, is that we philosophers have the opportunity to hold ourselves to a higher standard, reflecting on our choice of idealization and its relationship (if any) to messy human practices.
I like to refer to this reflective habit as practicing an ethics of imagination, a careful consideration of the potential benefits and perils associated with particular acts of imagination in philosophy of science. The task is too great to achieve once and for all in the space of a paper, but here it is instructive to briefly describe some similar debates about idealization in adjacent areas of philosophy. These conversations are not all recent and have been conducted in different terms—the concept of imagination is mostly absent and the STS concept of “sociotechnical imaginary” is non-existent—but worries over the role and impact of ideal theory in philosophy overlap significantly with my own provocation for philosophy of science. In short, these analogous debates suggest that irresponsible imagination brings a pretense of philosophical progress while frequently neglecting the question of who benefits from and who is excluded by the process of idealizing science.
Critiques of ideal theory in philosophy
In his widely cited “‘Ideal Theory’ as Ideology”, Mills (2005) explains the problematic role of the ideal in moral theory and political philosophy. The term “ideal”, for him, is not meant to convey the mere presence of grand normative concepts but rather it refers to a philosophical heuristic that he calls “ideal-as-model.” Such models, he explains, can be based on reality (i.e. “descriptive”) or taken to the extreme (i.e. “idealized”) such that the actual is forgotten altogether. And when the actual is left behind, violence, oppression, and other real world challenges tend to fall away, set aside for someone else to address in the indeterminate future. This form of abstraction is not innocent, he argues, and functions in a variety of ways to exclude the plight of the most marginalized in society. It is for this reason that he labels ideal theory as an ideology, a “distortional complex of ideas, values, norms, and beliefs” that fails to represent the experiences of women, people of color, and the working class.
Held (1989), among others, has directed similar complaints at specific uses of Rawlsian ideal theory to analyze justice. The philosophical insights derived from idealization may be intellectually engaging, but they are so distant from reality that resulting conceptions of the just society may seem impossibly or offensively abstract. In response, Held calls for “applicable” moral theories that hew closer to lived experience and history. Walzer (1983) too has proudly declared that he would rather stand “in the cave” than survey society from far above. There is, he observes, a meaningful sense in which our “social world” is created in the mind (i.e. with imagination) and is specific to our socio-cultural position. In sum, these statements should not be confused with simple calls for better correspondence between moral or political theory and reality. The lesson, I propose, is that imagination can be misused and even cause harm.
Recall the philosopher’s imaginary of science, taken here to be exemplified by Douglas’s internalist description of science: “an iterative, ampliative process of developing explanations of empirical phenomena.” In this case, the maintenance of an epistemically-bounded practice fulfills our modern, culturally-specific aspirations for a purified nature, in which citizens can watch carefully crafted technical performances and ascertain the truth for themselves. As a result of this imaginary, the scientist earns a special role in democracy, providing ostensibly apolitical facts to feed into the processes of governance and policy-making (Ezrahi 1990); so-called interest groups, including patient activists, may be excluded as biased. “Scientific” responsibility, in this picture, can be stringent within the community, but it neglects the many grey areas or “trading zones” (Galison 2010) where theorists and technicians interact. Violence committed on behalf of scientific progress is not mentioned. And the lack of diversity among the practitioners of science is bracketed as a temporary deviation as opposed to a constitutive feature. These choices in formulating an image of science are neither value-neutral nor inevitable.
More fundamentally, this outcome of a particular idealization also reveals how the relatively simplistic ontology of the philosopher’s imaginary fails to capture the countless ways in which society and its contingent values serve as a condition of possibility for science. Post-Kuhnian analyses of science in STS, disability studies, and feminist critiques each highlight what is missing. We learn that “good” scientific reasoning is situated in a constellation of widely-shared implicit normative commitments, which may be racist, sexist, or neoliberal (among other things). Public discourse, for instance, shapes what types of bodies or minds are deemed broken and what projects are deemed fundable or ethical. One need only look at the beginning or end of papers in Science or Nature, where the authors echo and reinforce these commitments (e.g. ‘x number of people suffer from condition y’, ‘male brains are more ... than female brains’). These interactions between society and science are as significant as they are ubiquitous and, as such, we should not exclude them from our reasoning without weighing the consequences and alternatives.
Some philosophers will reject these criticisms as attacking a non-existent position or method. Yet, Mills’s (2005) point, at least in part, is that irresponsible abstraction is grounded in the unrepresentative lived experience of the philosopher. The problem is personal, not just logical or methodological. Mills notes that the social position of the theorist—likely a man, upper class and white—influences what is maintained and what is ignored when reasoning shifts from the actual to the idealized. Williams (1985) observes a similar dynamic; he claims that despite the philosopher’s pretense to see the world sub specie aeternitatis, with the associated aura of timeless objectivity, we cannot separate the philosopher as theorist from the philosopher who has a certain life and character. This is likely true for any cognitive process of philosophical imagination, whether the idealization in question is an egalitarian society or a global response to climate change. Yet, the effects of subjectivity could be especially pernicious in philosophy of science, where there is a large physical and conceptual distance between the philosophy department and the day-to-day exercise of technoscientific power.
Much of present-day graduate training in philosophy does not recommend starting “in the cave” or in the field, as Walzer advises. It does not train aspiring scholars to spend time in labs or tune into congressional hearings on science policy. Likewise, graduate curricula rarely provide rigorous methodological preparation to interview individuals who have, for example, experienced doubt about a new form of prenatal genetic screening or individuals who have been studied against their will. So it is not surprising that a recent JSTOR search of Philosophy of Science yields only 12 hits for ‘stigma’, 25 for ‘disability’, 83 for ‘federal’, 81 for ‘patent’, 175 for ‘industry’, 244 for ‘women’, and 37 for ‘racism,’ over the course of the journal’s 88-year history (Footnote 10). We could, of course, interpret these absences as nothing more than a case of differing priorities. But that is to miss the point; there is just not much in departmental life that could upset the philosopher’s implicit imaginaries or contradict a 20th century imaginary of value-free science.
At the same time, the isolated character of the philosopher’s intellectual life shines through when we write about science. Many philosophers of science can recite the general idea of The Structure of Scientific Revolutions from memory, whether admiringly or not. Arguments and evidence, texts and disciplines, these things are present in our reasoning but so much else can be left out, from the 1980 Bayh–Dole Act to the regime of technoscientific promises (Joly 2010). Those are more likely to be found in the self-understandings of synthetic biologists or in the STS literature than in the work of philosophers of science. As Dewey puts it, “[The] ideals that guide us are generated through imagination. But they are not made out of imaginary stuff. They are made of the hard stuff of the world of physical and social experience” (1934/2008 9:33). Unfortunately, the philosophical form of life may be well-suited to avoiding some foundational questions: what would it take to explicitly justify our chosen imaginaries of science? And who has been excluded from our imagined social world? To avoid both questions amounts to a dual failure to consider injustices in society and to fulfill our own disciplinary virtue: self-reflection.
Some philosophers of science have already resisted this trend in the context of narrower discussions. Bätge et al. (2013), for instance, employ Mills’s (2005) worries about ideal theory as a way to critique Philip Kitcher’s chosen imaginary of well-ordered science. In my own work, I have advocated for greater attention to the normative content of imaginaries in the study of scientific “repertoires” (Sample 2017). Others, meanwhile, have put forward the more programmatic lessons that the field needs. Wylie (2012) connects “‘Ideal Theory’ as Ideology” to ongoing conversations about feminist approaches to philosophy of science, specifically those relying on standpoint theory and the view from the margins. Harding (2004), too, argues that philosophers of science must attend to the “social locations and political struggles” that are implicated in knowledge practices. Standpoint theory can help achieve this, she explains, by extending our inquiry to the “context of discovery”, by considering the role of group consciousness in knowledge production, and by acknowledging that philosophy of science has a “political unconscious.” Across these rejections of ideal theory, we are asked to reflect on the role of imagination in both doing science (as a practitioner) and analyzing it (as a philosopher). Nevertheless, this does not entail that one must be an epistemologist or standpoint theorist to appreciate the importance of responsible imagination. As described earlier, adjacent work in STS depicts a wealth of technoscientific and “sociotechnical” imaginaries—oppressive or emancipatory, outdated or emerging—with which we can compare our own idealizations, teaching us how we might imagine otherwise.
6. Conclusion: institutionalizing ethics of imagination through context
It is a strength of our field that we can use idealization and imagination in a similar way to publics, practitioners, and policy-makers; that is, we craft visions of science that are not limited by rigid adherence to empirical detail or by the immediate practical needs of institutions. For this reason, I am not recommending that all philosophers of science become sociologists of knowledge or that they substitute all idealization with description. But the process of imagining Science, whether as good or as bad, is tied to the context of imagination. As mentioned earlier, this fact creates a few challenges. Practically, philosophers’ idiosyncratic insights and prescriptions might go unheeded by non-philosophers. Our old-fashioned examples, thought experiments, and writing might fail to resonate with the worldly scientist of today, who may be as concerned with collecting patents as with the context of justification. Worse, philosophical reasoning on science may ignore and perhaps reinforce forms of injustice and suffering in the process of idealization.
What is the solution? Changing the context of philosophical reasoning could take many forms. Creating a diverse community of philosophers who can draw on complementary lived experiences is an obvious start. Many philosophers have noted that there is much work to be done in this respect (Footnote 11). More STS-minded philosophers might also stress the importance of empirical observation or closeness to experience; with the help of empirical methods in the imaginaries literature, they might start with what exists outside the office. Such a re-orientation towards science-in-the-world is an excellent place to start, and some philosophers have a substantial head start in this respect. However, even devoted armchair philosophers could benefit from spelling out exactly which (whose) desirable futures they are taking for granted when doing analytic work.
Regardless, an ethics of imagination does not dictate that we discuss only one, good imaginary or only the “accurate” imaginary of science. Accuracy is a poor fit for something that is at least partially ideal and almost completely perspectival. Instead, I propose an ethics of imagination in which we take up or reject these imaginaries with care and deliberation. As in Gieryn’s sociological work on boundaries, we should ask who wins and who loses within our favorite imaginary. We should ask how responsibility might be distributed, who gains epistemic authority, and how democracy is re-configured. There are surely many other questions to be asked, but most of all we should imagine science or technoscience with a keen awareness of how it could affect fellow members of society, especially those with the least power. As so many scholars have shown, collective imagination matters, structuring identities, practices, and society at large. To the extent that philosophers of science are part of this process, we must take care that our contributions are as responsible as they are creative.