If we cannot define science using only analysis or description, then we must rely on imagination to provide us with suitable objects of philosophical inquiry. This process ties our intellectual findings to the particular ways in which we philosophers think about scientific practice and carve out a cognitive space between real-world practice and conceptual abstraction. As an example, I consider Heather Douglas’s work on the responsibilities of scientists and document her implicit ideal of science, defined primarily as an epistemic practice. I then contrast her idealization of science with an alternative: “technoscience,” a heuristic concept used to describe nanotechnology, synthetic biology, and similar “Mode 2” forms of research. This comparison reveals that one’s preferred imaginary of science, even when inspired by real practices, has significant implications for the distribution of responsibility. Douglas’s account attributes moral obligations to scientists, while the imaginaries associated with “technoscience” and “Mode 2 science” spread responsibility across the network of practice. This dynamic between mind and social order, I argue, demands an ethics of imagination in which philosophers of science hold themselves accountable for their imaginaries. Extending analogous challenges from feminist philosophy and Mills’s “‘Ideal Theory’ as Ideology,” I conclude that we ought to reflect on the idiosyncrasy of the philosophical imagination and consider how our idealizations of science, if widely held, would affect our communities and broader society.
Despite the “demise of the demarcation problem” (Laudan 1983), philosophers still talk about “Science” or a science or “the sciences.” Sometimes even without a clear definition, we charge full force into discussions of “well-ordered” science or the value-free ideal. Though such untethered thinking may seem like a quintessentially philosophical privilege, the capacity to imagine and re-imagine science in society is itself significant, integral to democratic life in modern nation states. Philosophers of science have not made much of this phenomenon, but it was once within the purview of the field. In 1938, Philosophy of Science published a talk by R. K. Merton, “Science and the Social Order,” in which he reflects on the shared ideals that situate the institution of science within a democracy. In his characteristically sweeping manner, he observes that scientists subscribe to an “ethos” for their practice, which requires them to focus on the immediate furtherance of knowledge and to disregard how that knowledge travels out into other “spheres of value and interest.” The individual scientist justifies this epistemic myopia by believing that science is nonetheless a force for good in society, a rationale which Merton flags as a non-factual matter of faith. Though science is supposed to be a solely truth-oriented practice, it is apparently grounded in a commitment to social good. Overall, Merton calls this vision of scientific work a “confusion,” a self-contradicting ethos that both reinforces the societal importance of an autonomous science and renders it incapable of addressing controversial social effects.
Merton’s observations highlight the practical importance of ascribing abstract meanings to science, even beyond the context of academic philosophy of science. The collective ethos that he describes reserves a place for science in society, but at the cost of opening up scientists to responsibility attributions from their critics. The virtue of disinterestedness sometimes looks like negligence in the eyes of broader society, with consequential effects. Members of the public may voice dissatisfaction. Funding administrators might look elsewhere. And scientists might find themselves strategically reducing their autonomy in some ways, taking responsibility for discovering only socially significant truths. I will leave it up to the reader to connect these possibilities to current debates about science, but the core point remains. Even though an ethos is “in the head” of one or more people, a careful sociologist or empirically-minded philosopher will see that cognitive visions for science are deployed, resisted, and modified in practice. As Merton shows so well, any given understanding of science in society is “ideal,” loaded with propositional attitudes, values, and hopes, but it is also often “real,” providing meaning and guidance for the action of individuals and groups.
Now, in 2022, I suggest that the interplay of scientific ideals and mundane practices is as consequential as ever in the structure of society, tying together activities as diverse as presidents’ claims of scientific leadership in global crises, contested performances of public health expertise, and novel collaborations between engineers, investors, and scientists. Across such instances, the place of science in society is not static but always being simultaneously re-imagined and re-enacted. Nevertheless, this dual character of science now receives only indirect attention in the recent philosophical literature on science, disconnected from adjacent sociological literatures that could explain it. I invoke Merton (1938) as a reminder that this sort of inquiry can be illuminating even within the narrow intellectual concerns of academic philosophy of science. The acts of collective imagination that motivate a practice can be taken into account and enrich our discussions of more well-worn philosophical topics, including objectivity, realism, explanation, and responsibility, to name just a few.
Accordingly, I propose that we must renew and refine our attention to collective imagination in philosophy of science. This methodological correction, I will show, achieves two things. First, it reveals for the philosophical observer of scientific practice a key mechanism by which science becomes situated in society, eschewing the easy Kuhnian assumption that science is a social but self-contained epistemic practice. More specifically, attending to the role of collective imagination helps philosophers of science to better understand how scientific reasoning and experimental techniques are made simultaneously intelligible and possible through particular institutional arrangements, values, and visions of desirable futures. Philosophers can then engage with these normatively-laden features in a given context, inquiring into their origins and asking if that normative content is genuinely worthy of our assent. Philosophy of science thus becomes continuous with research in applied ethics, political philosophy, and science and technology studies (STS).
Second, but just as importantly, the framework of imaginaries can also be turned back onto philosophical practice itself as a form of self-reflection. By identifying the content of philosophy’s own internal imaginaries of science, whether idealizations or heuristic definitions, we can compare our discourse with the sociotechnical imaginaries that actually organize scientific practices in society. This critical comparison does more than simply reveal inaccuracies or limitations in our own disciplinary habits. Realizing the dependence of philosophy of science on the philosopher’s imagination reinforces the profound and urgent need to create an inclusive, equitable, and representative community within our discipline.
To support these sweeping promises of utility and self-correction, the present paper is organized into three parts. I begin with the high-level insight that institutions and imagination are closely linked in society, as documented in foundational and more recent social theory on imagination and institutions (Sect. 2). Working from this assumption – imagination matters – I then move to analyze the imaginary of science implicit in two case studies (Sects. 3 and 4), one philosophical and one from STS, and demonstrate their implications for the distribution of responsibility in society. The former is Heather Douglas’s responsibilist alternative to the value-free ideal. I argue that her framework, as an exemplar of philosophical reasoning about science, is still beholden to more traditional Kuhnian idealizations of science as an isolated epistemic practice, which negatively affects the applicability of her prescriptions. As a contrast case, I present a counter-imaginary from emerging technosciences, like synthetic biology or nanotechnology, in which the philosopher’s imaginary is subverted; science is understood as entangled with society, with industry, government, and new technological applications. Comparing these cases, I conclude (Sect. 5), exhibits the idiosyncrasy of philosophical imaginaries of science and pushes us to make collective imagination a more explicit object of inquiry and self-reflection. Our work on science will be better when we actively question the implicit sociotechnical arrangements and desirable futures that underpin philosophical thinking.
2 Theories of institutions and collective imagination in science
My entry point for this analysis is no one philosophical debate about science (e.g. can science be value-free?) but rather a more fundamental insight from social theory. That is, for any institution or organized human practice, imagination is essential. Science, and the corresponding mental representations of science, is no exception. For this reason, it is appropriate to begin with a brief review of relevant scholarship in canonical social theory, which will clarify my understanding of imagination and provide the primary dimension of comparison in the next section.
Already, I have invoked a range of related terms – ideal, ethos, idealization, collective imagination, imaginary, and “sociotechnical imaginary.” What do they all mean and how do they relate to one another? Although we lack a comprehensive taxonomy for such terms, together they gesture at an important foundation for our societal institutions: the capacity to imagine an ideal institution and ascribe meaning to its instantiations in daily life. In sociology, symbolic interactionists have described this cognitive component of institutions as a powerful abstraction, but notably, not so powerful as to fully determine social life: “A complex institution may then be described in at least three major ways: as a series of ongoing everyday activities, as a reified abstraction, and as an official organisational chart with specific offices, tasks, processes of induction and expulsion, and communication paths. [...] These forms represent interlocking realities which act on and against each other” (Rock 1979). In How Institutions Think, anthropologist Mary Douglas extends this argument further to make a causal point (1986). Mere patterns of human behavior become institutions when they align with widely shared understandings of morality or the natural world: “for a convention to turn into a legitimate social institution, it needs a parallel cognitive convention to sustain it” (Douglas, 1986, p. 46).
If these accounts are correct, then the cognitive conventions underpinning social institutions should be of great interest to philosophers, because of the implicit or explicit normative content encoded therein. As Smith (1987) has pointed out, the interpretive categories and forms of sense-making behind day-to-day practices are often ideological, preventing disempowered women and other community members from resisting unjust forms of governance or economic systems.[Footnote 1] This poses a challenge for philosophers, and not merely sociologists or a few applied ethicists, to grapple with the political or ethical content in the institutional ideals of modern society. And while this challenge is just as pressing for the study of science as for any other institution, it has not been widely taken up within English-language philosophy of science, leaving Merton’s 1930s empirical inquiry into ethos as a somewhat anomalous artifact in Philosophy of Science.
Meanwhile, research in STS has made collective imagination a core methodological and conceptual focus. Authors from a variety of disciplines have used “technoscientific imaginaries” to explain how scientists and engineers understand and coordinate their work with reference to a desired future or achievement (Marcus 1995). More recently, Jasanoff and Kim (2015) direct our attention to the higher-order societal effects of “sociotechnical imaginaries,” defined as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology.” By referencing a range of empirically-grounded case studies, the authors effectively demonstrate how the sharing and uptake of these imaginaries provide a normative backdrop or blueprint, so to speak, for the otherwise incomprehensible and deceptively mechanistic action of science in society.
A full accounting of theories on institutions, imagination, and the appropriate methods for their documentation is beyond the scope of this paper, but some important contributions include theories of social imaginaries, like the aforementioned Jasanoff and Kim (2015), Taylor (2004), and Castoriadis (1998). In addition, discussions of the individual imagination (especially in philosophy), including Mills (2005), Williams (1985), and Dewey, are also relevant and are discussed briefly in Sect. 5 below. Thinking along these lines can, however, be informative even without delving too far into existing theory on the topic of imagination. For the purposes of my present argument, it is useful to simplify the preceding theory down to a few key observations.
Most importantly, imagination is not merely the individual capacity to generate mental content creatively. It is not just fantasy, visualization, or daydreaming. To the contrary, imagination also refers to an empirically observable system at the heart of social order, shaping the distribution of resources, the self-understanding of individuals, and the organization of practices. We can break down its action into several related components: (1) collective imagination, a capacity of individuals to collectively and creatively ascribe abstract meaning(s) to social institutions, (2) imaginaries, particular reified abstractions of an institution that may or may not be shared, and (3) idealization,[Footnote 2] an imaginary of an institution that departs from reality for the purposes of cognitive convenience or the communication of an ideal. Beyond simply stressing the philosophical significance of these terms, I will now use them to juxtapose the idealizations used by philosophy of science with a “technoscientific” alternative. Through comparison, the idiosyncrasy of each vision of science becomes apparent.
3 Case study from philosophical reasoning – rejecting the value-free ideal?
Lacking an a priori definition or a universally accepted description, how do philosophers imagine the institution of science? Though collective imagination has been more thoroughly studied in the context of other practices, philosophy seems to also rely on these flexible mental-textual idealizations, which provide a suitably bounded object for scholarly inquiry. While the details may differ from scholar to scholar, the imagined institution of science provides a cognitively-expedient alternative to working with strict definitions of science, at one extreme, or messy examples from an actual scientific practice, at the other. Perhaps the quickest way to see such idealization in action is through the ‘science and values’ debate in philosophy of science. The attempts therein to delineate the proper role of non-epistemic considerations within ideal scientific practice almost immediately invoke a particular imaginary of what science is or should be like, blurring the lines between definition, description, and prescription.
This feature of the debate seems to have already been noticed by Elliott (2011), who observes that some philosophers constrain the roles of values in science specifically in order to promote their own normative ideal of science. Other authors, he notes, are more interested in clarifying logical or conceptual relationships between abstract ideas (i.e. science and values) than in prescribing how science should be structured. But as Elliott asserts, these differing purposes are a source of ongoing ambiguity for the debate over the role of values. Here, it seems, is some of the confusion noted by Merton: how and why do we formulate an imaginary of science, with more or less connection to the concrete practices for which it stands in? Let’s unpack this further with a specific case: Heather Douglas’s widely-cited work on responsibility in Science, Policy, and the Value-Free Ideal.
3.1 Douglas’s idealization of science and its antecedents
Douglas (2009) continues a mid-20th century debate, an unfinished conversation over whether scientists deserve “autonomy” in their pursuit of knowledge. As Douglas describes it, one side emphasizes the ways in which science is and must be free of social or political values. These arguments for autonomy, relying on Michael Polanyi’s “tacit knowledge” and scientific realism, represent several compelling possibilities. But value-freedom, even as an ideal, is not quite so simple. Douglas reminds us of the other side of the debate. Responses from Rudner (1953) and Churchman (1948) reveal that we cannot avoid judging the quality of a particular hypothesis or the risk we take on by accepting it. It is important to notice, here, how even these early proponents of value-ladenness constrain their object of inquiry. In “The Scientist Qua Scientist Makes Value Judgments,” for example, Rudner bases his argument not on a full account of science but on the somewhat pithy observation that hypothesis testing is part of any “satisfactory” account of the scientific method. As a result, even when considering an example of science at perhaps its most violent and politically-entangled – the Manhattan Project – Rudner asks the reader to consider the values inherent in accepting a specific hypothesis, that operating the Chicago Pile would not trigger the combustion of the atmosphere. Ominous indeed, but it is not at all apparent that hypothesis testing captures the most important philosophical features of this moment in science.
By identifying her own contributions as a continuation of such historical arguments between Rudner and his contemporaries, Douglas inherits their limited idealization of science, even as she attempts to expand it in Science, Policy, and the Value-Free Ideal. Scientists, in her account, have “role responsibilities,” unique to the position of a scientist. And they also have “general” moral responsibilities; they are responsible for their behavior in the way all social beings are. She acknowledges that, sometimes, our roles can take priority and remove our general moral liability. Lawyers, she suggests, have a role that shifts responsibility onto the legal system and can disregard some general moral responsibility as they defend a guilty client. So we might think that the role of scientist is like that of a lawyer, such that general moral responsibility for mistakes and oversight is set aside. Douglas thinks not.
The problem, she suggests, is that usually there is no “rigid” institutional structure within science that can take morally significant decisions out of the hands of scientists (2009, p. 73). Unlike the context of law, there is no adversarial system to ensure diverse or antagonistic perspectives are marshaled. There is no judge who, at the end of the day, has the power to decide whether or not a hypothesis deserves further testing, weighing its probability against the very real costs of a false positive. This is a decision that only the scientist and her immediate colleagues can make. Douglas thus argues that scientists have an obligation to consider the cost of error in their professional claims; they are not excused from general moral responsibilities. In this way, she attacks the inconsistency in the traditional idealization of science as the autonomous pursuit of knowledge and cuts through the “confusion” noticed by Merton in 1938; because scientific practice is sufficiently unlike legal practice, its practitioners have to give up some autonomy.
This negative comparison with the value-free ideal, however, leaves Douglas’s own idealization of science somewhat implicit. Douglas’s more constructive analysis can be found in “The Moral Terrain of Science” (2014), where she lays out a revised vision of science; it is a practice that is situated in and valued by society. She uses this fact to prescribe scientists’ responsibilities beyond internal communal norms. There are, accordingly, three “bases” of responsibility to which scientists are accountable: “The first is to good reasoning practices; the second to the epistemic community of science; and the third to the broader society in which science functions and is valued.” If we want to normatively evaluate a particular research project or field, we can run through each basis. On the first basis: did the research produce “reliable empirical knowledge”? According to Douglas, science exists because society values it as a source of “reliable empirical knowledge,” and is therefore beholden to that epistemic standard (2014, p. 95). On the second basis: did the researchers support and enable their epistemic peers? As members of an interdependent community, scientists cannot act as if they work alone. Finally, on the third basis: Can we say that a given area of research embodies and advances the values attributed to science by broader society? Since science does not occur in a vacuum, research and its reasonably foreseeable effects should not conflict with societal values.
As Douglas points out, this characterization of science (to the extent it is taken up) prevents anyone from problematically bracketing questions of knowledge from questions of morality; scientists can’t excuse themselves from moral responsibility (i.e. basis 3) for the sake of empirical knowledge (i.e. basis 1). A neuroscientist, for example, cannot cite her dedication to good science simply to evade normative evaluation of her contribution to technological development, whether for military lie-detectors or novel treatments for depression. A philosopher of science is likewise encouraged to develop a sensitivity to the whole range of obligations that scientists might have, whether epistemic or ethical or in-between. In this way, Douglas resolves the tension in the traditional value-free idealization of science not by denying it outright but rather modifying it in favor of social responsibility. In her picture, science is still presumed to be a discrete practice, isolated from broader society by a set of internal epistemic norms, but with a few added (“general”) moral responsibilities.
This strategy seems argumentatively effective against philosopher peers who believe science must be “value-free”, and it pushes philosophers of science to consider jointly epistemic and ethical prescriptions for scientific practice. However, Douglas’s responsibility attributions still draw on a particular imaginary of science – in some sense, a philosopher’s imaginary – rather than that of an actual existing practice. Her account of science seems to continue Kuhn’s legacy of considering some social elements of practice while maintaining older, traditional imaginaries of science as a self-contained epistemic community that operates according to internal norms. This choice is most evident in her chosen definition of science: “an iterative, ampliative process of developing explanations of empirical phenomena, using the explanations to produce predictions or further implications, and testing those predictions. In light of the new evidence from the tests, explanations are refined, altered, or further utilized” (Douglas 2014).
It is not clear if the scientific community, presumably as much an institution or a loose set of organizations as a process, would be correctly circumscribed by such a definition; if we took the process she describes as the definitive norm for scientists, then the “scientific” community might exclude scientists working in the context of biomedicine as well as researchers pursuing what STS scholars highlight as “technoscience” (the subject of the next section). Such an idealization of science also does not account for clinicians, DARPA, industry, the Market, bioethicists, scientist-engineers, and other things that are not self-evidently inside or outside science thus defined. Their responsibilities to their practices and to society are thus left unexamined.
3.1.1 What’s wrong with imagining science as a bounded epistemic practice?
For a philosopher, the choice of what to include in an idealization often depends on one’s intellectual project and aims. Nonetheless, even within the narrow philosophical debate about values in science, there have been some suggestions that the framing of science therein leads to serious problems. These critical responses are not posed explicitly in terms of imaginaries or idealizations, and so miss the corrective insights such a theoretical perspective could provide. But they do illustrate the negative effects of applying the scholarly idealization (exemplified by Douglas) to cases drawn from real-world knowledge practices. Biddle and Kukla (2017), for example, reject the “insistently discursive picture of epistemic activity in the inductive risk literature—one that in effect reduces much of it to propositional inference” (p. 217). They observe among some philosophers a tendency to focus on individual reasoning, treated as a deliberate balancing of available evidence and values. As a result, institutional norms, political ideologies, contextual understandings of evidence, and a whole range of epistemically-relevant phenomena are hidden from further analysis. Looking for an alternative way forward, Biddle and Kukla conclude that we must cultivate a more expansive “geography of epistemic risk” that can capture the full range of phenomena previously identified under the heading of inductive risk and subject them to a more contextually-sensitive form of philosophical inquiry.
To similar effect, Elliott and McKaughan (2014) lament that many philosophers debating values in science simply assume that epistemic values take precedence over non-epistemic values in theory or model assessment. They notice that this assumption contradicts a core feature of scientific practice: the variety of practical domains in which scientists work frequently requires scientists to weigh epistemic values against non-epistemic values, sometimes in a much more “direct” way than philosophers want. As an example, Elliott and McKaughan present the case of risk assessment in government regulation of toxic chemicals. Here, a primary epistemic challenge is to answer questions like ‘does this substance cause cancer in humans?’, but with an important constraint: each day that goes by without the answer is a delay in preventing harmful exposure. If one intends to minimize the social cost of neglecting suspected carcinogens, then one can reasonably choose an expedited risk assessment method over a maximally rigorous or accurate one.
Some might worry, of course, that including this sort of reasoning in our normative accounts of science would justify more objectionable roles for non-epistemic values in science. And in response, we may choose to discount risk assessment as bad science or as non-science in order to save our idealization. But this response should not be taken lightly. As Elliott and McKaughan point out, science is actually conducted to attain a wide range of goals, and such regulatory and other “hybrid” sciences are arguably (as I will describe in the next section) definitive of science since at least the 1950s. While discounting them as improper science may keep them out of the pages of Philosophy of Science, they will continue to exist in the world. The dominant epistemological modes of climate science, for instance, will continue to guide policy-making, even when their methods and goals are not shared by those most impacted by climate change (Jasanoff 2010). There are thus a range of serious ramifications to doubling down on the philosopher’s imagination of science as an epistemically-bounded practice, of which I will highlight two.
First, an imaginary with little connection to actual practice may inhibit its utility as a prescriptive or regulative tool. If a philosopher—working on science or any other human practice—uses a non-existent or unattainable idealization to set its value and its community structure, the resulting prescriptions may be inapplicable or practically useless. In the case of researcher responsibility, Douglas’s prescribed obligations will have little or no force for technoscientific researchers, who may not recognize the imaginary being presupposed. “Who said we’re all in the business of explaining?” a physicist might ask while working in the context of nanotechnology. So too might a developmental biologist when patenting tissue engineering techniques for their biomedical technology company. As already noted by Biddle and Kukla (2017), the reduction of science to hypotheses and individual psychologies leaves us unable to address the full range and depth of knowledge practices. Societal effects of regulatory science, policy-relevant climate research, and many other philosophically-relevant phenomena will simply go unnoticed or even ignored by professional philosophers. Scholarly sophistication is of little value when the detailed idealizations under negotiation are exclusive to disciplinary philosophy.
Furthermore, as demonstrated by research on “sociotechnical imaginaries”, widely-shared idealizations of science do not just assume how science should be organizationally configured; they also pair that configuration with some desirable future that practitioners are working towards, whether that consists of avoiding climate change or creating healthy self-disciplining citizens. The normative orientation of philosophy would recommend careful attention to such value-laden implications. Even if philosophers of science admit the loftiness of their idealization of science—maybe it’s just a distant hope—there remains the task of justifying that desired state of affairs to persons who might not share their vision for science in society. I may, for instance, believe that science includes empirical research conducted for policy purposes and that philosophers should have something to say about the goals driving it (from choice of problems to theory appraisal). Considering alternative visions of science (e.g. technoscience) will help make these two criticisms more concrete.
3.2 Alternative imaginaries of science available beyond philosophy
We should primarily understand Douglas as responding to a particular debate about value-free science. In that sense, she presents a necessary correction to the imaginary of science as value-free by formulating an opposing imaginary. But if her mapping of the “moral terrain” is to provide a new responsibilist framework for philosophy of science, there are some additional obstacles to consider. In her main arguments, Douglas often imagines science as a single identifiable practice, independent of university professorships, federal administration, and biotech start-ups. Recall, for example, that her idealized scientific community is circumscribed by its pursuit of “reliable empirical knowledge.” Douglas also distinguishes between “the value of knowledge” and “social value,” which (without some qualification) neglects her overarching insight that scientists frequently make policy and, in general, influence social order. More recently, she has suggested that the “most important thing to know” about science is its “critical and inductive nature” (Douglas 2017). Unfortunately, these epistemological definitions—reminiscent of basis 1 in “Moral Terrain”—orient Douglas’s account towards science that solves internal epistemic puzzles. This is disappointing given that Douglas clearly acknowledges the extent to which even sociologically-oriented philosophy of science, under the influence of Kuhn, has artificially bounded science (2009, p. 61). On one reading, Kuhn uses The Structure of Scientific Revolutions to hide a vision of autonomous science within socially-rich imagery, to create a “social” picture of science that actually neglects society. Steve Fuller (1992), for instance, sees this as the consequential and anti-democratic misstep within philosophy of science.
Taken to the extreme, this ‘Kuhnian’ imaginary of science presupposes a clearly bounded scientific community, a consistent empirico-methodological core, and a surrounding society that is nonetheless clearly separable from it.
Do all philosophers of science uncritically promote this view of science as an isolated practice? As illustrated above by the responses to Douglas within philosophy of science, her idealization of science has not been taken up uniformly or without criticism across the scholarly community. More systematic empirical analysis of our own philosophical habits and texts would be needed to determine whether one or more dominant imaginaries of science underpin work in philosophy of science. Nevertheless, the limitations associated with this particular idealization should trigger a moment of reflection about what inspires or supports our own imaginaries. It is notable that Douglas herself has recently called for greater attention to the “loom” – that is, the institutional structures and norms – that enable and constrain the “tapestry” of science (2018). In the spirit of fulfilling this vision, it is instructive to look beyond disciplinary philosophy and compare philosophers’ implicit visions of science with those in other disciplines.
3.2.1 “Mode 2” science and the technoscientific imaginary
In contrast to much philosophy of science, literature in STS typically portrays the boundedness of science as an achievement, the result of deliberate social and discursive work by practitioners. What is “inside” or “outside” of science, “applied” or “pure,” is not given in advance, but actively contested and negotiated. Gieryn (1983) describes such “boundary work” as a practical problem for scientists, impacting the allocation of funding, respect, and epistemic credibility, among other things. Researchers can, for example, build up a distinction between pure and applied science, such that they are not responsible for failed technological applications or for the unjust biomedicalization of “atypical” bodies and lifestyles.Footnote 3 Or, as is more common today, they can undermine the distinction such that science can ally itself with industry and remodel society in the name of “innovation” or “disruption.” Latour (1991) takes the contingency of boundaries even further to suggest that the modern imaginary of Science stands or falls with our faith in a transcendent Nature that can be purified of culture or politics. Scientists then position themselves as the exclusive representatives of objects and natural things, portraying everyone else as operating in the messy world of human interests or passions. We need not, according to Latour, take this dichotomy for granted; through the lens of networks, the spheres of culture and nature are not so cleanly separated. Overall, these critical examinations of science challenge us to ask how an idealization is sustained, and to ask who benefits and who is harmed by a given idealization of science.
More to the point, the specific character of scientific boundary-drawing described in recent years seems to stray from Douglas’s idealization of science, providing an instructive counter-imaginary. Nordmann (2012), for example, stresses that much of contemporary science (i.e. “technoscience”) is oriented towards “knowledge of control” and capacity-building, both of which evade the attention of narrower philosophical discussions of scientific knowledge. Bensaude-Vincent et al. (2011) argue further that scientists can now study and present objects explicitly in terms of their potential value to humans, non-scientists included. The authors suggest that much of today’s most heralded research falls into this category of “technoscience,” despite the fact that it clashes with the more “pure” vision of science proposed by philosophers like Francis Bacon and his intellectual successors. At a higher level of analysis, these phenomena have also been described as part of the emergent “triple helix” of science, industry, and government collaboration (Etzkowitz and Leydesdorff 1998) or the regime of “Mode 2” science (Gibbons et al. 1994). Each of these empirically-based understandings of science shows it to be more porous and less obviously bounded than philosophers like to think.
3.2.2 Societal consequences of the technoscientific imaginary
Most significant, at least for the relationship between imaginaries and normative understandings of science, is the way in which these descriptions of “Mode 2” science and technoscience are not merely academic idealizations but are promoted and enacted in society. Some technoscientific researchers, for instance, deny their isolation from society by invoking a porous imaginary of science, an unintentional echo of STS. Previous empirical studies have shown that researchers will often describe themselves as cogs in a machine, lacking true control—and thus responsibility—over the direction of their projects (e.g. Swierstra and Jelsma 2006). Others have reported that technoscientific researchers will even set aside personal uncertainty about the social trajectory of science in order to match the values of funding agencies or other knowledge users (Brown and Michael 2003). In these cases, describing science as deeply entangled in society has the same effect as describing it as wholly autonomous; it becomes difficult to hold researchers accountable for the values they espouse or the technologies they enable. Boundary work, thus, can function just as well by removing distinctions as by building them up.
Set aside for a moment the issue of which imaginary is correct or even most practical.Footnote 4 The consequences of adopting a technoscientific imaginary are significant for our reasoning about science. As envisioned within technoscience, the influence of institutional structures, funding priorities, and disciplinary identities can be used to pose a very simple critique of Douglas’s prescriptive claims. An individual cannot be held responsible without some combination of freedom, causal sufficiency, and factual understanding of possible consequences. And technoscientific actors might deny having any of the three conditions! If, for example, funding agencies only further unethical values, then the individual researcher might cite this as a severe constraint on their agency. Or, similarly, if the disciplinary identity of the neuroscientist is subsumed by pressure to become an excellent entrepreneur, then failing to fulfill “scientific responsibility” may be unavoidable for some. Scrambling for intellectual property might be expected, despite the negative effects on an open community of inquiry. One could thus read the “Mode 2” imaginary as disarming Douglas’s attributions of scientific responsibility.Footnote 5
However, the problem with simply or conveniently presupposing Douglas’s idealization of science is more general and more pervasive than a first-order disagreement about scientists’ agency or lack thereof. Her account of responsibility depends on a certain way of thinking about science and the way that it is ordered. Like much of philosophy of science, this account is partially empirical, relying on the author’s familiarity with real world practice, and partially a priori, holding some features as fixed or as definitive. Douglas’s idealization, in particular, builds on a long and well-regarded history of philosophical debate about science and can only improve as details from actual sciences are filled in, as the “moral terrain” is given some much-needed topographical detail. But juxtaposing the philosopher’s imaginary of science with the STS literature on technoscience reveals the fundamental idiosyncrasy of the philosophical imagination. Our go-to idealizations of science or quick definitions should be more controversial than they are. While there is nothing inherently wrong with being idiosyncratic, philosophy of science should confront its choice of imaginaries.Footnote 6
3.3 Responsible imagining in philosophy of science and elsewhere
Science is an isolated community of epistemic practice. Science is a pro-active force for societal change and progress. Science is technoscience. How do philosophers of science (among others) deal with these competing statements? As illustrated by Douglas and the debate over values in science, the creative flexibility of imagination has supplanted the analytic foundation of definition. Even when a philosopher provides a nominal definition, the arguments that follow often play fast and loose with idealizations and actualities, switching seamlessly between features that we want science to have and features that it has in spite of us. Though Laudan (1983) has banished words like “unscientific” to the world of politics or Edinburgh SSK, many philosophers rely on an ability to introduce their idealized vision of science into discussions of practical import and real world problems. Douglas does so in order to start a conversation about responsibility. Helen Longino’s (2002) critical contextual empiricism, to take another example, turns her commitment to liberal society into an account of good science. In this sense, philosophers of science are not so different from everyone else. As described by Jasanoff and Kim (2015), sociotechnical imaginaries help scientists, policy-makers, and publics think about and organize science in society, even as its exact definition or boundary is contested in day-to-day life. The difference, here, is that we philosophers have the opportunity to hold ourselves to a higher standard, reflecting on our choice of idealization and its relationship (if any) to messy human practices.
I like to refer to this reflective habit as practicing an ethics of imagination: a careful consideration of the potential benefits and perils associated with particular acts of imagination in philosophy of science. The task is too great to achieve once and for all in the space of a paper, but it is instructive here to briefly describe some similar debates about idealization in adjacent areas of philosophy. These conversations are not all recent and have been conducted in different terms—the concept of imagination is mostly absent and the STS concept of “sociotechnical imaginary” is non-existent—but worries over the role and impact of ideal theory in philosophy overlap significantly with my own provocation for philosophy of science. In short, these analogous debates suggest that irresponsible imagination brings a pretense of philosophical progress while frequently neglecting the question of who benefits from, and who is excluded by, the process of idealizing science.
3.3.1 Critiques of ideal theory in philosophy
In his widely cited “‘Ideal Theory’ as Ideology,” Mills (2005) explains the problematic role of the ideal in moral theory and political philosophy. The term “ideal,” for him, is not meant to convey the mere presence of grand normative concepts; rather, it refers to a philosophical heuristic that he calls the “ideal-as-model.” Such models, he explains, can be based on reality (i.e. “descriptive”) or taken to the extreme (i.e. “idealized”) such that the actual is forgotten altogether. And when the actual is left behind, violence, oppression, and other real world challenges tend to fall away, set aside for someone else to address in the indeterminate future. This form of abstraction is not innocent, he argues, and functions in a variety of ways to exclude the plight of the most marginalized in society. It is for this reason that he labels ideal theory an ideology, a “distortional complex of ideas, values, norms, and beliefs” that fails to represent the experiences of women, people of color, and the working class.
Held (1989), among others, has directed similar complaints at specific uses of Rawlsian ideal theory to analyze justice. The philosophical insights derived from idealization may be intellectually engaging, but they are so distant from reality that resulting conceptions of the just society may seem impossibly or offensively abstract. In response, Held calls for “applicable” moral theories that hew closer to lived experience and history. Walzer (1983) too has proudly declared that he would rather stand “in the cave”Footnote 7 than survey society from far above. There is, he observes, a meaningful sense in which our “social world” is created in the mind (i.e. with imagination) and is specific to our socio-cultural position. In sum, these statements should not be confused with simple calls for better correspondence between moral or political theory and reality. The lesson, I propose, is that imagination can be misused and even cause harm.
Recall the philosopher’s imaginary of science, taken here to be exemplified by Douglas’s internalist description of science: “an iterative, ampliative process of developing explanations of empirical phenomena.” In this case, the maintenance of an epistemically-bounded practice fulfills our modern, culturally-specific aspirations for a purified nature, in which citizens can watch carefully crafted technical performances and ascertain the truth for themselves. As a result of this imaginary, the scientist earns a special role in democracy, providing ostensibly apolitical facts to feed into the processes of governance and policy-making (Ezrahi 1990); so-called interest groups, including patient activists, may be excluded as biased. “Scientific” responsibility, in this picture, can be stringent within the community, but it neglects the many grey areas or “trading zones” (Galison 2010) where theorists and technicians interact. Violence committed on behalf of scientific progress is not mentioned.Footnote 8 And the lack of diversity among the practitioners of science is bracketed as a temporary deviation as opposed to a constitutive feature.Footnote 9 These choices in formulating an image of science are neither value-neutral nor inevitable.
More fundamentally, this outcome of a particular idealization also reveals how the relatively simplistic ontology of the philosopher’s imaginary fails to capture the countless ways in which society and its contingent values serve as a condition of possibility for science. Post-Kuhnian analyses of science in STS, disability studies, and feminist critiques each highlight what is missing. We learn that “good” scientific reasoning is situated in a constellation of widely-shared implicit normative commitments, which may be racist, sexist, or neoliberal (among other things). Public discourse, for instance, shapes what types of bodies or minds are deemed broken and what projects are deemed fundable or ethical. One need only look at the beginning or end of papers in Science or Nature, where the authors echo and reinforce these commitments (e.g. ‘x number of people suffer from condition y’, ‘male brains are more ... than female brains’). These interactions between society and science are as significant as they are ubiquitous and, as such, we should not exclude them from our reasoning without weighing the consequences and alternatives.
Some philosophers will reject these criticisms as attacking a non-existent position or method. Yet, Mills’s (2005) point, at least in part, is that irresponsible abstraction is grounded in the unrepresentative lived experience of the philosopher. The problem is personal, not just logical or methodological. Mills notes that the social position of the theorist—likely a man, upper class and white—influences what is maintained and what is ignored when reasoning shifts from the actual to the idealized. Williams (1985) observes a similar dynamic; he claims that despite the philosopher’s pretense to see the world sub specie aeternitatis, with the associated aura of timeless objectivity, we cannot separate the philosopher as theorist from the philosopher who has a certain life and character. This is likely true for any cognitive process of philosophical imagination, whether the idealization in question is an egalitarian society or a global response to climate change. Yet, the effects of subjectivity could be especially pernicious in philosophy of science, where there is a large physical and conceptual distance between the philosophy department and the day-to-day exercise of technoscientific power.
Much of present-day graduate training in philosophy does not encourage starting “in the cave” or in the field, as Walzer recommends. It does not train aspiring scholars to spend time in labs or tune into congressional hearings on science policy. Likewise, graduate curricula rarely provide rigorous methodological preparation to interview individuals who have, for example, experienced doubt about a new form of prenatal genetic screening or individuals who have been studied against their will. So it is not surprising that a recent JSTOR search of Philosophy of Science yields only 12 hits for ‘stigma’, 25 for ‘disability’, 83 for ‘federal’, 81 for ‘patent’, 175 for ‘industry’, 244 for ‘women’, and 37 for ‘racism’ over the course of the journal’s 88-year history.Footnote 10 We could, of course, interpret these absences as nothing more than a case of differing priorities. But that is to miss the point; there is just not much in departmental life that could upset the philosopher’s implicit imaginaries or contradict a 20th century imaginary of value-free science.
At the same time, the isolated character of the philosopher’s intellectual life shines through when we write about science. Many philosophers of science can recite the general idea of The Structure of Scientific Revolutions from memory, whether admiringly or not. Arguments and evidence, texts and disciplines, these things are present in our reasoning, but so much else can be left out, from the 1980 Bayh–Dole Act to the regime of technoscientific promises (Joly 2010). Those are more likely to be found in the self-understandings of synthetic biologists or in the STS literature than in the work of philosophers of science. As Dewey puts it, “[The] ideals that guide us are generated through imagination. But they are not made out of imaginary stuff. They are made of the hard stuff of the world of physical and social experience” (1934/2008, 9:33). Unfortunately, the philosophical form of life may be well-suited to avoiding some foundational questions: what would it take to explicitly justify our chosen imaginaries of science? And who has been excluded from our imagined social world? To avoid both questions amounts to a dual failure: a failure to consider injustices in society and a failure to fulfill our own disciplinary virtue, self-reflection.
Some philosophers of science have already resisted this trend in the context of narrower discussions. Bätge et al. (2013), for instance, employ Mills’s (2005) worries about ideal theory to critique Philip Kitcher’s chosen imaginary of well-ordered science. In my own work, I have advocated for greater attention to the normative content of imaginaries in the study of scientific “repertoires” (Sample 2017). Others, meanwhile, have put forward the more programmatic lessons that the field needs. Wylie (2012) connects “‘Ideal Theory’ as Ideology” to ongoing conversations about feminist approaches to philosophy of science, specifically those relying on standpoint theory and the view from the margins. Harding (2004), too, argues that philosophers of science must attend to the “social locations and political struggles” that are implicated in knowledge practices. Standpoint theory can help achieve this, she explains, by extending our inquiry to the “context of discovery,” by considering the role of group consciousness in knowledge production, and by acknowledging that philosophy of science has a “political unconscious.” Across these rejections of ideal theory, we are asked to reflect on the role of imagination in both doing science (as a practitioner) and analyzing it (as a philosopher). Nevertheless, one need not be an epistemologist or standpoint theorist to appreciate the importance of responsible imagination. As described earlier, adjacent work in STS depicts a wealth of technoscientific and “sociotechnical” imaginaries—oppressive or emancipatory, outdated or emerging—with which we can compare our own idealizations, teaching us how we might imagine otherwise.
3.4 Conclusion: institutionalizing ethics of imagination through context
It is a strength of our field that we can use idealization and imagination in a similar way to publics, practitioners, and policy-makers; that is, we craft visions of science that are not limited by rigid adherence to empirical detail or by the immediate practical needs of institutions. For this reason, I am not recommending that all philosophers of science become sociologists of knowledge or that they substitute description for all idealization. But the process of imagining Science, as good or as bad, is tied to the context of imagination. As mentioned earlier, this fact creates a few challenges. Practically, philosophers’ idiosyncratic insights and prescriptions might go unheeded by non-philosophers. Our old-fashioned examples, thought experiments, and writing might fail to resonate with the worldly scientist of today, who may be as concerned with collecting patents as with the context of justification. Worse, philosophical reasoning on science may ignore, and perhaps reinforce, forms of injustice and suffering in the process of idealization.
What is the solution? Changing the context of philosophical reasoning could take many forms. Creating a diverse community of philosophers who can draw on complementary lived experiences is an obvious start. Many philosophers have noted that there is much work to be done in this respect.Footnote 11 More STS-minded philosophers might also stress the importance of empirical observation or closeness to experience; with the help of empirical methods in the imaginaries literature, they might start with what exists outside the office. Such a re-orientation towards science-in-the-world is an excellent place to start, and some philosophers already have a substantial head start. However, even devoted armchair philosophers could benefit from spelling out exactly which (whose) desirable futures they are taking for granted when doing analytic work.
Regardless, ethics of imagination does not dictate that we discuss only one, good imaginary or only the “accurate” imaginary of science. Accuracy is a poor fit for something that is at least partially ideal and almost completely perspectival. Instead, I propose an ethics of imagination in which we take up or reject these imaginaries with care and deliberation. As in Gieryn’s sociological work on boundaries, we should ask who wins and who loses within our favorite imaginary. We should ask how responsibility might be distributed, who gains epistemic authority, and how democracy is re-configured. There are surely many other questions to be asked, but most of all we should imagine science or technoscience with a keen awareness of how it could affect fellow members of society, especially those with the least power. As so many scholars have shown, collective imagination matters, structuring identities, practices, and society at large. To the extent philosophers of science are part of this process, we must take care that our contributions are as responsible as they are creative.
Footnotes
Footnote 1. It is for this reason that Smith proposes “institutional ethnography” as a method to reveal and critique ideology inherent in human work.
Footnote 2. See Mills (2005) in particular for a discussion of ideals and idealization in philosophy.
Footnote 3. See Clarke et al. (2003) for a review of sociological work on biomedicalization.
Footnote 4. Bensaude-Vincent (2008) and Bensaude-Vincent and Loeve (2018), considering the initial popularizations of “technoscience” by Bruno Latour and Gilbert Hottois, have suggested that the term is itself a counter-ideal and may not always have consistent positive content beyond a rejection of accounts of science as pure.
Footnote 5. It is worth noting that Douglas acknowledges that individuals may have to “collectivize” certain responsibilities that are too big for one person, as in the creation of the National Science Foundation or the National Institutes of Health (“Moral Terrain,” section 4); a lack of causal sufficiency is not an automatic excuse for the individual.
Footnote 6. More politically-oriented philosophers of science, Popper and the Vienna Circle included, show that there is a precedent, however imperfect, for this sort of reflection; each links their visions for science to a particular understanding of social order, like liberal democracy.
Footnote 7. “[...] in the city, on the ground.” It is not entirely clear whether the statement entails a descriptive “ideal” as mentioned by Mills (2005) or just a general commitment to particularism.
Footnote 8. Compare to Visvanathan (1997) for an instructive contrast.
Footnote 9. In the United States, for example, many “minority” groups are excluded from STEM occupations, despite comprising an ever larger proportion of the total population (National Academy of Science 2010).
Footnote 10. As a very informal point of reference, the same search yields 6857 hits for the word ‘science.’
References
Bätge, D., Blundell, A., Gerr, W. D., Gotthelf, A., Hüsing, B., & Liesert, R. (2013). Well-ordered science in a not well-ordered society. In M. Kaiser & A. Seide (Eds.), Philip Kitcher: Pragmatic Naturalism (pp. 77–90). Frankfurt: Ontos.
Bensaude-Vincent, B. (2008). Technoscience and convergence: a transmutation of values? Summerschool on Ethics of Converging Technologies, Dormotel Vogelsberg, Omrod/Alsfeld, Germany. Retrieved January 23, 2022 from https://halshs.archives-ouvertes.fr/halshs-00350804.
Bensaude-Vincent, B., & Loeve, S. (2018). Toward a philosophy of technosciences. In S. Loeve, X. Guchet, & B. Bensaude-Vincent (Eds.), French Philosophy of Technology (pp. 169–186). Cham: Springer.
Bensaude-Vincent, B., Loeve, S., Nordmann, A., & Schwarz, A. (2011). Matters of interest: The objects of research in science and technoscience. Journal for General Philosophy of Science, 42(2), 365–383.
Biddle, J. B., & Kukla, R. (2017). The geography of epistemic risk. In K. Elliot & T. Richards (Eds.), Exploring Inductive Risk: Case Studies of Values in Science (pp. 215–237). New York: Oxford University Press.
Brown, N., & Michael, M. (2003). A sociology of expectations: Retrospecting prospects and prospecting retrospects. Technology Analysis & Strategic Management, 15(1), 3–18.
Castoriadis, C. (1998). The imaginary Institution of Society. Cambridge: MIT Press.
Churchman, C. W. (1948). Statistics, pragmatics, induction. Philosophy of Science, 15(3), 249–268.
Clarke, A. E., Shim, J. K., Mamo, L., Fosket, J. R., & Fishman, J. R. (2003). Biomedicalization: Technoscientific transformations of health, illness, and U.S. biomedicine. American Sociological Review, 68(2), 161–194.
Dewey, J. (1934/2008). Faith and its object. In J. Boydston (Ed.), The later works: 1925–1953. (Vol. 9). Carbondale: Southern Illinois University Press.
Dotson, K. (2012). How is this paper philosophy? Comparative Philosophy, 3(1), 3–29.
Douglas, H. (2009). Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Douglas, H. (2014). The moral terrain of science. Erkenntnis, 79(5), 961–979.
Douglas, H. (2017). Science, values, and citizens. In Eppur si muove: Doing History and Philosophy of Science with Peter Machamer (pp. 83–96). Springer.
Douglas, H. (2018). From tapestry to loom: Broadening the perspective on values in science. Philosophy, Theory, and Practice in Biology, 10(20210309).
Douglas, M. (1986). How Institutions Think. Syracuse University Press.
Elliott, K. C. (2011). Direct and indirect roles for values in science. Philosophy of Science, 78(2), 303–324.
Elliott, K. C., & McKaughan, D. J. (2014). Nonepistemic values and the multiple goals of science. Philosophy of Science, 81(1), 1–21.
Etzkowitz, H., & Leydesdorff, L. (1998). The endless transition: A triple helix of university-industry-government relations. Minerva, 36(3), 203–208.
Ezrahi, Y. (1990). The Descent of Icarus: Science and the Transformation of Contemporary Democracy. Cambridge MA: Harvard University Press.
Fuller, S. (1992). Being there with Thomas Kuhn: A parable for postmodern times. History and Theory, 31(3), 241–275.
Galison, P. (2010). Trading with the enemy. In M. E. Gorman (Ed.), Trading Zones and Interactional Expertise: Creating New Kinds of Collaboration. Cambridge MA: MIT Press.
Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M. (1994). The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage.
Gieryn, T. F. (1983). Boundary-work and the demarcation of science from non-science: Strains and interests in professional ideologies of scientists. American Sociological Review, 48(6), 781–795.
Harding, S. (2004). A socially relevant philosophy of science? resources from standpoint theory’s controversiality. Hypatia, 19(1), 25–47.
Held, V. (1989). Rights and Goods: Justifying Social Action. Chicago: University of Chicago Press.
Jasanoff, S. (2010). A new climate for society. Theory, Culture & Society, 27(2–3), 233–253.
Jasanoff, S., & Kim, S.-H. (2015). Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. Chicago: University of Chicago Press.
Joly, P.-B. (2010). On the economics of techno-scientific promises. In Débordements: Mélanges offerts à Michel Callon (pp. 203–222). Presses des Mines.
Latour, B. (1991). We Have Never Been Modern. Cambridge MA: Harvard University Press.
Laudan, L. (1983). The demise of the demarcation problem. In Physics, Philosophy and Psychoanalysis (pp. 111–127). Springer.
Longino, H. (2002). The fate of knowledge. Princeton: Princeton University Press.
Marcus, G. E. (1995). Technoscientific imaginaries: Conversations, profiles, and memoirs (Vol. 2). Chicago: University of Chicago Press.
Merton, R. K. (1938). Science and the social order. Philosophy of Science, 5(3), 321–337.
Mills, C. W. (2005). ‘Ideal theory’ as ideology. Hypatia, 20(3), 165–183.
National Academy of Science (2010). Expanding underrepresented minority participation: America’s science and technology talent at the crossroads. National Academies Press.
Nordmann, A. (2012). Object lessons: Towards an epistemology of technoscience. Scientiae Studia, 10(spe), 11–31.
Rock, P. (1979). Making of symbolic interactionism. Macmillan.
Rudner, R. (1953). The scientist qua scientist makes value judgments. Philosophy of Science, 20(1), 1–6.
Sample, M. (2017). Silent performances: Are ‘repertoires’ really post-Kuhnian? Studies in History and Philosophy of Science Part A, 61, 51–56.
Smith, D. E. (1987). The everyday world as problematic: A feminist sociology. Toronto: University of Toronto Press.
Swierstra, T., & Jelsma, J. (2006). Responsibility without Moralism in Technoscientific Design Practice. Science, Technology & Human Values, 31(3), 309–332.
Taylor, C. (2004). Modern Social Imaginaries. Durham: Duke University Press.
Visvanathan, S. (1997). A carnival for science: Essays on science, technology, and development. Oxford: Oxford University Press.
Walzer, M. (1983). Spheres of Justice: A Defense of Pluralism and Equality. New York: Basic books.
Williams, B. (1985). Ethics and the Limits of Philosophy. Cambridge: Harvard University Press.
Wylie, A. (2011). Women in philosophy: The costs of exclusion. Hypatia, 26(2), 374–382.
Wylie, A. (2012). Feminist philosophy of science: Standpoint matters. Proceedings and Addresses of the American Philosophical Association, 86(2), 47–76.
Open Access funding enabled and organized by Projekt DEAL.
Sample, M. Science, responsibility, and the philosophical imagination. Synthese 200, 79 (2022). https://doi.org/10.1007/s11229-022-03612-2