The Golem and The Leviathan: Two Guiding Images of Irresponsible Technology

What does it mean to be irresponsible in developing or using a technology? There are two fundamentally different answers to this question, and they each generate research strands that differ in scope, style and applicability. To capture this difference, I make use of two mythical creatures of Jewish origin that have been employed in the past to represent relationships between man and man-made entities: the Golem (Collins and Pinch, 2002, 2005) and the Leviathan (Hobbes, 1994). The Golem is the traditional image of technology as a creature that can be helpful but needs to be controlled. Irresponsibility in this perspective is the failure to exercise control. The Leviathan is the image of technology as a difficult compromise between fundamental values. Irresponsibility in this perspective is allowing some values to systematically dominate others. Having worked out the basics of these images, I show that each comes with its specific methodological challenges: where the Golem gives rise to the Collingridge dilemma of control, the Leviathan gives rise to the Münchhausen trilemma of justification. Since the Golem image is predominant in scholarship on irresponsibility, I conclude with an appeal for a more equal distribution of efforts in conceptualizing technologies as Golems and as Leviathans.


Where the Golem approach gives rise to the Collingridge dilemma of control, the Leviathan approach gives rise to the Münchhausen trilemma of justification. In Sect. 6, I conclude that we should distribute our efforts more equally between seeing science as the Golem and seeing it as the Leviathan.

The Golem: Irresponsibility as Recklessness
Seeing technologies as Golems is the starting point of a normative approach to technology that is focused on control. How do we reap all the benefits while avoiding all the hazards? How do we make the Golem work for us and not against us? These are the fundamental questions.
But there is an inherent dilemma at the core of this approach: "When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming" (Collingridge, 1982: 11). The challenge is therefore to organize the social control of technology in such a way as to avoid both hasty intervention (low knowledge, high flexibility) and late intervention (high knowledge, low flexibility). Collingridge's own answer is still recognized as valuable. His approach is based on two principles: fallibilism (the assumption that we are prone to make mistakes in evaluating the social effects of emerging technologies) and incrementalism (small, reversible changes instead of large, utopian changes). These two ideas are captured in the following passage: "in making a decision under ignorance two things are essential; the ability to discover information which would show the decision to be wrong, and the ability to react to this information if it ever comes to light. Whatever decision is made, factual information which would reveal it to be wrong must be searched for, and if it is found, the decision must be revised" (Collingridge, 1982: 30). The double remedy of fallibilism and incrementalism should help us avoid the entrenchment of new technologies with undesired effects - that is, it should help us avoid the situation of a dangerous Golem 'at large' and difficult to control.
The dilemma of control has been one of the core ideas in various normative approaches to technology (Buckley et al., 2017; Genus & Stirling, 2018). I have in mind fields such as responsible research and innovation (Koops et al., 2015; Owen et al., 2013; Rip, 2014; van den Hoven et al., 2014) and public engagement with science (Krabbenborg & Mulder, 2015; Macnaghten, 2020; Wilsdon & Willis, 2004), and many other fields where scholars and practitioners focus on the question of intervention, i.e., control. Yet despite this great influence of the Collingridge dilemma, it has never been noted that the idea has a much longer history, one that goes back at least as far as early modern political thinking (Harvey, 2007). The dilemma appears clearly in Machiavelli's work, where it arises in relation to the notion of control - not of technology, but of society: "When provided for in advance, [political] disorders can be cured; but if you wait until they are upon you, medicine is too late - the disease has become incurable. It is the same with pulmonary tuberculosis: doctors say that it is easy to cure in its incipient stage, but hard to recognize; yet as time goes on, if not recognized and treated at the outset, it becomes easy to recognize and hard to cure" (Machiavelli, The Prince, 175-180). Tuberculosis seems to have been the preferred metaphor for hazard within the body politic. The metaphor had been used by, among others, Cicero (Letters to Atticus 2.1.7), Seneca (On Clemency 1.9) and Aquinas (De regimine principum 1.2; 4.23). The idea is clear: you want to avoid disease of the state as you want to avoid disease of the body, but the disease is either (i) easy to cure and hard to recognize, or (ii) easy to recognize and hard to cure. Collingridge missed this piece of philosophical heritage: he does not mention Machiavelli or the body politic metaphor anywhere in his work.
Yet it is important to bring this heritage to light because it reveals a feature that is only subtly apparent in Collingridge's work: the dilemma of control occurs in a monist environment. That is, the dilemma of control arises most pertinently against a moral background of fixed normative ideals regarding the future - a background from which major conflicts and incompatibilities are taken to be resolved (Connolly, 2005; Lassman, 2011). The disease and its consequences for the body politic are incontestably bad. The only remaining questions are those of proper diagnosis (What kind of disease is it?) and treatment (What intervention is needed?). It is unimaginable that someone would prefer tuberculosis to the healthy body, that someone would prefer a diseased body politic to a healthy one. When asking questions of control, normative ideals and the good/bad distinction are assumed to be fixed. This is apparent in the image of the Golem who, remember, is either serving or destroying its masters.
What is irresponsibility in this perspective? To both Machiavelli and Collingridge, irresponsibility means being too far away from the process you seek to control. Since our decisions are fallible and since interventions need to be swift, it is irresponsible to let the "clumsy" Golem loose, allowing it to function according to its own internal dynamics. Irresponsibility is therefore a form of recklessness. Even though humans are not the immediate cause of the disaster (the Golem is), they are its ultimate cause and thus morally responsible. It is no surprise, then, that scholars working within this perspective have been focused on devising methods for intervention. Since the 1980s, a plethora of methods, criteria, requirements, and other normative tools have been developed and deployed for making sure that various Golems are properly controlled (for an overview, see Stilgoe et al., 2013). The need for prompt intervention when the Golem 'betrays its masters' is equally apparent, and this justifies the continuous development of social and technical tools for 'upstream' intervention in early stages of technology development (Wilsdon & Willis, 2004). This was precisely Machiavelli's suggestion: the prince should move his court into a newly conquered state to achieve this form of readiness and availability.
I assume that the value of the Golem as a guiding image is beyond doubt. It is indeed vital to keep in mind, particularly when considering new technologies, that the artefacts we build can work not only for us but also against us. But as a guiding image the Golem only functions well when we can unproblematically distinguish the good from the bad. Events such as the tragic explosion of the Challenger space shuttle or the long-term diffusion of lead across the Earth through the use of leaded gasoline (one of Collingridge's examples) are cases where the good is neatly separated from the bad. Yet not all technologies are like that. The disease, to go back to Machiavelli's metaphor, is not always as obvious as tuberculosis, and it might not even be all that obvious that it is a disease. Rather, technologies can sometimes appear more clearly as trade-offs between different values. For this, an alternative guiding image is necessary.

The Leviathan as an Alternative Guiding Image
In the Old Testament, the Leviathan appears as a serpent-like creature that is killed by God. Yet in philosophy it acquired its long-lasting fame as the central image of Thomas Hobbes's 1651 political treatise Leviathan or The Matter, Forme, and Power of a Common-Wealth Ecclesiasticall and Civill. In this account, the Leviathan ceases to be a mythical monster from the seas and is instead enlisted to represent the man-made institution that guarantees the protection of some values to the detriment of others. The Leviathan is thus the personification of a social contract.
Although the Leviathan gives Hobbes's treatise its title and is depicted in its famous frontispiece (see Fig. 1), it is striking that the term only appears three times in the entire work. What is more, the three appearances "give us very divergent images of that enigmatic political beast" (Tralau, 2007: 61). The Leviathan is created not by nature but "by art" (i.e., by the craft of men), and we learn, as the frontispiece already suggests, that it takes the form of an "artificial man" (Leviathan, I. 1). Later on, however, the Leviathan is referred to as "one man or assembly of men" that "bears the persons" of the individual citizens, but in the same paragraph the Leviathan is also revealed as a "commonwealth", a "sovereign" and a "Mortal God, to which we owe under the Immortal God, our peace and defense" (Leviathan, II, 85-88). The Leviathan therefore has the double nature of artificiality and manhood; it is both man-made and man-like. The frontispiece provides a visual representation of this duality: the Leviathan is quite clearly a humanoid presence; it has a face, two arms, and even a mustache; but it is a giant many times greater than the rolling hills in front of him and the village at the bottom. Furthermore, the body is eerily made of many smaller individuals - representing the whole of society - who are positioned such that they face the Leviathan. It is only the Leviathan who faces us, the readers.
To understand the symbolism of the Leviathan, and its relevance for evaluating technology, we must remember that Hobbes was writing in a political context of great uncertainty about the status of the monarchy and the status of law more generally. The English Civil War had broken out in 1641-1642 and had brought the country to a constitutional impasse in which the King and the Parliament were validating laws independently of one another. As a result, value hierarchies that were once indisputable became subject to doubt. Men were thrown back into what Hobbes famously called the state of nature. In the state of nature, we exercise all our natural rights with full liberty - since there is no law, or there is a multitude of divergent laws. The state of nature is a "war of all against all" which breeds "continual fear, and danger of violent death," making "the life of man, solitary, poor, nasty, brutish, and short" (Leviathan, I. 63). Under these circumstances, Hobbes tells us, it is only natural to seek a new hierarchy, a new contract, preferably on more stable grounds. This contract is needed because none of the many values that animate society - justice, freedom, mercy, entrepreneurship, intelligence, patience, wisdom, etc. - is powerful enough to dominate the rest. This is when the Leviathan comes into play: a man-made entity which stands for our communal decision to prioritize some values over others in order to escape the chaos of the state of nature.
Hobbes develops this view of the Leviathan in a way that is strongly colored by the context in which he was writing. He tells us that the Leviathan must necessarily be born from our rational preference for security at the cost of liberty, thereby making the first step in a contractarian political philosophy: "[one would] lay down this right to all things and be contented with so much liberty against other men, as he would allow other men against himself" (Leviathan, I, 64). But for present purposes we do not need to take this particular trade-off as defining the Leviathan. Rather, the Leviathan can represent any decision of this type, where some values are served while others are disserved. As the prototypical image of a societal compromise, the Leviathan is the counterpoint image needed in the normative study of technology.
To regard a technology as a Leviathan means to view it as a trade-off between two or more values. The image of the Leviathan assumes a pluralist moral environment populated by a plurality of values, none of which clearly and constantly dominates the others. Unlike the Golem, who either saves us or destroys us, the Leviathan saves us in some sense and simultaneously destroys us in another. I will work out the normative implications of this image in the next section. Here I want to illustrate it by looking at an example.
The global dependence on fossil fuels is increasingly recognized to be an economic, ecological, and geopolitical vulnerability. This much is clear - but what is the solution? The technologies that dominate the picture convert renewable energy (solar or wind) into electricity (Chu & Majumdar, 2012; Egli et al., 2018). However, we do not use electricity only when and where the sun shines and the wind blows. Because of this, electricity that is not used right away must be stored both short-term and long-term (Detz et al., 2018; Gust et al., 2009; Krassen et al., 2011). The solution of green electricity thus brings about the problem of storage. The general solution so far has been to store this surplus of renewable electricity within the grid. But using the grid in this way has its limits, and as the fluctuations become bigger the need to compensate by using fossil fuels also increases. Additionally, the decision to turn renewable energy into electricity commits us to the use of highly complex energy plants that can only be operated by large companies that have the knowledge, human resources, and capital. The energy system thus remains highly centralized. At the same time, it is not at all obvious that technologies for renewable electricity, centralized and expensive as they may be, can be upscaled by such a factor that they actually solve the original problem, which is our dependence on fossil fuels (Faunce et al., 2013; Thapper et al., 2013). An alternative pathway would be to store this surplus energy in chemical bonds and thus create what are known as solar fuels. Currently, however, the field of solar fuels remains in its incipient stages. Already-existing technologies such as water electrolysis are expensive and inefficient, while future technologies such as solar fuels through artificial leaves are in the stage of fundamental research (Detz et al., 2018).
To all this we must add that some areas of energy consumption such as long-distance transportation and some industries (cement and steel) cannot directly be decarbonized by using green electricity since they require fuels or heat (Thiel & Stark, 2021).
Technologies for green electricity can be seen as a Leviathan. They represent our commitment to a greener future, sure, but more importantly they represent a specific answer to the question of what this future will look like in terms of values served and values disserved. The value of mobility is highly served - as personal transportation becomes greener and more electric. Home ownership, on the other hand, is served less - as steel, cement and other materials become more expensive and home ownership is limited. Personal autonomy might be decreased as centralized electricity grids grow and our dependence on them is maintained or increased. National autonomy might also decrease, as countries such as China dominate the renewables market (Mathews & Tan, 2014). North-South equity might also decrease - both because of the by-products from the production of polysilicon and silicon wafers, which accumulate in water and soil, and through the lack of a global initiative for recycling waste solar panels (Xu et al., 2018). The list could be continued, but these examples are sufficient to illustrate the difference between the Golem vision of technology - with its clear-cut functioning/malfunctioning distinction - and the Leviathan vision of technology as a contract and thus a compromise between different values. In science and technology studies (STS), scholars have generally focused on how difficult it is to get out of an already made contract, generating research on technological lock-ins, technological channelings, technology traps and path dependencies (Aghion et al., 2019; Berkhout, 2002; Frey, 2019). What the Leviathan view adds to that is the normative element involved: technology is seen as a moral choice in a pluralist environment, one that will, now or in the long term, serve certain values and not others.
Additionally, what the Leviathan brings to the fore is the agonism between values and the fact that technological choice sows the seeds for conflict between those that are served and those that are disserved (Popa et al., 2020). Under these conditions, the question of responsibility must be formulated and tackled differently. It cannot be a matter of control. To this I turn in the next section.

The Leviathan: Irresponsibility as Tyranny
A normative approach guided by the image of the Leviathan will naturally focus on the value trade-offs embodied by the technology under study. What does irresponsibility look like from a Leviathan perspective? From a Golem perspective, irresponsibility is recklessness. Normative approaches tend therefore to result in intervention methodologies (for controlling the Golem). This approach is perfectly suited for cases in which the envisioned trade-offs are unproblematic -e.g., we do not want asbestos, PCBs, leaded gasoline, incandescent light bulbs, steam locomotives, cathode ray screens, zeppelins, gunpowder, and those gigantic front wheels on earlier versions of bicycles. But things are not always that simple.
Contrasting with the wide variety of approaches that have taken the image of the Golem as a starting point, there is little research guided by the image of the Leviathan. One notable exception is the set of studies that focus on the problem of moral overload (Van de Poel, 2009; van den Hoven et al., 2012). According to these studies, technologies can place us before a moral overload. The term 'moral overload' is defined as a situation where (i) multiple value requirements have an equal claim on an agent's decision making and (ii) any decision will result in some of these value claims being sub-optimally served. This sub-optimal serving of some value requirements is referred to as moral residue (van den Hoven et al., 2012). This line of research is largely based on moral philosophy that focuses on moral dilemmas (Foot, 1983; MacIntyre, 1990; Marino, 2001; Williams, 1980, 1981), moral conflict (Brink, 1994; Trigg, 1971), and moral struggle (Levi, 1990). I want to focus on this approach first because I believe it captures an essential component of a Leviathan-based normative approach, but also because, as I will argue, it underestimates its complexity.
What is the responsible thing to do in a state of moral overload? Numerous quantitative and qualitative methodologies have been developed for tackling the multi-variable challenge of decision-making in a state of moral overload (van de Poel, 2014). Values can be grouped, prioritized, redefined, recombined and eliminated. However, while it is crucial for the decision agent to know these methods, they do not solve the problem at hand - which is that of acting responsibly under the given conditions. The challenge is not to get out of moral overload as such, but to do so responsibly. How do we know whether a certain grouping, prioritization, redefinition, or recombination is the responsible one? That is the real problem.

An alternative answer is to see innovation itself as the source of solutions to moral overload. This 'resolution by innovation' has been advanced by van den Hoven and colleagues (2012) and van de Poel (2014). An example in this sense is the baby phone. Technology in this case is said to provide a solution to the choice between parental values (looking after your baby) and social ones (spending time with your friends). But the example is, I think, misguided. The trade-off in this case only appears to be resolved if we have already redefined and reduced the relevant values: redefined, because the parent who engages in social activities while stuck to the baby phone is neither fully 'taking care of the baby' nor fully 'engaging with friends'; and reduced, because there are many other values at stake, such as frugality (baby phones can be expensive), safety (baby phones can break down), and autonomy (baby phones become indispensable once you start using them). If these values were not part of the initial situation, then moral residue was not inevitable and so the initial situation was not a real moral overload after all. The important lesson here is that 'responsible technology' in the Leviathan view is not the technological solution that satisfies one value (or one set of aggregate values lumped together, e.g., Christian values, liberal values). This feature of a technology is not enough because we assume as a matter of course that while some values are satisfied, others are dissatisfied. The idea of recklessness is therefore too unrefined to serve us within this normative view. I believe scholars have integrated precisely this idea when they have suggested that behaving responsibly relative to technology should be seen as a more complex form of care (Grinbaum & Groves, 2013; Groves & Adam, 2011; Jonas, 1982, 1984) rather than the straightforward avoidance of risk. I will return to this point at the end of this section.
It is my purpose in what follows to further develop this idea by borrowing insights from the literature on pluralism. The starting point for a notion of irresponsibility in the Leviathan view is in fact the same one that appears in Hobbes's work: the pluralism that characterizes the state of nature. This means that there are many ways of being in the state of nature, and no individual or group of individuals can embody the maximal satisfaction of all relevant values. In the case of technology this translates into the assumption that none of the values that could be embodied in a technology will a priori be more important than the other values. The ranking of values for the creation of a Leviathan is a matter of context. For example, we might want to prioritize safety over freedom when it comes to nuclear waste but prioritize freedom over safety when it comes to built-in speed limiters in personal vehicles. We seek the technologies that benefit the poor, e.g., by discovering affordable ways of making medicine, but we also direct our efforts towards supercars, expensive jewelry, and space tourism. Of course, we can hope, as pluralists usually do, that there is a lower limit to all this - a set of violations that cannot possibly be tolerated regardless of the benefits they might produce somewhere else in society (Crowder, 2020: 43-58). For example, we might learn a lot about cancer if we were allowed to experiment on individuals in a way that violates human decency, health, and happiness. Not even the prospect of great technological advances can legitimize such sacrifices of basic values. Yet beyond these extreme violations, the interplay between values in technology development suggests the absence of any a priori method for ranking values into some universal hierarchy that all Leviathans must embody.
Additionally, we must note that a technology (Leviathan) is not just one compromise but in fact a set of compromises at distinct levels. Take the example of technologies for green electricity mentioned above. At a societal level, the PV solar panel is a choice of one grand challenge, namely phasing out fossil fuels, among a number of other grand challenges that society faces (e.g., poverty, injustice, clean water, etc.); at a strategic level, it is a choice of one strategy for tackling that challenge, namely decarbonization, among a number of other strategies for the same challenge (e.g., degrowth, rationing energy use, etc.); at a resource level, it is the choice of one resource for implementing the strategy (solar energy) among a number of other resources (e.g., wind, nuclear, hydropower); at a technological level, it is the choice of converting solar energy into electricity as opposed to storing it in chemical bonds (e.g., hydrogen, ammonia, etc.). The necessity of compromise appears at all these levels, and thus values are simultaneously served and disserved at all these levels.
When technology is seen as a Leviathan, and thus as an embodied compromise (or set of compromises), irresponsibility cannot simply be associated with the violation of values, since such violation is understood to be inherent in the technology itself. Rather, responsibility can take the form of maintaining the interplay between values such that, across cases, no value will be allowed to continuously and systematically dominate other values. If every innovation act involves values served and values disserved, irresponsibility would be to allow or encourage the systematic dominance of some values over others across technologies, leading to the dominant values becoming something like a 'currency' based on which our serving of the other values is evaluated. Isaiah Berlin famously espoused this point of view in the field of political pluralism, insisting that we must seek the "precarious equilibrium that will prevent the occurrence of desperate situations, of intolerable choices" (Berlin, 2003: 18). Irresponsibility is then a form of imbalance, a refusal of the difficult moderation that is needed to keep the system of values in place and prevent it from collapsing into a unitary, monistic system. Irresponsibility is thus not the dominance of some values over others in one Leviathan or the other. That is the Golem view, and the possibility of technologies failing to serve values according to expected rankings is why the Golem view is still crucial. In the Leviathan view irresponsibility is the continuous and systematic dominance of one or more values over others such that the balance is broken and, in extreme cases, the dominating values become the currency with which other values are evaluated. Here we can use a distinction introduced by Walzer between dominance and tyranny (Walzer, 1983). Dominance is the local prioritizing of one or more values over others, and it is the natural result of values being ranked differently according to context (writ large).
As we have seen, safety might dominate the creation of some Leviathans, liberty might dominate the creation of others, happiness might dominate the creation of yet others, etc. Tyranny, on the other hand, is a redefinition of the game of choice such that one or more values come, in time, to systematically dominate others. The example closest to our experience in modern, democratic societies is the tyranny of money, in which other spheres are corrupted such that money can be exchanged for values such as fame, political office, citizenship, attractiveness, and education. In the case of technology, this could be (counter-factually) envisaged as the dominance of sustainability, which might come to function as the make-or-break criterion for new technologies as well as, in time, the value that can compensate for the lack of other values in the way money might compensate for the lack of political virtues or education. The state of pluralism, governed by what Hobbes saw as the natural equality between individuals and what Walzer refers to as the complex equality between values, becomes the necessary starting point from which the game of innovation, political or technological, can be played responsibly (Walzer, 1983: 312).
One might ask whether tyranny is indeed a species of irresponsibility and, conversely, whether maintaining complex equality is indeed a species of responsibility. If this question is simply one of semantics, I can only point to the fact that the term 'irresponsibility' seems to apply. But if pressed further I would have to admit that I only know this because I speak English. However, I suspect that the question pertains not so much to semantics as it does to moral theory. Is the moral behavior discussed above one for which responsibility is the relevant norm? I believe so. First, notice that the considerations put forward all pertain to the proper wielding of power, which is the traditional domain of responsibility. Furthermore, as an ethical behavior, maintaining complex equality within technological systems is both future-oriented (Groves & Adam, 2011) and exhibits what Jonas referred to as nonreciprocity towards future generations (Jonas, 1982, 1984). To make sure that no value comes to overdominate a certain technological system is to control the power present generations have over that system and thus preserve it for healthy functioning in the future. Tellingly, when Jonas discusses cases of major and irreversible technological choices in the direction of one value or the other, his critique seems to come from a Leviathan viewpoint: "But is it morally unthinkable, for example, to stop biomedical technology from holding down infant mortality in 'underdeveloped countries' with high birthrates, even though the miseries in the wake of overpopulation may be more horrible still? Any number of initially beneficent ventures of grand-scale technology could be adduced to illustrate the dialectics, the two-edged nature, of most of them. The point is that the very blessings and necessities of technology harbor the threat of turning into curses."
(Jonas, 1982: 896-897). The 'two-edged nature' of momentous technological choices, and the ensuing 'dialectics,' is precisely what I have referred to here as the trade-off between values served and values disserved. Jonas is thus looking at biomedical technology as a 'grand-scale' contract that is neither absolutely good nor absolutely bad but simultaneously good and bad in relative degrees.
To sum up: in the Leviathan view, responsibility is an attempt to maintain complex equality within the system such that a plurality of values can be allowed to coexist and enter compromises in present and future technologies. To systematically disallow this coexistence means to exclude alternative visions of the good (the good life, the good future, etc.). Importantly, the Golem view is not thereby invalidated. After all, some technologies can also fail to serve the values in view of which they were built - whether because of negligence or malice - in a context where there are hardly any alternative visions of the good. (Remember Machiavelli's metaphor of tuberculosis.) The two types of irresponsibility therefore complement each other.
For even when the technology does work as it should, and even more when it does not, it can trigger or encourage the destabilization of the "precarious equilibrium" (Berlin) between the many values that animate our social and individual lives.

From the Collingridge dilemma to the Münchhausen trilemma
When a technology is seen as a Golem, the Collingridge dilemma arises because there is an inverse relationship between our knowledge of risks and our ability to intervene. This dilemma does not arise in the Leviathan view because technology does not appear against a background of fixed preferences and immutable ends. In dealing with a Leviathan, terms such as 'risk' and 'benefit' will be relative to one's chosen priority of ends. I want to draw attention to a parallel problem that brings its own challenges in the Leviathan perspective - the Münchhausen trilemma.
The idea that we should avoid systemic imbalances and unbearable choices, what Walzer called tyranny, certainly has an air of egalitarian justice to it, but what is it based upon? What is so wrong about overdominance? Why shouldn't one or more values dominate others? This touches upon the core of the concept of responsibility developed here. And if we provide an answer, e.g., that 'blocked exchanges' degrade the meaning of dominated values (Walzer, 1983), then this answer can in turn be called into question in the same way: why should we prevent blocked exchanges? And even if we accept as a matter of principle that tyranny is undesirable, we are not out of the woods yet. How are we to decide, in specific situations, whether a certain imbalance instantiates the notion of tyranny? How do we decide that one value has been satisfied enough, and that others must now be compensated lest the "precarious equilibrium" be lost?
Laboring in a Leviathan perspective involves becoming embroiled in questions of principle that are essential to the decision-making process yet for which no predefined answers can be provided (save for the most extreme cases of unambiguous, universally recognized evils). This is the burden of working under pluralist conditions. The situation is well known in epistemology under the name of the 'Münchhausen trilemma', after a story in which the fictional Baron of Münchhausen pulls himself and his horse out of a swamp by his own hair (Albert, 1985; Popper, 2002). The three horns of the trilemma are: (1) dogmatism: stipulate a principle that is taken to be self-evident; (2) circularity: engage in circular argumentation where the premises are logically equivalent to the conclusion; and (3) infinite regress: argue ad infinitum by appealing to ever new reasons in defense of the challenged principles.
Notice that the decision agent working within the Golem view has horn (1) as a clear way out, because the line between the desirable and the undesirable is taken to be self-evident: we self-evidently want the Golem to work for us, not against us. That is, we do not want the spaceship to explode, we do not want the fuel additive that pollutes the entire earth, we do not want the political equivalent of tuberculosis. But in the case of the Leviathan such self-evidence is lacking. All three horns thus remain open because moral residue is inevitable (this, I assume, is what Jonas was hinting at when he used the term 'dialectics' in the passage quoted at the end of the previous section).
There are several approaches to the Münchhausen trilemma. Karl Popper's falsificationism is perhaps the best-known attempt to avoid the three horns altogether (Popper, 1992, 2002). In this view, we give up the ambition to justify our claims (and to justify the principles that support these claims, and the principles that in turn support those principles, etc.) and instead treat every decision as a mere conjecture. Applied to new technologies, this would translate into the following approach: start with whatever vision of a better future appears best in the given circumstances and then try hard to refute that initial vision. However, it is doubtful whether this approach aligns well with the practical constraints of working with technology. The most obvious problem is that notions such as 'evidence,' 'refutation' and 'competing conjecture,' while clear in the natural sciences, are quite ambiguous when applied to socio-technical systems. For instance: what would count as a refutation of an energy future that is fully dependent on a centralized electricity grid? What would count as evidence that overpopulation is a greater risk than infant mortality? If such words have a stable meaning in the sciences, and it is questionable that they do, they in any case do not enjoy such stability in practical matters relating to technology.
Pluralists have proposed a different approach: the Münchhausen trilemma must be embraced, rather than solved or circumvented. We must learn to live with it, because in this way we learn to avoid intolerable choices and points of no return: Some among the Great Goods cannot live together. That is a conceptual truth. We are doomed to choose, and every choice may entail an irreparable loss.
[…] If the old perennial belief in the possibility of realizing ultimate harmony is a fallacy […] if we allow that Great Goods can collide, how do we choose between possibilities? What and how much do we sacrifice to what? There is, it seems to me, no clear reply. But the collisions [between the Great Goods], even if they cannot be avoided, can be softened. Claims can be balanced, compromises can be reached: in concrete situations not every claim is of equal force - so much liberty and so much equality; so much for sharp moral condemnation, and so much for understanding a given human situation; so much for the full force of the law, and so much for the prerogative of mercy; for feeding the hungry, clothing the naked, healing the sick, sheltering the homeless. Priorities, never final and absolute, must be established. (Berlin, 2003: 17-18)

If values cannot be reconciled, if 'irreparable loss' (i.e., moral residue) is inevitable, then there is no escaping the Münchhausen trilemma, no way of satisfactorily justifying our choice before others. The strategy is not to avoid the conflict or resolve it in favor of one of the options, but to try to learn from it and maintain it as a beacon of the parties' inevitably limited worldviews.
This answer will not be terribly satisfying to all readers, but I want to suggest that it provides a valuable indication of how to live with the Münchhausen trilemma. In concrete situations, the best we can do from a pluralist viewpoint is to constantly "engage in what are called trade-offs - rules, values, principles must yield to each other in varying degrees in specific situations" (Berlin, 2003: 18). New situations, new trade-offs. We must therefore maintain what has been referred to as the agonism between competing technological options, encouraging the problematization of self-evident technological fixes and the propagation of technological debates, while discouraging the use of discussion stoppers (Popa et al., 2021). It is not guaranteed that in this way we will spot 'unbearable choices' from afar, but at least we remain on our toes even when choices seem indisputable at the moment we make them.

Conclusion
The analysis presented here can be summarized as follows. The Golem and the Leviathan are two critical (normative) lenses through which we can examine both existing and emerging technologies. Seen as a Golem, a technology is essentially good and helpful, since it manages to solve a recognized problem. However, the Golem presents risks; it can turn against us. To identify these risks and be able to avoid them, we have to control the technology, and we do so by intervening in its development and use. Unfortunately, this intervention is troubled by the Collingridge dilemma of control: the more we know, the more expensive it is to act on that knowledge. Seen as a Leviathan, a technology is neither good nor bad; it is simultaneously both. The Leviathan is an ambivalent savior because in the solutions it provides we see both values served and values disserved. This makes intervention, or the readiness to intervene, less of a concern: saving some values would simply flip the distribution and result in the sacrifice of others. As a result, the more important discussion concerns the compromises we should favor, the ends we want to achieve and the ends we are ready to sacrifice. Because of this, the Leviathan gives rise to the Münchhausen trilemma of justification. When the moral space is opened up and a plurality of variants of the good life appears, deciding between them (without any universal, God-given, or self-evident principles) is extremely complicated.
There are surely other images that can guide our critical investigation of technology. But the two discussed here form the two ends of a continuum stretching from monism to pluralism. At one end, the Golem offers a monist vision of our moral universe in which one value (or a set of aggregated values) dominates the rest. At the other end, the Leviathan offers a pluralist vision of our moral space, because no value can systematically dominate others, which makes each choice a difficult one given an inevitable moral residue. Importantly, having established the two ends of the spectrum, we can see that fields such as responsible (research and) innovation, (constructive) technology assessment, risk governance and value-sensitive design have systematically prioritized questions of control over questions of alternative versions of the good. With this in focus, we now have a great number of tools, terminologies, techniques and tactics for intervening in the socio-technical system in order to control or prevent the misbehavior of all kinds of Golems, from nanotechnologies to digital and biomedical technologies. The picture is much less promising, however, on the Leviathan end of the spectrum. There is comparatively little conceptualization and operationalization of how to deal with precarious equilibria and moral residues. The insightful literature around moral overload constitutes an exception, but we have seen that both the available tools for quantitatively and qualitatively weighing options and the solution of innovating our way out are in a real sense missing the point, since they presuppose the very principles that are up for discussion in a Leviathan approach. For example, we currently do not have any methodology for maintaining equilibrium through tit-for-tat strategies, i.e., prioritizing some values for this technology and deprioritizing those values for that technology, with an eye to the overall balance.
More fundamentally still, we do not have any methodology for identifying the values that are affected by specific technologies. We use predefined lists and fish out the relevant ones based on a subjective interpretation of 'what is at stake' for each technology (Brey, 2012, 2017). This is of course an admirable start, but the image of the Leviathan suggests that it is not so much the name of the value that we struggle to identify, nor even its relevance for a certain technological pathway (although these are intricate questions). It is the meaning of the identified values in a specific context. This meaning is crucial both for understanding the resulting trade-off and for making moral decisions about it. As Walzer puts it, "if we understand what [a value] is, what it means to those for whom it is a [value], we understand how, by whom and for what reasons it ought to be distributed" (Walzer, 1983: 8). A more equal distribution of efforts for seeing technologies as Golems and seeing them as Leviathans would give us a better instrumentarium for dealing with the trade-offs of past and future technologies.

Competing interests
The author declares no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.