1 Introduction

Artificial Intelligence (AI) technologies confront us with ethical choices at the limit of, or beyond, our knowledge. The technology is moving faster not only than our regulation, but than our imagination. The struggle to create frameworks for responsible artificial intelligence is thoroughly discussed in the growing literature on AI and ethics. While awaiting the EU AI Act, and indeed in parallel to it, where the protection of fundamental rights and the human-centric, ethical, and responsible use of AI technologies is a central ambition [1, 2], ideas for bottom-up initiatives addressing AI and ethics are flourishing. There are calls for making ethics core in AI education [3]; for introducing an oath for AI developers, in line with the Hippocratic oath in medicine [4]; and for establishing codes of conduct binding companies to do good [5, 6]. Further, there are calls for multi-stakeholder coordination through mechanisms such as regulatory sandboxes, introduced by policy makers in a rising number of nations [7], emphasising the importance of exchanging technological and legal knowledge in a new and innovative field. At the system level, there is also a call for the growth of ethical digital ecosystems [7, 8].

All these possible measures are potentially interesting and important. They aim to help us become willing, and able, to develop ethical AI, either by binding us to a set of ideals or by closing knowledge gaps so that we can tell good AI from potentially bad. However, the current discussion about these measures fails to address the elephant in the room: what is, in fact, good and bad AI?

Linking the AI and ethics debate with the notion of uncertainty, a familiar construct in the entrepreneurship literature, we propose that AI creates situations of unknown-unknowns, or what Phan and Wood call true uncertainty [9]. While often referred to as a rare occurrence, we argue that true uncertainty is at the heart of AI development, where choices often need to be made under probabilities that are either unknown or even non-existent [11]. This challenges the idea that ethical AI requires us to seek a single truth, and instead puts values, imagination, and creativity centre stage [10]. Under conditions of true uncertainty, it is not enough to appeal to the good of the heart (what is, in fact, good?), nor to reduce knowledge gaps (what is, in fact, knowledge?); we need to foster moral imagination, to create spaces for imagining and creating, or indeed avoiding, possible futures.

We propose that the existing measure of regulatory sandboxes, frequently positioned as a way for policy makers to mitigate knowledge gaps and enhance ethical compliance, has great promise for doing so; indeed, this might be one of the sandbox's most valuable roles. Using examples from the Norwegian regulatory sandbox for responsible AI, we point to how elements of this collaborative model, namely its focus on openness, societal benefit, and exploration, have the capacity to truly harness moral imagination and, with that, responsible innovation, helping us grasp and prepare for possible futures in the present.

2 Ethical AI under true uncertainty

AI is intimately tied to the notion of uncertainty and unpredictable change. The fundamental problems of organising and acting in ill-structured information environments, and of relating to a future that is opaque and largely unpredictable [12], are central to foundational theories of entrepreneurial action, making this a relevant literature stream to draw on [13,14,15]. The economist Frank H. Knight [16] made an early, and much cited, distinction between risk and uncertainty, emphasising that these conditions are radically different. While decisions under risk are decisions where the decision-maker knows the probabilities of the possible outcomes, decisions under uncertainty are decisions where probabilities are unknown (a priori uninsurable) or indeed non-existent. In a 2020 editorial in Academy of Management Perspectives, Phan and Wood delineated the contexts in which decisions are made into four categories: known-knowns (certainty), known-unknowns (risk), unknown-knowns (Knightian uncertainty), and unknown-unknowns (true uncertainty) [9]. They emphasised that uncertainty consists of two distinct parts, the unknown-knowns and the unknown-unknowns, where only the latter is labelled true uncertainty [9, 11]. This specification gives us a more robust and precise conceptual vocabulary for understanding uncertainty and thereby also for exploring its consequences. While known-unknowns, and to some extent unknown-knowns, are conditions that can eventually be overcome via knowledge acquisition and learning, true uncertainty differs fundamentally in originating in conditions that are inherently indeterminate and “unknowable,” and thus cannot be mitigated [17].
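To make this distinction concrete, consider a minimal formalisation (our illustrative notation, not Knight's or Phan and Wood's own). Under risk, a decision-maker choosing an action $a$ knows both the possible outcomes $x_1, \dots, x_n$ and their probabilities $p_1, \dots, p_n$, and can therefore compute an expected utility,

\[ \mathbb{E}[u(a)] = \sum_{i=1}^{n} p_i \, u\bigl(x_i(a)\bigr), \qquad p_i \ \text{known}, \]

which is precisely what makes risk insurable. Under Knightian uncertainty (the unknown-knowns), the outcomes $x_i$ can still be specified, but the probabilities $p_i$ cannot. Under true uncertainty (the unknown-unknowns), even the outcome set itself cannot be enumerated, so the expectation above is not merely hard to estimate but undefined: there is nothing left to calculate, only futures to imagine.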

Phan and Wood claim that situations of true uncertainty are rare. They argue that, in hindsight, most doomsday scenarios could have been anticipated, had it not been for signals being ignored and decision makers being unprepared or not ready to respond, due to such factors as cognitive myopia, political exigency, and emotional insensitivity [9]. While true uncertainty may indeed be rare in general, we argue that in AI this category is not only relevant; it is highly present. AI triggers complex situations where we meet not only the known-unknowns, which we know we are ignorant of, but also the unknown-unknowns, which we do not know we do not know. The true uncertainty that AI triggers is grounded in a range of different factors: the special kind of notorious and inherent opacity of AI and autonomous systems, where the algorithms' behaviour is not comprehensible to human beings [18, 19]; the speed of development [20]; the disruptiveness of the technology; the way AI applications process and aggregate information in ways humans cannot [21]; and the way AI technology crosses into completely different professions and industries. This makes it almost impossible to predict the development trajectory of the technology or the ethical dilemmas that might follow, let alone to convincingly calculate potential gains or potential harm.

One way out of the conundrum of making the “right decision” under the uncertainty that AI provides is to stop searching for what is right and start searching for how we arrive at that answer in the first place. A way of harnessing ethics under true uncertainty is to create spaces for moral imagination. While the concept of moral imagination lacks a common and succinct definition [22], we will here use the definition of Johnson, focussing on the ability to “imaginatively discern various possibilities for acting within a given situation to envision the potential help and harm that are likely to result from a given action” [23]. In their work, Caldwell and Moberg define three somewhat related components that characterise this moral imagination: first, it requires sensitivity to the moral aspects of decisions; second, it encompasses taking the perspective of those involved in the decision context; and third, it requires considering alternatives beyond the conventional. Indeed, moral imagination entails the ability to imagine new possibilities about what is just, good, and virtuous [22], and it helps the individual think more creatively about what is morally viable [24]. Fostering the ability to develop fresh interpretations and imagine unconventional alternatives enables us to better achieve what is more virtuous and desirable [25], and thereby also to avoid the more dangerous ethical pitfalls. The core question, then, is which conditions might lead to its emergence.

2.1 A regulatory sandbox approach to moral imagination

We propose regulatory sandboxes as one answer to the need for an explorative space harnessing moral imagination. A regulatory sandbox is an example of a “soft law” mechanism in emerging technologies, introduced in highly regulated industries such as finance and energy, or related to specific spheres or regulations, such as AI or the GDPR, with the goal of promoting responsible innovation and/or competition, addressing regulatory barriers to innovation, and advancing regulatory learning [26]. Originating in fintech, the regulatory sandbox approach has now gained traction in the data protection community in Europe, focussing on utilising personal data in innovative and safe ways, with sandboxes established in the UK, France, Iceland, and Norway. While the scope of these sandboxes varies somewhat, the Norwegian sandbox was initially specified as a sandbox for responsible artificial intelligence. AI is also the specific focus of the regulatory sandbox announced by the government of Spain and the European Commission in June 2022. While the project is being financed by the Spanish government, it will be open to other member states and could potentially become a pan-European AI regulatory sandbox [27].

Importantly, the sandbox is not a new regulatory law; in fact, it is not a law at all. It is a project environment where digital innovators connect with regulatory regimes and explore the compliance of their technologies and possible ways forward. As such, regulatory sandboxes clearly answer the need to close knowledge gaps, moving from unknown-knowns to known-knowns, or indeed from unknown-knowns to known-unknowns, reducing regulatory uncertainty. The sandbox mirrors an entrepreneurial process in which entrepreneurs form a belief, test that belief, and respond to the feedback received [28], here with a focus on the product-regulatory fit. While still a recent phenomenon, there is emerging research on fintech sandboxes showing that they may play a vital role in increasing the influx of venture capital into the fintech ecosystem precisely by reducing regulatory uncertainty [29]. Further, research by Hellman, Montag and Vulkan [30] shows strong evidence of positive spillovers from sandbox entry in terms of the subsequent birth and fundraising of high-growth start-ups at the industry level.
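One schematic way to see both the power and the limit of this belief-test-respond cycle (our illustration, not a model drawn from the sandbox literature) is to read it as Bayesian updating,

\[ P(b \mid e) \propto P(e \mid b)\, P(b), \]

where a prior belief $b$ about product-regulatory fit is revised in light of evidence $e$ gathered through the sandbox dialogue. Updating of this kind can move unknown-knowns towards known-knowns, but it operates only over hypotheses that someone has already formulated; by construction, it cannot reach outcomes nobody has yet imagined. That limit is precisely where, as we argue below, moral imagination must take over.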

However, and this is our point here: in addition to its immediate effects of reducing uncertainty, this particular collaborative mechanism also has a unique potential for curating an environment for exploring what Phan and Wood, building on Knight's work, labelled the unknown-unknowns. Beyond closing knowledge gaps, a regulatory sandbox is potentially an environment for exploring the boundaries of ethics, for exploring hypothetical risks and uncertainties, or, more fundamentally, for fostering moral imagination, a topic much less touched upon. In this perspective, the sandbox could be more than a means to an end; it could be an end in itself. The regulatory sandbox can, at its best, allow for a new type of openness and responsiveness that is much called for by a range of intellectuals concerned with the role of policymakers in solving future challenges [31]. It can mitigate concerns about the inability to control social change and govern the potential impacts of innovation [32]. As an alternative to ethically calculating pros and cons, or indeed searching for or enforcing the “right” law, sandboxes can help both regulators and digital new ventures perform the essential task of “reading things that are not yet on the page” [33] and collaboratively imagine alternative possibilities [10], harnessing responsible innovation. As such, this aligns well with the Responsible Research and Innovation framework introduced by Stilgoe, Owen and Macnaghten [34], which proposes a series of reflexive, constructive, and participatory ways of governing innovation, focussing on anticipation, reflexivity, inclusion, and responsiveness. It also aligns well with the ideas of responsive regulation, where regulatory sandboxes can be viewed as an intermediary stage leading towards a code of conduct and, over time, new regulations, ideas developed by, amongst others, Ranchordas [21]. In this perspective, ethical principles are expected outcomes of ethics, not ethics itself.

Regulatory sandboxes embrace a range of different regimes and practices and are, in the managerial literature, still a much under-explored phenomenon [35]; we will therefore use the concrete example of the Norwegian regulatory sandbox for responsible AI to argue our case. This sandbox was established by the Norwegian Data Protection Authority in 2020 as a direct response to a governmental white paper on AI governance. Its overall objective is to promote the development and implementation of ethical and responsible AI from a privacy perspective. Concretely, it is set up as a 6-month programme, conducted as 4–6 workshops with regulators and selected projects, offering dialogue-based and in-depth guidance on concrete cases of AI technologies, helping individual organisations ensure compliance with relevant regulations and develop solutions that take privacy into account [36].

There are three aspects of the way the Norwegian sandbox is set up that are highly relevant to making it apt for ethical exploration. First, the sandbox process is designed for maximum openness. There is a strong focus on knowledge sharing within each project, across projects, and from each project, with project plans being published and insights and examples arising from sandbox projects being shared extensively, both online and in a range of public events. We argue that this focus on openness and transparency has the potential to enhance an ethical mindset and nurture an ethical identity in the organisations involved, as well as to stimulate creativity and innovation amongst both organisations and regulators, stretching knowledgeable minds to the limit. Second, the sandbox is set up with a clear mandate to deliver benefits for the participating organisations, the regulators, and industry and society overall, a focus which influences intake to the sandbox as well as its process. All innovative projects admitted have a societal interest beyond the interest of the individual company or organisation. This focus on the projects' vital societal role can potentially nurture and reinforce empathy and human-centred thinking, influencing both technological development and regulatory practices. Third, the sandbox is a non-binding collaborative space outside the realm of the formal supervision process. As such, it can enable collaborative exploration and imagination not immediately linked to managerial decision making. Regulatory sandboxes thus provide not only a different place for reasoning, outside the realm of hard law; they also provide a different way of reasoning, enabling a relationship between digital innovators and regulators based on dialogue, reflexivity, openness, and learning [36].

3 The challenges of designing for moral imagination

These are early days for sandbox development efforts. The sandbox initiative is still in the making, and for it to play a key role in harnessing moral imagination, a range of challenges need to be addressed. In the literature surrounding sandboxes, a fear has already been expressed that sandboxes may end up “legitimising” or even “risk washing” new venture technologies unless they are consciously designed and critically assessed [37].

One relevant question is: whose moral imagination are we harnessing? Are we attracting, or indeed selecting, the “simple” or the “good”, while the digital innovators in dire need of moral imagination simply ignore the tool or are left behind? One way of addressing this concern is to ensure transparency. In the case of the Norwegian regulatory sandbox for responsible artificial intelligence, applications are assessed and selected by an internal Data Protection Authority committee, in collaboration with an external reference group, to secure social relevance and benefit, and the list of applicants is published on the website; both of these could be seen as mitigating measures.

Another relevant concern is what type of imagination the sandboxes are curating. While we here focus on the regulatory sandbox's role in promoting moral imagination, it is possible to conceive of imaginations going astray. Value creation imagination, not merely moral imagination, is one of the key drivers of entrepreneurial theorising, as entrepreneurs are supposing, conceiving, and considering various new possible futures [38]. Aligning both regulators and entrepreneurs in this quest depends on building a strong common belief in the value of moral imagination, not just for ethics' own sake, or indeed for societal benefit, but for the innovative entrepreneur, who needs to translate fuzzy imaginations into concrete beliefs that can be explored and later formed into a solution that can be launched in a market. While the collective effort in the sandboxes can generate a wider view of the entrepreneurial opportunity space, it will not necessarily succeed in nurturing moral imagination unless this is consciously and carefully designed into the process. Importantly, what we want to avoid is sandboxes becoming their own dangerous flip sides: a “wild west” for AI companies looking to exploit regulatory loopholes; big enterprises influencing regulators in their own interest; or, on the regulator side, regulators using what they have newly learnt to close down knowledge-pushing innovative experiments all the more effectively.

4 Conclusion

The purpose of this commentary has been to call for deeper thought about what might create more ethical AI, as AI is leading us into a world that is increasingly unknown and unknowable. We as humans can build systems more complex than we can manage, complete with behaviours that we cannot predict. Introducing the notion of ethical AI under conditions of true uncertainty, we argue that moral imagination is the remedy to true uncertainty and that a regulatory sandbox is its apt purveyor.

We do not prescribe regulatory sandboxes over hard-line regulations, nor over professional codes of conduct. In fact, these are parallel rather than opposing developments. We think of this as a regulatory landscape in which we need a range of different toolkits, ranging from codes of conduct and “soft” practices to hard law. We propose that in this landscape the sandboxes could play a vital, but currently underplayed, role: not only harnessing regulatory certainty, but harnessing the much-needed moral imagination and, with that, a responsible and imaginative, not merely compliant, innovation.

Responsible innovation is one of the most important design challenges of the twenty-first century. If we want it, we need to design regulatory sandboxes not as pure testing grounds, but as places for exploration and moral imagination. Our sandboxes need to be not puzzles but mazes, where regulators and innovative entrepreneurs can exercise their moral imagination to consider the possible socio-technical implications, imaginary and real, of what they create or what they allow, a core element in securing responsible innovation. While we have pointed to three relevant and promising aspects of the Norwegian AI sandbox, we need empirical research to guide the way forward.

In conclusion, we repeat the call of Alvarez and Porac who, in their introduction to the AMR special topic forum, state that managing under true (or what they call fundamental) uncertainty is a very different kind of animal from managing under more predictable environments, and that there is a great need for more research into what this truly means for strategy, organisation, management, and entrepreneurship [39]. Adding to this call, we state that there is a great need for more empirical research on what true uncertainty means for the development of ethical AI.