The preceding section outlined the CCVSD approach and its merits as a branch of the VSD methodology.
The CCVSD methodology that van Wynsberghe (2013b) formulated adapts to the values of care associated with the domain in which care robots find themselves, and this adaptability to discrete design programs is a fundamental directive of VSD (Friedman & Hendry, 2019; Friedman & Kahn Jr., 2002; 2003). Given that the artefacts examined in this paper are of a similar nature to one another, using CCVSD as a starting point is logical. CCVSD is a useful methodology in this context because it concentrates on the impact that care robots may have on care practices and gives a normative foundation to VSD. Our primary aim is to develop CCVSD further by providing a more nuanced normative approach. When the ‘care practice’ at stake is impacted by AI-driven systems, an evaluative and justificatory analysis arguably requires reference not only to care ethics, but also to considerations operating at the societal level in which such a practice is embedded. There may also be other actors involved beyond individual caregivers and care-receivers. The proposed shift to VSD-AI4SG is one way to address this issue.
By combining CCVSD with other norms and values specifically adapted to autonomous systems, such as the AI4SG norms (Floridi et al., 2020) and the values of the HLEG AI (High-Level Expert Group on AI, 2019), the design of AI systems can be made care-sensitive so as to avoid doing harm, and be actively directed towards social good, even beyond the deployment domain. Umbrello and van de Poel (2021) argue for the implementation of the UN Sustainable Development Goals (SDGs) as a useful approximation of what can be collectively believed to be valuable societal ends (Umbrello & van de Poel, 2021, p. 1). The following subsections describe their approach, which is then deployed in Sect. 5.
VSD for AI
Various considerations need to be taken into account when reviewing the design of AI systems. There is no longer any doubt as to whether AI systems will have significant and lasting sociocultural, economic and ethical impacts (Baum, 2016; Khakurel et al., 2018), although many of the ethical impacts that AI systems are implicated in are not explicitly accounted for in the original value protocols that VSD scholars have proposed for other ICTs (Friedman & Kahn Jr., 2002; 2003; Umbrello, 2019). The values proposed by the HLEG AI (High-Level Expert Group on AI, 2019) provide an excellent starting point for considering values that are explicitly implicated by AI systems. Protocols of AI-specific values are useful for ensuring a certain level of top-down alignment when engaging in AI design programmes, despite the almost certain need for bottom-up stakeholder engagement and value elicitation to make AI alignment robust and holistic (Umbrello & van de Poel, 2021).
The design of these types of AI systems needs to avoid doing harm and to contribute to social good. Umbrello and van de Poel (2021) propose the UN Sustainable Development Goals (SDGs) as a larger set of values for social good when designing AI systems for human values (discussed further in the next sub-section). The approach they propose is fundamentally predicated on three sources of values (Fig. 4): (1) avoiding harm, which should be construed as boundary conditions or design constraints; (2) doing good, which is primarily associated with designing for social good and can be construed as design requirements and criteria; and (3) reflection of a specific context, which can take the form of avoiding harm and/or doing good. The three sources of value can overlap in many cases, although they require individual attention in the design process.
While the third, contextual source can differ widely depending on a range of relevant factors, Umbrello and van de Poel (2021) argue that it is nonetheless useful to use protocols of values for (1) and (2):
- The first tier of values should be taken into account in any application of AI. To ensure that AI does not do more harm than good, they propose making use of the values articulated by the HLEG AI, translated through the more concrete AI4SG norms into technical design requirements.
- The second tier of values, which they actively seek to promote, is social good through AI. They propose using the SDGs as first-order operationalisations of what it means to contribute to social good through AI. Here the idea is that the SDGs to which an AI application contributes will be specific to that application (Umbrello & van de Poel, 2021, p. 5).
The next sub-section discusses the SDGs (tier 2) and sub-Sect. 4.3 discusses the AI4SG factors (tier 1).
SDGs
In 2018 the United Nations drafted a proposal of objectives to be designed and implemented for a safe and sustainable future, with the ultimate goal of global peace (United Nations, 2018). The foundation of this proposal is built on 17 actionable sustainable development goals (SDGs) (Fig. 5). The goals are presented as being necessarily combinatory and complementary rather than hierarchically ordered or prioritized. The ultimate objective is a synergistic and symbiotic approach to achieving all of these SDGs.
The UN’s underlying philosophical approach frames technologies in an interactional way, arguing that technologies co-vary with their societal and cultural contexts, rather than being purely deterministic artefacts or instrumental tools. This institutional direction allows the SDGs to be tackled holistically rather than haphazardly, and likewise envisions technologies not only as potential problems that exacerbate issues, but as potential solutions (Umbrello & van de Poel, 2021). Umbrello and van de Poel (2021) use the SDGs as a higher-order source of values in VSD for AI systems (alongside others such as those used by the HLEG AI), in line with the global trend towards a set of common goals. Given that technology is a central force in the exacerbation, as well as the amelioration, of the issues that the SDGs are proposed to address, the SDGs provide a useful set of higher-order guidelines for designing technology for human values.
AI for Social Good Factors
Umbrello and van de Poel (2021) argue that the most comprehensive and streamlined summary of the AI4SG factors is that recently produced by Floridi et al. (2020). The seven factors that are particularly relevant for the design of social good in AI are: (1) falsifiability and incremental deployment; (2) safeguards against the manipulation of predictors; (3) receiver-contextualized intervention; (4) receiver-contextualized explanation and transparent purposes; (5) privacy protection and data subject consent; (6) situational fairness; and (7) human-friendly semanticisation (Floridi et al., 2020, p. 3). Although these seven factors are discussed separately, like the SDGs they are nonetheless co-dependent and co-vary with one another, making them inextricably linked in the effort to achieve AI4SG. Umbrello and van de Poel (2021) argue that the
seven factors each relate, in some way, to at least one of the four ethical principles that the EU High-Level Expert Group on AI lays out: respect for human autonomy, prevention of harm, fairness and explicability. This mapping onto the more general values of ethical AI is not insignificant; any divergence from these more general values has potentially deleterious consequences. What the seven factors are meant to do, then, is to specify these higher-order values into more specific ‘norms’ and design requirements (Umbrello & van de Poel, 2021, p. 8) (Fig. 6).
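This factor-to-principle mapping can be made concrete in code. The sketch below lists the seven AI4SG factors and pairs each with the HLEG principle(s) it plausibly specifies; the particular assignments are illustrative assumptions rather than an authoritative reading of Fig. 6, since a single factor can specify more than one principle.

```python
# Illustrative encoding of the seven AI4SG factors (Floridi et al., 2020) and
# their relation to the four HLEG AI principles. The factor names follow the
# source; the factor-to-principle assignments are plausible assumptions only.
AI4SG_FACTORS = {
    1: "falsifiability and incremental deployment",
    2: "safeguards against the manipulation of predictors",
    3: "receiver-contextualized intervention",
    4: "receiver-contextualized explanation and transparent purposes",
    5: "privacy protection and data subject consent",
    6: "situational fairness",
    7: "human-friendly semanticisation",
}

FACTOR_TO_PRINCIPLES = {  # hypothetical mapping, for illustration only
    1: {"prevention of harm"},
    2: {"prevention of harm"},
    3: {"respect for human autonomy", "prevention of harm"},
    4: {"explicability", "respect for human autonomy"},
    5: {"respect for human autonomy", "prevention of harm"},
    6: {"fairness"},
    7: {"respect for human autonomy", "explicability"},
}

def principles_for(factor: int) -> set[str]:
    """Return the HLEG principle(s) that a given AI4SG factor specifies."""
    return FACTOR_TO_PRINCIPLES[factor]
```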
For the purposes of brevity, this paper does not discuss the definitions of the seven factors outlined in Floridi et al. (2020). However, Umbrello and van de Poel (2021) argue that the AI4SG factors function like ‘norms’, following van de Poel’s (2013) characterization of norms as being framed as ‘maximizing’ or ‘minimizing’ certain values or design requirements, thus bridging the gap between abstract values (e.g., HLEG AI, UN SDGs) and concrete design requirements. This is discussed further in the next sub-section.
AI4SG-VSD Process
As outlined in the Introduction, the aim of this paper is to draw on the AI4SG approach to design through VSD as part of a proposal for the future of care robots for the elderly (Umbrello & van de Poel, 2021). In other words, the UN SDGs and HLEG AI principles are used as the aims from which more specific values can be derived for doing good, while the normative AI4SG principles are used as the basis for avoiding harm. Figure 7 illustrates how engineers can initiate investigations in their design program. Although it will differ from one project to another, the proposed framework provides a general outline that practitioners can follow to ensure they touch on the fundamental points proposed in this framework.
The four-stage iterative process that Umbrello and van de Poel (2021) propose is composed of: (1) context, (2) value identification, (3) formulating design requirements, and (4) prototyping.
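The iterative character of this process can be rendered schematically in code. The sketch below is a minimal formalization under stated assumptions: the stage names follow Umbrello and van de Poel (2021), while the function bodies, the data passed between stages, and the stopping condition are placeholders.

```python
# Schematic rendering of the four-stage iterative AI4SG-VSD process (Fig. 7).
# Stage names follow Umbrello and van de Poel (2021); everything else is a
# placeholder for illustration.

def analyse_context(project):
    """Stage 1: empirical investigation of the deployment context and stakeholders."""
    ...

def identify_values(context):
    """Stage 2: gather values to promote (SDGs), respect (HLEG AI),
    and context-specific stakeholder values."""
    ...

def formulate_design_requirements(values):
    """Stage 3: translate values, via norms, into concrete design requirements."""
    ...

def prototype_and_evaluate(requirements):
    """Stage 4: build mock-ups/prototypes; return any emergent values or issues."""
    ...

def ai4sg_vsd(project):
    """Iterate the four stages until prototyping surfaces no new issues."""
    while True:
        context = analyse_context(project)
        values = identify_values(context)
        requirements = formulate_design_requirements(values)
        issues = prototype_and_evaluate(requirements)
        if not issues:  # emergent issues would trigger another full pass
            return requirements
```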
Context
The sociocultural contexts in which a technology is being developed are crucial to its design and deployment. The empirical investigations central to the VSD methodology become particularly useful here in enrolling stakeholders and eliciting their values to ensure a more symbiotic mapping of values to design requirements (Fig. 2) and outcomes.
Value Identification
A starting list of values can be useful in determining a more cohesive and symbiotic set of values. Umbrello and van de Poel (2021) propose three sources of values:
(1) Values that are to be promoted by the design (i.e., the UN SDGs).
(2) Values that should be respected, particularly those relevant to AI: respect for human autonomy, prevention of harm (nonmaleficence), fairness and explicability (i.e., the HLEG AI).
(3) Values that are context-specific and not covered by (1) and (2), but which derive from the analysis of a specific context in the first phase, in particular values held by stakeholders (e.g., emotional attachment in the context of care robot design).
It is during this phase that the values in consideration are interpreted and defined as per the VSD method of conceptual investigation. A normative approach to upholding these values throughout the design process becomes explicit.
Formulating Design Requirements
Using the proposed values of the previous subsection, we consider how these abstract values can be made into more concrete design requirements. The ‘values hierarchy’ (Fig. 2) is one method for carrying out this kind of value-to-design-requirement translation. The different sources of values are translated in different ways. The SDGs, for example, are to be designed for as much as possible throughout the design process, and provide a higher-level aim for design values. The HLEG AI principles, by contrast, are construed as boundary conditions or constraints that provide what can be understood as the minimum necessary conditions for acceptable design. Regarding the context for design, stakeholder elicitations and theoretical value lists are context-based and provide an important means by which any uncovered values are translated into design requirements. VSD has several established methods for undertaking this type of translation from stakeholder elicitations and other empirical investigations of stakeholder values. These methods include: value scenarios (Nathan et al., 2007), value sketches (Woelfer et al., 2011), the value-oriented coding manual (Kahn Jr. et al., 2003), value hierarchies (Longo et al., 2020; van de Poel, 2013), value-oriented mock-up, prototype, or field deployment (Czeskis et al., 2010), value dams and flows (Denning et al., 2013), and the value sensitive action-reflection model (Yoo et al., 2013).
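To fix ideas, a values hierarchy of the kind shown in Fig. 2 can be represented as a simple three-layer tree: one abstract value, the norms that specify it, and the design requirements that satisfy each norm. The sketch below uses a hypothetical privacy hierarchy for a care-robot context; the concrete norms and requirements are illustrative assumptions, not drawn from the figure.

```python
from dataclasses import dataclass, field

@dataclass
class ValuesHierarchy:
    """Three-layer values hierarchy (van de Poel, 2013): an abstract value,
    the norms specifying it, and the design requirements satisfying each norm."""
    value: str
    norms: dict[str, list[str]] = field(default_factory=dict)

# Hypothetical example for a care-robot design context.
privacy = ValuesHierarchy(
    value="privacy",
    norms={
        "minimise collection of personal data": [
            "process video on-device; transmit only exercise labels",
            "retain raw footage no longer than one session",
        ],
        "obtain and honour data subject consent": [
            "explicit opt-in before any recording starts",
            "one-touch control to pause all monitoring",
        ],
    },
)

# The hierarchy can be traced in both directions: downwards as specification
# (value -> norm -> requirement) and upwards as a 'for the sake of' relation.
for norm, requirements in privacy.norms.items():
    for req in requirements:
        print(f"{req!r} satisfies {norm!r}, which specifies {privacy.value!r}")
```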
Prototyping
Directly aligned with ‘value-oriented mock-up, prototype, or field deployment’ that Friedman and Hendry (2019) discuss, prototyping is the fourth stage where design requirements can be tested. More specifically, it is the
development, analysis, and co-design of mock-ups, prototypes and field deployments to scaffold the investigation of value implications of technologies that are yet to be built or widely adopted. Mock-ups, prototypes or field deployments emphasize implications for direct and indirect stakeholders, value tensions, and technology situated in human contexts.
(Friedman & Hendry, 2019, p. 62)
The first AI4SG ‘norm’ echoes this: “AI4SG designers should identify falsifiable requirements and test them in incremental steps from the lab to the ‘outside world’” (Floridi et al., 2020, p. 7). In other words, unforeseen or emergent values may come into play in the post-deployment stage, despite a system aligning with all the requisite design requirements, norms, and values pre-deployment (van de Poel, 2016). If such emergent factors do come into play during this step, another iteration of the four-stage cycle may be needed to integrate and align the design (Umbrello & van de Poel, 2021, p. 15).
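This norm translates naturally into an engineering practice: expressing design requirements as falsifiable, automatically checkable claims that are re-tested at each deployment increment. The sketch below illustrates the idea for an exercise-recognition requirement; the requirement itself, the 95% threshold, and the Session structure are hypothetical assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Session:
    video: object  # recorded footage or extracted features from one field session
    label: str     # the exercise the user actually performed

def recognition_accuracy(predict: Callable[[object], str],
                         sessions: list[Session]) -> float:
    """Fraction of field sessions whose exercise the system identifies correctly."""
    return sum(predict(s.video) == s.label for s in sessions) / len(sessions)

def test_recognition_requirement(predict, field_sessions):
    # Falsifiable claim: the system recognises at least 95% of exercises under
    # field conditions. Failure at any deployment increment is a signal to
    # roll back and return to the design stage rather than scale up further.
    assert recognition_accuracy(predict, field_sessions) >= 0.95
```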
AI4SG Within a Care Context: PHAROS 2.0
In this section, we begin to integrate and further extend VSD within the AI4SG framework by situating it within the care domain. This is done by drawing on previous work on VSD within care practices, and more precisely, work on Care Centered VSD. In order to do that, we provide an example of a specific system that has recently been developed for the assistance of elderly people.
The Physical Assistant Robot System (PHAROS) 2.0 is a socially assistive robot that monitors, evaluates, and advises older adults in performing physical activities and exercises at home (Martinez-Martin et al., 2019). It has been developed to help healthcare professionals evaluate the physical activity of elderly patients with illness outside of a hospital setting, and to stimulate their overall health status (bodily and cognitively). PHAROS 2.0 can help by: providing exercise descriptions, both visually and verbally; recognising the type of exercise being done via visual input; giving feedback to the medical staff; and providing tailored recommendations to its users. All these tasks are accomplished with the use of machine learning techniques, such as Deep Learning (DL), which lets the system learn data features. Specifically, PHAROS 2.0 uses a combination of recurrent neural networks (RNNs) and convolutional neural networks (CNNs) to identify and evaluate the physical exercises of users (Martinez-Martin et al., 2019, 2020). We examine the design of the PHAROS 2.0 prototype using the framework described above (i.e., Fig. 7).
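The cited papers describe the combination of CNNs and RNNs only at this level of generality, so the following is merely a generic sketch of such a pipeline rather than the PHAROS 2.0 architecture itself: a convolutional network encodes each video frame, a recurrent network aggregates the frame sequence, and a classification head predicts the exercise. All layer sizes, the class count, and the input resolution are assumptions.

```python
import torch
import torch.nn as nn

class ExerciseRecogniser(nn.Module):
    """Generic CNN+RNN video classifier of the kind PHAROS 2.0 is described
    as using: a CNN encodes each frame, an LSTM aggregates the frame sequence,
    and a linear head predicts the exercise class. Hyperparameters are
    illustrative assumptions, not the published configuration."""

    def __init__(self, n_exercises: int = 10):
        super().__init__()
        self.cnn = nn.Sequential(  # per-frame feature extractor
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch*time, 64)
        )
        self.rnn = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_exercises)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (hidden, _) = self.rnn(feats)  # last hidden state summarises the clip
        return self.head(hidden[-1])      # (batch, n_exercises) class logits

# Usage: classify a batch of 2 clips, 16 frames each, 64x64 RGB.
logits = ExerciseRecogniser()(torch.randn(2, 16, 3, 64, 64))
```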
Context
As a tool for domiciliary care, PHAROS 2.0 is designed to: relieve the burden on and assist caregivers, homecare assistants, and healthcare professionals; promote more comfortable treatment for elderly people outside of a hospital setting; and assist in the performance of activities of daily living (ADLs). Its deployment is also aligned with the promotion of active ageing—a combination of domains (e.g. familial, social, professional) designed to promote adequate responses to the needs of older adults and keep them active and engaged in social and individual activities (Paúl et al., 2017). For example, the NHS promotes active ageing and has provided a list of physical exercises that older people, or people with cognitive or physical disabilities, can perform at home (see footnote 2). Systems such as PHAROS 2.0 should be designed to balance value tensions and remove the moral overload of prima facie conflicting values (van den Hoven et al., 2012). The prioritization of one value over another is strictly dependent on the context of use, whether a hospital, a home, or a nursing home. For example, the values of privacy and safety may take precedence over human contact and trust in domestic settings, since in such contexts the relation of trust between caregivers and care-receivers is already established and does not need to be prioritized (see also van Wynsberghe, 2013b). Therefore, the development of systems such as PHAROS 2.0 may assist in the resolution of these kinds of value tensions in a way that still reduces health risks (bodily or cognitive) as much as possible.
Value Identification
Values That Are to be Promoted by the Design, in Particular Deriving from the SDGs
The design of PHAROS 2.0 can be viewed as part of a larger network in support of SDG #3, ‘Ensuring healthy lives and promoting well-being at all ages’. In particular, it may promote SDG target #3.8: ‘the achievement of universal health coverage, including financial risk protection, access to quality essential health-care services, and access to safe, effective, quality and affordable essential medicines and vaccines for all’. Accessibility, accuracy, and affordability are central ethical values in the healthcare domain, and are of particular importance to designers, organizations, and industries that aim to develop systems driven by AI and machine learning. PHAROS 2.0 may help to create a more efficient and interconnected care network for individual caregivers and care-receivers, and for the healthcare infrastructure as a whole. Practically speaking, care robots can prevent unnecessary surgical interventions on vulnerable groups and encourage meaningful and personalised care practices. These systems are beginning to develop beyond their original context to include care practices that require a social component. Although PHAROS 2.0 has already been deployed, to meet new design requirements it will be integrated with social interaction skills in the near future (Martinez-Martin et al., 2019, p. 4).
Values That Should be Respected, in Particular Those Values That Have Been Identified in Relation to AI: Respect for Human Autonomy, Prevention of Harm (Nonmaleficence), Fairness and Explicability
This second source comprises the values that are to be respected in the design of AI.
Respect for Human Autonomy
We increasingly interact with systems imbued with autonomous decision-making across different domains. These systems influence our lives in various ways, from shaping the context in which an individual makes a decision, to altering interactions between individuals and assumptions of democratic participation. Human autonomy in this context is the balance between an agent’s retaining as much freedom of choice as possible and the delegation of decision-making to systems. Systems, in turn, should be designed in such a way as to promote autonomy, to avoid cases in which a system’s efficacy falls short in making consistent and coherent decisions on behalf of its human users (Floridi et al., 2018). Regarding care practices in CCVSD, the degree of a system’s autonomy can be evaluated using three of van Wynsberghe’s four fundamental values of care: attentiveness, competence and reciprocity in the tasks. Human autonomy is an important consideration in what van Wynsberghe calls a system’s ‘appearance of moral agency’ (van Wynsberghe, 2016, p. 313). In other words, because such systems are used in inherently ethical contexts, i.e. being responsible for vulnerable groups such as elderly people, they could compromise human autonomy. Thus, the ‘appearance of moral agency’ in PHAROS 2.0 should be sustained and advanced by a contextual analysis of the care practices at stake, with deference to the respective personal choices of caregivers in their assistance and of care-receivers in their treatments. For example, the designers of PHAROS 2.0 have proposed a new version of its recommendation strategy to meaningfully engage its users in the active ageing process. Systems designed to give personal recommendations draw on enriched information provided by the user, which results in a more tailored workout comprising a batch of exercises instead of just one (Martinez-Martin et al., 2019, pp. 8–13).
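The recommendation strategy just described—moving from a single suggested exercise to a tailored batch informed by user-provided information—can be sketched as a simple filter-and-score step. Everything below (the profile fields, the scoring rule, the batch size) is a hypothetical illustration, not the strategy documented for PHAROS 2.0.

```python
from dataclasses import dataclass

@dataclass
class Exercise:
    name: str
    intensity: int     # 1 (gentle) .. 5 (strenuous)
    targets: set[str]  # e.g. {"balance", "strength"}

@dataclass
class UserProfile:
    max_intensity: int  # ceiling agreed with medical staff
    goals: set[str]     # e.g. {"balance"}
    recent: set[str]    # names of recently completed exercises

def recommend_batch(profile: UserProfile, catalogue: list[Exercise],
                    batch_size: int = 3) -> list[Exercise]:
    """Return a small batch of exercises rather than a single one.
    Hypothetical scoring rule: prefer exercises matching the user's goals,
    avoid recent repeats, and never exceed the agreed intensity ceiling."""
    safe = [e for e in catalogue if e.intensity <= profile.max_intensity]
    scored = sorted(
        safe,
        key=lambda e: (len(e.targets & profile.goals), e.name not in profile.recent),
        reverse=True,
    )
    return scored[:batch_size]
```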
Prevention of Harm (Nonmaleficence)
This value seeks to avoid harm, and the risk of harm, by understanding systems’ capabilities and limitations. In the case of PHAROS 2.0, harm may occur due to the way the system has been designed for users. By promoting the well-being and safety of its users, the system risks valuing these over the users’ need for human contact and autonomy. PHAROS 2.0 and other similar systems have raised privacy concerns because of their access to users’ personal information. Systems observe and record care-receivers in order to provide effective care, but this has led to concern about the data practices of collecting, storing, and archiving data, and of monitoring care-receivers. PHAROS 2.0 has several databases with users’ personal information, including their health condition, exercise information, and their caregivers’ information (Martinez-Martin et al., 2019, p. 9). This data collection potentially infringes upon the privacy of care-receivers and the healthcare system more broadly, risking the exposure of privileged information to third parties. In terms of the design of AI-driven systems, what can be discussed and problematized is the quality of the training data and the reliability of the algorithms used for predictions, such as the one used for recommendation in PHAROS 2.0.
Compared to the CC framework, nonmaleficence can be subsumed under the value of competence, which assesses a system’s capabilities and limitations in executing a task. These capabilities may include: safety, efficiency, quality of task execution, force feedback, tactile perception, and others (van Wynsberghe, 2012, p. 111). Nevertheless, values such as efficiency and privacy are obscured in CCVSD, because they are considered as exclusively driven by either consequences or duties, and so lack the ability to take into consideration the overall development of users’ ethical character (van Wynsberghe, 2016). However, we can arguably include these values in the list of moral elements, as they gain increasing relevance in the design and deployment of AI-driven systems.
Fairness
According to Floridi et al. (2018), the value of fairness can be framed as justice and defined in a tripartite way: (1) using AI to correct past wrongs, such as eliminating unfair discrimination; (2) ensuring that the use of AI creates benefits that are shared (or at least sharable); and (3) preventing the creation of new harms, such as the undermining of existing social structures. The value of fairness in care practices refers to the allocation of healthcare resources and services on the basis of objective and fair health-related needs and factors. This should also include the values of accessibility and affordability, which are closely aligned with SDG #3, as noted above (5.2.1).
Explicability
This means that AI systems should be intelligible and transparent, and that there should be at least one human agent who can be considered accountable for how the system works (Floridi et al., 2018). In van Wynsberghe’s works explicability is not explicitly mentioned. However, explicability may be linked to the notion of trust, which she writes is a “hybrid event between the human caregiver and the robot” (van Wynsberghe, 2013a, p. 428). Trust is closely aligned with the value of responsibility (van Wynsberghe, 2012, 2016) insofar as both relate to the capacity to be held accountable and liable, but the idea of trust is never formally introduced or systematically used in the discussion of moral elements in the CC framework. PHAROS 2.0 replaces homecare assistants or specialised instructors in the monitoring and advising of older adults. In the near future it may also replace those actors in providing the elderly with company and helping them in emergency situations. The delegation of tasks to these systems may lead to other scenarios, such as the disappearance of certain types of medical and caregiving professions, and the reallocation of expertise and responsibilities in healthcare systems.
Finally, such systems replace caregivers and can introduce new forms of attentiveness and competence that may lead, over time, to the establishment of trusting bonds. In this scenario, trust is not a bond between two interacting actors, but among multiple actors. These include the healthcare systems, the healthcare staff that are assisted, the third-party providers that implement the systems, and the institutions and policy makers that regulate the systems’ introduction and use. To further explore trust and its link with explicability, the focus of our approach should not be on the reciprocal engagement of the robot compared to the human, but on how the “forum” (van Wynsberghe, 2016) of trust has changed from being associated with the traditional relationship between caregivers and care-receivers to a new and unprecedented one.
Context-Specific Values That Are Not Covered by (5.2.1) and (5.2.2) but Which Derive from the Analysis of the Specific Context in the First Phase, in Particular Values Held by Stakeholders
We have shown how the PHAROS 2.0 system has been developed in response to the ever-increasing elderly population and the need to promote active ageing via a more personalised approach to the elderly population’s overall health status (bodily and cognitively). Many of the values and side effects of PHAROS 2.0’s deployment have been discussed, such as the values of general health and well-being (under 5.2.1) and autonomy, nonmaleficence, fairness as justice, and explicability (under 5.2.2). For example, PHAROS 2.0 may give a user a false sense of security with regard to their general health or well-being, or it may increase the user’s dependency on technological systems to the detriment of their autonomy. It may also lead to unintended discrimination, due to the system’s potential lack of accessibility and affordability, and thus fairness, in its dissemination to potential users. However, other values are less clearly subsumed under the two source values outlined above. A contextual analysis serves to consider such classes of values, which are related to stakeholders’ values and preferences (see 5.1).
One of the possible values at play at the context-specific level may be emotional attachment, which is closely aligned with trust. Current robots are said to be incapable of giving the “real compassion and empathy or understanding” found in human companionship (Sharkey, 2014). From a CCVSD perspective, systems such as PHAROS 2.0 should be designed in a way that promotes the value of responsiveness or reciprocity (van Wynsberghe, 2016) to encourage human autonomy and, potentially, provide the foundation for building a bond of trust between the care-receiver and the care robot. Therefore, at this stage, VSD methodological tools such as envisioning cards (Friedman & Hendry, 2012), which are designed to evoke considerations and discussion, may help in reconstructing stakeholders’ values and in modelling physical human–robot interactions in response to users’ preferences. It is of crucial importance, also for the future development of care practices and their long-term and indirect effects, that context-specific values are thoroughly understood and then translated into design requirements.
Formulating Design Requirements
There is a variety of instruments and methods in the VSD methodology that can be adopted to help designers actualize the necessary design requirements in any given design. As we have demonstrated, the ‘values hierarchy’ (Fig. 2) is particularly useful as a way to illustrate and trace design requirements from norms and values, and vice versa. Figure 8 is one example of how to visualize the translation of higher-level values, through AI4SG norms, into technical design requirements.
Figure 8 visualizes the abstract value of respect for human autonomy. It is translated through various AI4SG norms (3, 5, and 7), which are illustrated in Fig. 6, into more concrete design requirements. Umbrello and van de Poel (2021) construe the AI4SG factors as norms, rather than as abstract values in design, in light of recent work by Floridi et al. (2020), which describes norms as imperatives for designers. It should be noted that the context of use will naturally change any given combination of values, norms and subsequent design requirements. Figure 8 is just one illustration of this process and, given our earlier example of PHAROS 2.0, the value of respect for human autonomy is not necessarily the predominant value in a design. To this end, “there is no exclusive nor exhaustive route for satisfying a value translation” (Umbrello & van de Poel, 2021, p. 19). Both prevention of harm and explicability, for example, overlap with respect for human autonomy, given that they implicate AI4SG norms 5 and 7 respectively (see Fig. 9). As a result, these values mutually co-vary and, in many cases, should be used to operationalize each other. Design requirements translated from the value of explicability, for example, can be used as a route for engaging with and operationalizing the values of situational fairness and prevention of harm.
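This overlap can itself be written down explicitly. The sketch below records, for the three values just mentioned, only the norm numbers relevant to the overlap discussed here (following Figs. 8 and 9); each value would implicate further norms in a fuller mapping, so the entries are illustrative rather than exhaustive.

```python
# Translation routes from HLEG values to AI4SG norms (numbering as in Fig. 6).
# Only the norms relevant to the overlap discussed in the text are listed;
# a full mapping would include more entries per value.
VALUE_TO_NORMS = {
    "respect for human autonomy": {3, 5, 7},
    "prevention of harm": {5},  # overlaps with autonomy via norm 5
    "explicability": {7},       # overlaps with autonomy via norm 7
}

def shared_norms(value_a: str, value_b: str) -> set[int]:
    """Norms through which two values co-vary and can operationalize each other."""
    return VALUE_TO_NORMS[value_a] & VALUE_TO_NORMS[value_b]

print(shared_norms("respect for human autonomy", "prevention of harm"))  # {5}
print(shared_norms("respect for human autonomy", "explicability"))       # {7}
```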
Functionally speaking, the AI4SG norms are adept at avoiding most, though not all, of the harms that may emerge as a consequence of AI systems’ deployment. However, this does not mean that such systems will necessarily contribute positively to social good. “Global beneficence”, that is, contributing to global good, is an inherent part of engaging with higher-level values, like the actual operationalization of the SDGs discussed above. For this reason, our paper has adopted this multi-tiered approach to VSD. By combining values specific to AI and stakeholder values with due consideration of SDG targets, as met by AI4SG norms, concerns around the sanctioning of AI systems that do not respect these central ethical principles can be mitigated.
Prototyping
Prototyping requires building mock-ups based on design requirements (see Sect. 4.4.4). In doing so, the technologies move out of the conceptual space to become imbued with the values of the stakeholders. Wide-ranging testing activities take place to determine whether any design decisions that appeared tractable during development in the design space prove intractable in practice. This is also the stage that determines whether there are any emerging technical or ethical issues that were not foreseen at previous stages of the VSD methodology (van de Poel, 2020). Because care robots are in limited deployment rather than ubiquitous rollout, these systems retain the ability to be recalled from operation and brought back into the design sphere. This means that more iterations of the VSD approach can be undertaken to account for any unforeseen issues that might emerge post-deployment. Unlike the SARS-CoV-2 contact-tracing app, which has obvious incentives for immediate deployment (Umbrello & van de Poel, 2021), the development and deployment of the PHAROS 2.0 robot is not motivated by a similar immediate need. Technologies like PHAROS 2.0 have the advantage of small-scale testing and direct stakeholder mock-ups, which means they are more capable of affirming design values and of avoiding the harms that would inevitably have emerged had they been deployed as part of an emergency rollout.
Prototyping is not strictly a lab-driven activity focused solely on testing technical aspects in a decontextualized setting. On the contrary, a crucial part of the prototyping stage is to observe and evaluate the social and ethical effects of a design from its limited deployment in the field. Prototyping is a form of interactional design that draws on a participatory design approach, in which designers and stakeholders work together in an organisational or context-related setting like that described by Bødker (2015). The PHAROS 2.0 robot is a particularly apt case. Values such as respect for human autonomy can be affirmed through various design decisions, such as integrating automatic receiver recognition, which allows PHAROS 2.0 devices to communicate with multiple agents such as medical staff, homecare professionals, and patients. On the other hand, other values, such as situational fairness, may require in-the-moment insights from stakeholders to understand the more nuanced behaviors that affect the salient design of more abstract values. For this reason, it may be more effectual to begin with small-scale mock-ups and gradually scale them up, ensuring that progressive iterations of the methodology sufficiently account for the changing and emerging values of each deployment in the design (cf. van de Poel, 2020). This type of approach can help designers to discover new values, prompting another iteration of the cycle that might not have been triggered otherwise.