Exploring the roles of trust and social group preference on the legitimacy of algorithmic decision-making vs. human decision-making for allocating COVID-19 vaccinations

In combating the ongoing global health threat of the COVID-19 pandemic, decision-makers have to take actions based on a multitude of relevant health data, with severe potential consequences for the affected patients. Because of their presumed advantages in handling and analyzing vast amounts of data, computer systems for algorithmic decision-making (ADM) are being implemented and substitute for humans in decision-making processes. In this study, we focus on a specific application of ADM in contrast to human decision-making (HDM), namely the allocation of COVID-19 vaccines to the public. In particular, we elaborate on the roles of trust and social group preference in the legitimacy of vaccine allocation. We conducted a survey with a 2 × 2 randomized factorial design among n = 1602 German respondents, in which we varied the decision-making agent (HDM vs. ADM) and the prioritization of a specific social group (teachers vs. prisoners) as design factors. Our findings show that general trust in ADM systems and preference for the vaccination of a specific social group influence the perceived legitimacy of vaccine allocation. However, contrary to our expectations, trust in the agent making the decision did not moderate the link between social group preference and legitimacy. Moreover, the effect was also not moderated by the type of decision-maker (human vs. algorithm). We conclude that trustworthy ADM systems do not necessarily lead to the legitimacy of ADM systems.


Introduction
The spreading novel coronavirus SARS-CoV-2 and the ongoing global COVID-19 pandemic coincide with the worldwide proliferation of computer technology in everyday life. Consequently, computer systems have also been widely regarded as a viable instrument in combating the pandemic (Bragazzi et al. 2020; Calandra and Favareto 2020; Jacob and Lawarée 2020; Malik et al. 2020; Nguyen et al. 2020; Sipior 2020). For instance, aiming to mitigate the loss of life and to find treatments, cures, and vaccines against the virus, the necessary medical research is unthinkable without computers. Beyond their general use as research instruments for medicine and public health, computer systems nowadays are also seen as a potent and helpful tool in tackling the more social issues of a pandemic. As a prime example, systems of automated decision-making (ADM) have been deployed to automatically and fairly prioritize persons for vaccination in order to better coordinate the vaccination of the population against the coronavirus.
Because vaccination prioritization is a hotly debated social issue, and the unreflected use of technology may come with severe social consequences, this implementation of ADM receives particular public scrutiny.
In many cases where ADM systems were deployed, it became quickly apparent that their decisions for prioritization were biased, leading to backlash and outright rejection (Ciesielski, Zierer, and Wetter 2021;Guo and Hao 2020).
Nevertheless, even if ADM systems consistently followed a formally fair algorithm as intended by their makers, the public may still question the algorithmic systems' decisions despite their formal attainment of optimization goals. After all, algorithms may arrive at optimized decisions that correspond to formally correct and fair outcomes but are unintuitive to a lay public, as such decisions may entirely oppose people's social preferences and moral ideas. Negative evaluations of controversial public decision-making pose a general social problem, regardless of whether those decisions are based on human decision-making (HDM) or ADM. Notwithstanding, the question arises whether the type of decision-maker (ADM vs. HDM) influences the decision evaluation.
To shed light on this issue, in this paper, we first ask to what extent ADM is perceived as a viable solution for the distribution of the vaccine and examine the role of trust as an explanatory factor for viability perceptions. Second, we investigate the impact of decisions concerning prioritization in the coronavirus vaccine allocation on perceptions of the legitimacy of the decision. In particular, we examine the consequences for decision legitimacy when decisions are unpreferred. Contrasting such perceptions concerning ADM with a situation where humans decide about prioritization, we also inquire whether the trust in the agent making the decision moderates the supposed relationship between the favorability of a decision and its legitimacy perception and whether the proposed mechanisms differ between the two decision-making agents.
Drawing on a quota sample from a German online access panel, results indicate ambivalence among the German population in the general perception of ADM as a viable tool for disseminating vaccines against the coronavirus. However, higher general trust in ADM systems is positively related to a more favorable assessment of the viability of their use in vaccine distribution. Using a factorial survey design that randomly varies the prioritization of different social groups (prisoners vs. teachers) in vaccine distribution and the agent deciding such prioritization (ADM vs. HDM), results also suggest that decisions that assign a higher priority to unpreferred groups are perceived to be less legitimate. Contrary to the authors' expectations, trust in the agent making the decision did not moderate this relationship, and there is also no difference between ADM and HDM concerning a moderating effect of trust.
As ADM systems may have adverse and especially discriminating consequences, and as the use of ADM systems hinges on widespread public acceptance, the resulting insights into the determinants of public support concerning ADM provide valuable information regarding their implementation. We consequently discuss implications for executives, politicians, and actors from civil society.

ADM Systems in Prioritization of the Coronavirus Vaccine Distribution
The societal distribution of limited goods, such as medical resources and especially vaccines, is a social challenge that warrants research attention (Grover, McClelland, and Furnham 2020; Huseynov, Palma, and Nayga 2020; Ratcliffe 2000). The prioritization of vaccination became a hotly debated public issue as the world faced the threat of a global pandemic in early 2021. The roll-out of the international vaccination program against the coronavirus was confronted with a limited amount of vaccine that needed to be distributed to the population as rapidly and effectively as possible.
As a result, such a vaccine distribution process often relies on many, especially multi-faceted, data points from patients, namely their age, work occupation, or pre-existing health issues, which determine an individual risk status that leads to the decision about the (non-)prioritization of a person. The rule-based distribution then usually relies on technical formulations that structure and evaluate such input data according to pre-determined distribution criteria for assigning vaccines.
The more data points considered and the more sophisticated the allocation formula, the more difficult it becomes for human decision-makers to assess whom to vaccinate first and to establish an order for vaccination. Consequently, computer systems have been deployed to assist in managing this process. Moreover, formalized algorithms and computer systems can be used in organizing the pre-determined vaccination distribution and can thus provide guidance in identifying and implementing better-optimized distributions. In a simulation study "using an age-stratified mathematical model paired with optimization algorithms" (Matrajt et al. 2020, 1), a research group shows how different optimization strategies lead to different recommendations on whom to vaccinate first.
In theory, if one aims for a fine-grained allocation based on extensive data processing, digital tools may better optimize the allocation of vaccines and do so more quickly. Thus, an algorithm may also relieve medical or administrative staff in times of crisis. Consequently, ADM may be seen as a viable solution for allocating coronavirus vaccines - at least when it comes to the bureaucratic perspective of public management and administrative decision-makers (Wirtz and Müller 2018).
In practice, despite great hopes for better outcomes, ADM systems have often not been able to protect what appear to be the most vulnerable groups and have led to unintended and morally questionable decisions. Deployed as a tool to prioritize people for vaccination against the coronavirus, ADM systems, too, have been shown to produce incorrect and biased decisions that have been regarded as morally wrong and unfair.
When an algorithm was tasked with distributing the first batch of vaccines against the coronavirus at the Stanford Medical Center in the US in December 2020, only a few physicians from the front lines were prioritized (Guo and Hao 2020). While not all reasons for this result are entirely public, a report by the MIT Technology Review highlights that the inclusion of employees' age was critical, since it prioritized the oldest and youngest staff. However, according to the report, "frontline workers [. . .] are typically in the middle of the age range" (Guo and Hao 2020). Second, another criticism was that exposure "to patients with covid-19" (Guo and Hao 2020) was not included as a factor. The resulting algorithm's preference for administrators or doctors working from home resulted in a backlash against the ADM system, protests from the hospital's residents, and considerable public attention (Wu and Isaac 2020).
In the state of Bavaria in Germany in early 2021, an algorithm was used to assign vaccination appointments to a pre-defined risk group that consisted of people aged 80 years or older as well as younger persons who were assigned to the respective risk group due to having a high-risk profile, e.g., medical staff (Ciesielski, Zierer, and Wetter 2021). Appointments prioritized people with a higher score based on their age. However, the algorithm assigned a randomly chosen value between 80 and 100 to persons below 80 years of age, giving them a 95% chance of being assigned a value above 80. Consequently, the algorithm discriminated against the younger octogenarians, who were simply assigned their true age and therefore had a lower chance of receiving an appointment than younger people. As a result, only a few 80-year-olds received an appointment for vaccination, causing complaints as well as extra effort and expenses, as the underrepresented group had to be contacted manually.
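The reported scoring logic can be illustrated with a minimal sketch (assuming a uniform random draw between 80 and 100; the function name and structure are ours, not taken from the actual Bavarian system):

```python
import random

def vaccination_score(age: int) -> int:
    """Simplified sketch of the reported scoring: persons aged 80+ keep
    their true age as the score, while eligible younger persons receive
    a uniformly random score between 80 and 100 (inclusive)."""
    if age >= 80:
        return age
    return random.randint(80, 100)  # 21 equally likely values, 80..100

# For a younger risk-group member, 20 of the 21 equally likely values
# exceed 80, i.e. a chance of roughly 95% of outranking a true 80-year-old.
p_above_80 = sum(1 for v in range(80, 101) if v > 80) / 21
```

Because appointments went to the highest scores, an actual 80-year-old (scored exactly 80) was thus outranked by a younger risk-group member about 95% of the time, which is the discrimination described above.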

ADM for the Social Good and its Public Perception
Such anecdotal evidence is in line with recent research that suggests that the implementation of ADM in public administration is far from unproblematic (Hartmann and Wenzelburger 2021). Even the most well-intentioned ADM may falsely discriminate against certain groups, i.e., such systems often violate the "established weighting of relevant ethical concerns in a given context" (Heinrichs 2021, 1). These general concerns regarding discrimination and biases have recently instigated substantial research activity that addresses the social implications of ADM implementation (Crawford et al. 2016).
To better guide the intricate development process of automated computer systems for the 'social good,' Berendt (2019) proposes four questions that need to be asked in advance regarding the means and ends of ADM implementation: "What is the problem [. . .]?, Who defines the problem?, What is the role of knowledge?, and What are important side effects and dynamics?" (p. 44). Accordingly, fighting the pandemic threat of the coronavirus by distributing vaccines is a goal that certainly benefits the social good and, as depicted above, may be tackled using ADM. However, any attempt at appropriately answering Berendt's questions reveals that ADM implementation may prove a complex and intricate task in this regard. Depending on different assumptions and preferences, approaches and results of ADM may vary extensively. For example, depending on preference, the respective solution to the problem of too many infections, or deaths, or too much economic damage could be defined as either "lower case numbers (of certain groups)," or "lower the death rate," or even "ensure a fast return to normal life," or all of the above. The problem can be defined by various stakeholders, e.g., expert commissions, politicians, the media, or the broad public. Then, as part of the knowledge question, one must consider how problems and solutions using ADM are framed by stakeholders via mediated public communication and received and understood by all parties involved, especially the public. Eventually, it is hard to determine important side effects and unintended dynamics well in advance when it comes to ADM.
Consequently, to better guide the implementation process and prevent respective problems regarding the social good, the European Union offers specific guidelines that include the demand to promote trustworthy ADM as a solution to opaque and inaccessible applications. Thus, it becomes clear that public perceptions of ADM are of utmost importance in the intricate implementation process of ADM systems. For that reason, the usage of ADM in combating the coronavirus pandemic serves as a prime example. With a particular focus on the public perception regarding questions of ADM discrimination and trust in the decision-making agent, this raises important research questions that we investigate in this study.
As a result, our paper adds to the pre-existing literature in three important ways. First, it provides novel insights into the public perception of ADM in combating the coronavirus. Second, it sheds light on potential issues of ADM implementation, primarily when resulting decisions are perceived as unpreferable. Third, it addresses the effect of trust in agents making important decisions, contrasting human and automated decision-making.

The Perceived Viability of ADM in the Distribution of Vaccines
Addressing the consequences of implementing ADM systems begins with the general public perception and assessment of the viability of ADM in the distribution of vaccines. In general, expectations concerning the use of ADM systems in decision-making include that decisions will be quicker, more consistent, and overall more robust compared to human decisions, while also adhering to specific distribution formulas (Dawes, Faust, and Meehl 1989; Kaufmann and Wittmann 2016; Kuncel et al. 2013). Accordingly, in recent years, the use of ADM systems in public administration has gained considerable traction (Wirtz and Müller 2018) and - as demonstrated by the two case examples from the US and Germany above - has also been implemented for the distribution of COVID-19 vaccines.
In general, people may be aware of the possibilities of computer systems and their implementation in distribution processes, even if they have not yet heard of specific ADM use cases, whether regarding vaccine distribution or otherwise. For instance, there is a general awareness concerning the widely discussed impact of Artificial Intelligence (AI) on society (Kelley et al. 2019). Strictly speaking, ADM may not necessarily be AI. In terms of public perception, it can still be argued that computer systems that autonomously make decisions are at least associated with AI from a lay perspective (Cave, Coughlan, and Dihal 2019; Liang and Lee 2017). Research shows that many countries generally have rather favorable attitudes towards AI (Kelley et al. 2019; Zhang and Dafoe 2019). However, in specific contexts, consequential decisions of AI may be perceived as threatening (Kieslich, Lünich, and Marcinkowski 2021).
The first question of interest for our research is whether people perceive ADM systems as a viable solution for distributing vaccines against the coronavirus.We thus ask the following research question:

Trust in Algorithms and the Perceived Viability of ADM
ADM systems are often considered 'black boxes' because it is often impossible to make the inner workings of such systems transparent and comprehensible (Ananny and Crawford 2018). They operate with millions of data points and predict outcomes using opaque self-learning algorithms. Most systems are so complex that even developers and researchers sometimes fail to understand how the machine came to a specific conclusion (Burrell 2016; Diakopoulos 2016). Accordingly, the high complexity of such systems may lead to a lack of comprehension, especially among the broad public with little technical knowledge and expertise about the underlying technology (Fine Licht and Fine Licht 2020). Consequently, it may prove challenging to fully comprehend data-driven decisions.
In such situations, trust becomes an essential factor that influences the formation of attitudes and decision-making. A prominent definition of trust - one that is also adopted by AI researchers (e.g., Glikson and Woolley 2020) - is that of Mayer, Davis, and Schoorman (1995), who define trust as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" (p. 712). Thus, given the complexity of the systems, people are often put in situations where they have to rely on the decisions of ADM while not being able to check and verify for themselves whether the decision-making process and the ultimate solution are sound.
Hence, a common goal of researchers and politicians is to create systems that are trustworthy. For instance, the European approach towards AI set by a high-level expert group actively strives for trustworthy AI design that is also applicable to ADM systems (European Commission 2019). According to the EU guidelines, trustworthiness can be achieved through the fulfillment of seven ethical principles and resulting key requirements for trustworthy intelligent systems: human oversight, technical robustness and safety, privacy, transparency, fairness, societal and environmental well-being, and accountability (for an overview of global ethical guidelines, see Jobin, Ienca, and Vayena 2019). The main intention is to strengthen public trust in such systems, which subsequently will lead to acceptance of their implementation.¹ Empirical research shows that trust is a driver of positive opinions about technology acceptance (Hoff and Bashir 2015). Moreover, Shin (2021a) and Shin (2021b) found evidence that trust in algorithms positively influences perceptions of algorithmic performance. Additionally, Shin and Park (2019) report that people who show high trust in algorithms evaluate algorithms more positively in terms of satisfaction and usefulness than people who show lower trust in algorithms. Experimental research by Robinette et al. (2016) suggests that participants followed algorithmic instructions given by a robot in a high-risk situation due to (over)trust, even after seeing it make mistakes. Accordingly, we hypothesize as follows:

H1. Trust in ADM will be positively related to accepting ADM as a viable solution for distributing COVID-19 vaccines.

Social Preferences in the Evaluation of the Distribution Process
However, the general assessment of ADM's viability is only one small piece of the puzzle of the social challenge of vaccine distribution. A more significant issue looms when it comes to the actual decision-making and its results. Even if people perceive decision-making as viable in general, a resulting decision itself is still individually evaluated and may subsequently be questioned for various reasons. After all, even if an ADM system consistently arrives at formally fair decisions, this would not automatically mean that the decisions are generally endorsed. In this regard, ADM decisions are no different from decisions made by humans. For instance, people may still call into question the algorithmic systems' decisions despite their formal attainment of optimization goals: be it that the decisions are unintuitive, incomprehensible, or opposed to an individual's social preferences and moral ideas (Brown et al. 2019; Grgic-Hlaca et al. 2018).
When it comes to the actual distribution of limited public goods (e.g., the distribution of vaccines against the coronavirus), decisions that favor one social group over another may thus prove a problem. The literature on the assessment of distribution problems has repeatedly shown that people exhibit not only material self-interest in their evaluation of decisions but also social preferences. "A person exhibits social preferences if the person not only cares about the material resources allocated to her but also cares about the material resources allocated to relevant reference agents" (Fehr and Fischbacher 2002, C2).
Ultimately, when applying the questions by Berendt (2019) mentioned above to the given case of the allocation of vaccinations, it is likely that different results may occur and, hence, different ADM systems can be developed and deployed. If one aims to identify the persons at the highest risk of catching the coronavirus, particularly vulnerable groups include prisoners and teachers (Burki 2020; Gaffney, Himmelstein, and Woolhandler 2020; Kahn et al. 2020). Thus, an ADM system may easily derive a solution that treats both groups equally or may even prioritize one group over the other.
However, prior research (Fallucchi, Faravelli, and Quercia 2021; McKneally and Sade 2003) has demonstrated that such decisions about the allocation of medical resources in the prioritization of different social groups will be questioned and perceived as illegitimate, as people regard them as objectionable on moral grounds. For instance, several studies found that patients' characteristics and lifestyles influenced public perceptions of whom to treat first concerning organ transplantation. People would allocate significantly less medical treatment to smokers, persons with high alcohol consumption, or persons showing promiscuous behavior (Furnham, Ariffin, and McClelland 2007; Huynh, Furnham, and McClelland 2020; Ubel et al. 2001). Personal life choices can thus lead to the preference of one affected group over another in the eyes of the public.
Consequently, potential decision outcomes of ADM applications need to come into focus, especially those that may be perceived as controversial. In this study, we do not wish to disentangle the specific motivations for a social preference given a particular decision. Our investigation instead focuses on the consequences of decisions that violate social preferences. Despite the best intentions, ADM as well as HDM may frequently result in controversial and unpopular decisions.
Concerning the allocation of scarce medical resources, studies showed that, in the case of the corona pandemic, the public prioritized the treatment of the youngest and the sickest patients, respectively (Grover, McClelland, and Furnham 2020; Huseynov, Palma, and Nayga 2020). Transferred to the application of our study: especially when public sentiment suggests that unfavored groups should not receive any advantages, decisions regarding vaccine distribution that are perceived as unfavorable by the public may be seen as illegitimate. For instance, prisoners are being punished for a crime they committed and are subsequently often stigmatized and disadvantaged (Falk, Walkowitz, and Wirth 2009; Kjelsberg, Skoglund, and Rustad 2007), especially in contrast to teachers, who enjoy high esteem with the majority of the German population (dbb beamtenbund und tarifunion 2020). Consequently, decisions that favor a group with lower social prestige over a group with high social prestige may be publicly questioned, as the needs and merits of the latter group are rated higher than those of the former group - irrespective of the algorithmic conclusions aimed at optimization that drove the decision-making in the first place. Therefore, it is assumed that in cases where decisions favor groups of lower social prestige, the general public's disapproval of early vaccination of the respective group will result in lower legitimacy of the decision. Accordingly, we hypothesize as follows:

H2. The disapproval of early vaccination of a social group will be negatively related to the legitimacy of early vaccination of the respective group.

The Moderating Role of Trust in the Agent Making the Decision
Concerning the importance of trust in the evaluation of ADM discussed above, trust may not only be an explanatory variable when it comes to general perceptions of the viability of ADM applications. Trust may more specifically be a decisive factor in a situation in which people encounter a decision by an agent that a) is not fully comprehensible to them and b) is objectionable to them in its outcome. Trust may then be a deciding factor, as people who show higher trust may still perceive a decision as legitimate even though they do not prefer the outcome. People with lower trust in the agent making the decision, in contrast, lack this buffer and will perceive the unpreferred decision as illegitimate.
Empirical research showed that trust in algorithms moderated the effect of transparency, fairness, and accountability perceptions on satisfaction with an algorithm; for people with a high trust level, the positive relationship between the perception of the ethical principles and satisfaction was stronger than for people with low trust (Shin and Park 2019). Another study by Ye et al. (2019), focusing on the adoption of AI in medicine in China, found that trust in AI and medical staff negatively moderated the effect of perceived usefulness on the intention to use the respective technology. Hence, we hypothesize as follows:

H3. Trust in the agent making the decision will moderate the negative relation between disapproval of early vaccination of a social group and the legitimacy of early vaccination, such that this negative relationship will be weaker when trust in the agent making the decision is higher.

Differences Between Automated Decision-making and Human Decision-making
The general impetus for implementing ADM systems is to arrive at better decisions than human decision-making (König and Wenzelburger 2021). For instance, the use of ADM in public administration is often expected to be superior, namely faster and cheaper, but also more reliable, impartial, and objective than HDM (Wirtz and Müller 2018). However, even if that were the case, the public assessment of important decisions may deviate for various reasons, as already suggested above. Despite the best intentions, decisions by ADM as well as by HDM - while technically correct and optimally aiming at the desired results - may still be negatively perceived.
In this regard, two contrasting strands of the literature are highlighted concerning the acceptance or rejection of algorithms and algorithmic advice, respectively, compared to human judgment: algorithmic aversion (Dietvorst, Simmons, and Massey 2015; Dietvorst and Bharti 2020) and algorithmic appreciation (Logg, Minson, and Moore 2019). Notably, the research objects of those studies are algorithms that cannot be perfect in their predictions, which always come with some degree of uncertainty. Such algorithms are used daily for recommendation purposes or forecasting tasks.
Algorithmic aversion studies mostly argue that algorithms are disfavored compared to humans, even if they perform better. Seminal work was done by Dietvorst and colleagues, who found empirical evidence that people reject algorithms after having seen them perform and make a mistake (Dietvorst, Simmons, and Massey 2015). This finding persisted even when participants directly compared an algorithm that factually made better decisions than a human. In another study, Dietvorst and Bharti (2020) argue that algorithmic aversion is correlated with the uncertainty of a given situation, meaning that a problem cannot be solved deterministically but can only be approximated by a system, e.g., the prediction of stock market prices. The higher the uncertainty of a situation, the more algorithms are rejected. On the other side, Logg, Minson, and Moore (2019) found that laypeople predominantly tend to follow the advice of algorithms more than the advice of non-expert humans. However, this algorithmic appreciation vanished when an expert human gave advice or when participants had to choose between their own prediction and an algorithmic one. These findings are supported by Thurman et al. (2019), who tested algorithmic appreciation for the use case of news recommendation and found that algorithmic recommendation was preferred to expert recommendation.
Thus, several factors seem to play a role in the acceptance or rejection of algorithms, especially when a comparison is drawn to human decision-making. First, the context in which an algorithm is used is important. Studies suggest that the uncertainty of a situation can lead to different degrees of algorithmic acceptance. Second, the role of the human to whom an algorithm is compared plays a crucial role. If human decision-makers or advisers are considered experts, they are mostly preferred over algorithms (even if they make worse decisions). However, this is not true for all contexts.
In our study, we argue that if an ADM system makes such a decision on vaccine distribution, the negative effect of disapproval on legitimacy will be more strongly buffered by trust than in the case of human decision-making. This is because we consider vaccine distribution to be a high-risk situation, in which it has been shown that people show overreliance on algorithmic advice (Robinette et al. 2016). Accordingly, we hypothesize:

H4. The type of agent making the decision moderates the interaction effect of the disapproval of vaccination of a social group and trust in the agent making the decision on the legitimacy of the decision, such that the negative relationship between disapproval of vaccination and legitimacy of early vaccination is weaker for ADM making the decision when trust in ADM is high compared to humans making the decision when trust in humans is high.
Figure 1 shows the conceptual model for the three hypotheses H2, H3, and H4.

Method
To answer the research question and test the hypotheses, we conducted a cross-sectional factorial survey using a questionnaire with standardized response options. We performed the data analysis in R (version 4.0.3) using the packages lavaan (Rosseel 2012) and semTools (Jorgensen et al. 2019). We pre-registered our research question, hypotheses, and the measurement of the variables (https://osf.io/xhvwr).

Procedure and survey design
For screening purposes, respondents first had to indicate some demographic information. Afterward, the respondents answered questions concerning their opinions on the current coronavirus pandemic, especially on the political handling of the corona situation and their opinions on the current state and progress of vaccination. We also included a question asking about a hypothetical vaccination prioritization of different social groups as well as trust in the Standing Commission on Vaccination (STIKO), which is in charge of recommending vaccine prioritization in Germany. Next, after assessing knowledge of artificial intelligence (AI),² participants were given a brief explanation of the term AI. After that, they answered questions regarding their attitudes and opinions on AI. In the following, participants were introduced to the use case - vaccination distribution through an ADM system. Thereby, ADM was explained as a form of AI. Respondents rated their trust in such a system before they were confronted with the experimental condition.
Each participant was presented with one of four possible scenarios, following a 2 × 2 design. Participants were told that (I) either an ADM system or the STIKO (as a human commission making decisions - HDM) set up a vaccination distribution plan with the result that (II) either teachers or prisoners would be prioritized. Following up, respondents rated the output legitimacy of the decision as well as their fairness perception of the distribution process. To conclude, participants were thanked, debriefed, and redirected to the provider of the OAP, where they received monetary compensation for their participation.

Sample
Participants were recruited via the online access panel (OAP) of the market research institute respondi, which is certified according to ISO 26362. To avoid overrepresentation and skew in the sample composition, quotas were used as a stopping rule. The survey field time was between March 26 and April 12, 2021. At this time, vaccination against COVID-19 in Germany was not yet open to everyone but restricted to risk groups predefined by the STIKO.
Altogether, 12,000 respondents from the OAP were invited to participate in the survey. The questionnaire was accessed by 3359 persons, and 3048 persons started answering it. Of those, 1184 persons were screened out because their respective quotas were already exhausted or they did not belong to the investigated population. In total, 1740 respondents completed the questionnaire successfully. The dropout rate was 6.1%, and dropouts were equally distributed over all pages of the questionnaire. Additionally, we filtered out those participants who answered the questionnaire in less than 4 minutes and 30 seconds; in a pre-test, the authors determined this as the minimum amount of time needed to reasonably answer the questionnaire. The final sample consists of 1602 participants.
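The cleaning rule described above can be sketched as a simple filter. The record layout and field names below are hypothetical illustrations, not the actual survey export:

```python
# Minimal sketch of the sample-cleaning step: keep only completed
# interviews answered in at least 4 minutes 30 seconds.
# The record layout (id, duration_s, completed) is hypothetical.
MIN_DURATION_S = 4 * 60 + 30  # 270 s, the pre-test minimum

def clean_sample(records):
    """Drop dropouts and speeders from a list of respondent records."""
    return [r for r in records
            if r["completed"] and r["duration_s"] >= MIN_DURATION_S]

respondents = [
    {"id": 1, "duration_s": 610, "completed": True},
    {"id": 2, "duration_s": 150, "completed": True},   # speeder, dropped
    {"id": 3, "duration_s": 900, "completed": False},  # dropout, dropped
]
print([r["id"] for r in clean_sample(respondents)])  # [1]
```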

Measurement
Approval of early vaccination for a social group. To measure the preference for early vaccination of a social group, respondents were confronted with a list of social groups, including teachers as well as prisoners in the closed penal system (full item wordings can be found in the appendix). For each group, respondents rated how much they would like this specific group to receive prioritization for an early vaccination against the coronavirus on a five-point Likert scale (1 = do not like; 5 = like; −1 = cannot judge). A Welch two-sample t-test shows that an early vaccination of teachers (M = 4.31, SD = 1.04) was significantly preferred over an early vaccination of prisoners in the closed penal system (M = 2.09, SD = 1.30), t(2549.8) = −49.83, p < .001.

General trust in ADM. General trust in ADM was measured via four items on a five-point Likert scale ranging from 1 = do not agree at all to 5 = totally agree. While the underlying construct is called general trust in ADM, the question wordings addressed systems of artificial intelligence. We used this approach because a) we assumed greater familiarity of respondents with the term artificial intelligence compared with automated decision-making, and b) the tested scales used for the assessment of our constructs were adopted from similar research contexts that predominantly referred to AI. The scale was adapted from the measurement of trust in recommender AI proposed by Shin (2021a), and the items read as follows:
• "I trust that AI systems can make correct decisions."
• "I trust the decisions made by AI systems."
• "Decisions made by AI systems are trustworthy."
• "I believe that decisions made by AI systems are reliable."
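The teacher vs. prisoner comparison reported above rests on Welch's two-sample t-test, which can be computed from summary statistics alone. The following is a generic sketch, not the authors' code; using n = 1602 per group is a simplifying assumption, since the number of valid ratings after excluding "cannot judge" answers is not reported here:

```python
import math

def welch_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    computed from per-group mean, standard deviation, and sample size."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Reported summaries: teachers M=4.31, SD=1.04; prisoners M=2.09, SD=1.30.
# The assumed n per group will not reproduce the reported t(2549.8) = -49.83
# exactly, but illustrates the computation.
t, df = welch_from_summary(4.31, 1.04, 1602, 2.09, 1.30, 1602)
```

With equal variances and equal group sizes, the Welch df reduce to the familiar n1 + n2 − 2, which makes the formula easy to sanity-check.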
2 As discussed above, ADM systems may be regarded as a form of Artificial Intelligence (AI). Consequently, some questions in the questionnaire use AI terminology. On the one hand, AI is arguably a more familiar term for the German public than ADM. On the other hand, we aimed to measure some attitudes toward the technology on a broader level.

The four indicators suggest good factorial validity (see table 3).
Viability of ADM for vaccine distribution. To assess the perceived viability of ADM for vaccine distribution, respondents rated three statements on a five-point Likert scale ranging from 1 = do not agree at all to 5 = totally agree:
• "Computer-based decision systems are useful for the vaccine distribution process."
• "I support the use of computer-based decision systems in the vaccine distribution process."
• "The use of computer-based decision systems for vaccine distribution would help solve the problems of vaccine distribution."
The three indicators suggest good factorial validity (see table 3).
Trust in the agent (ADM/HDM) making decisions for vaccine distribution. Trust in the agent for vaccine distribution was measured in the same way as the general trust in ADM described above, except that the word "AI" was replaced with "a computer system in the vaccine distribution" and "the STIKO in the vaccine distribution," respectively.

Before assessing group differences using latent factor modeling, the necessary measurement invariance of the indicators (Putnick and Bornstein 2016) was examined in a stepwise procedure. A first model assessed configural invariance (M1). In a second model (M2), we checked for metric invariance by constraining the factor loadings and comparing the two models using a χ²-difference test. A non-significant χ²-difference test suggests that the model with equality constraints does not fit worse than the model without such constraints, so the respective model parameters can be considered equal. Afterward, a third model (M3) with constrained indicator intercepts was compared to M2, again via a χ²-difference test, to check for scalar invariance. A model that passes this test suggests strong factorial invariance. In a final step, we tested for residual invariance by constraining the residual variances of the indicators in a fourth model (M4).
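The stepwise procedure hinges on the χ²-difference test between nested models, which in R is provided by lavaan's anova()/lavTestLRT(). A minimal, language-agnostic sketch of the test itself follows, using simple numerical integration in place of a library survival function; all fit values are hypothetical:

```python
import math

def chi2_sf(x, df, steps=20000):
    """Survival function P(X > x) of a chi-square distribution with df
    degrees of freedom, via midpoint-rule integration of the density."""
    if x <= 0:
        return 1.0
    norm = 1.0 / (2 ** (df / 2) * math.gamma(df / 2))
    h = x / steps
    cdf = sum(norm * ((i + 0.5) * h) ** (df / 2 - 1) * math.exp(-(i + 0.5) * h / 2)
              for i in range(steps)) * h
    return max(0.0, 1.0 - cdf)

def chi2_diff_test(chisq_constrained, df_constrained, chisq_free, df_free):
    """Delta-chi-square test comparing a constrained (nested) model,
    e.g. the metric model M2, against a freer model, e.g. the configural M1."""
    d_chi = chisq_constrained - chisq_free
    d_df = df_constrained - df_free
    return d_chi, d_df, chi2_sf(d_chi, d_df)

# Hypothetical fit values: constrained chi2 = 30.0 (df = 13) vs. free
# chi2 = 20.0 (df = 10) gives delta-chi2 = 10 on 3 df, p ~ .019, so the
# equality constraints would be rejected in this fictitious case.
d_chi, d_df, p = chi2_diff_test(30.0, 13, 20.0, 10)
```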
Table 1 suggests that there is factorial invariance for the measurement of trust in the agent making decisions for vaccine distribution. The four indicators suggest good factorial validity (see table 3).

Legitimacy of the decision for vaccination prioritization. Legitimacy of the decision was measured with four items on a five-point Likert scale ranging from 1 = do not agree at all to 5 = totally agree. The scale items were adopted from Starke and Lünich (2020) and read as follows:
• "I accept the decision."
• "I agree with the decision."
• "I am satisfied with the decision."
• "I recognize the decision."
A test for measurement invariance suggests strong factorial invariance of the indicators measuring the legitimacy of the decision (see table 2). The four indicators suggest good factorial validity (see table 3). Accordingly, because of this factorial invariance, equality constraints on the factor loadings and indicator intercepts will be imposed between the groups when the latent factors of trust in the agent making the decision and legitimacy of the decision for vaccination prioritization are used in the structural regression models of the analysis.

Results
Viability of ADM for vaccine distribution. Addressing RQ1, we ran a latent factor analysis. In this and the following analyses, effect coding was used for factor scaling, a procedure that "constrains the set of indicator intercepts to sum to zero for each construct and the set of loadings for a given construct to average 1.0" (Little, Slegers, and Card 2006, 62). The resulting factor is scaled like the indicators, which, especially in the case at hand, helps with interpretation. As there are only three indicators, the model is just-identified, leaving no degrees of freedom and thus no assessable model fit.
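A small numerical illustration of why effect coding makes the latent mean directly interpretable: with intercepts summing to zero and loadings averaging 1.0, the model-implied latent mean equals the average of the indicator means. All values below are hypothetical:

```python
# Effect-coding illustration (hypothetical loadings, intercepts, and mean).
# Each indicator mean is tau_j + lambda_j * latent_mean; averaging over
# indicators with sum(tau) = 0 and mean(lambda) = 1 recovers latent_mean.
loadings = [1.10, 0.95, 0.95]      # average to 1.0
intercepts = [0.20, -0.15, -0.05]  # sum to zero
latent_mean = 2.88

indicator_means = [t + l * latent_mean for t, l in zip(intercepts, loadings)]
recovered = sum(indicator_means) / len(indicator_means)
print(round(recovered, 2))  # 2.88
```

This is why the latent factor can be read on the same five-point metric as its Likert indicators.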
Given the measurement on a five-point Likert scale, the mean of the latent factor (M = 2.88, SD = 1.14) suggests that, on average, the respondents were undecided whether ADM is to be seen as a viable solution for the distribution of the vaccine (RQ1). All in all, there was no outright endorsement or rejection of ADM systems for vaccine distribution.
Relationship between the general trust in ADM and the viability of ADM for vaccine distribution. To test H1, which posits a positive relationship between general trust in ADM and the viability of ADM for vaccine distribution, we estimated a structural regression model that included both constructs as latent factors. The model shows good fit (χ²(13) = 28.38, p = .01; RMSEA = .03, CI [.01, .04]; TLI = 1).
The parameter estimate of the regression coefficient suggests a significant and strong effect of general trust in AI on the perceived viability of ADM for vaccine distribution (β = 0.67, SE = 0.03, p < .001, standardized β = 0.56). Accordingly, H1 is accepted.
Relationship between the disapproval of a social group's vaccination prioritization and the legitimacy of early vaccination. To test H2, we estimated a structural regression model. This model includes the factorial survey condition as an independent variable using a dummy-coded predictor ('vaccinate teachers first' = 0 vs. 'vaccinate prisoners first' = 1). The model shows acceptable fit (χ²(17) = 135.38, p < .001; RMSEA = .09, CI [.08, .11]; TLI = .98). The elevated RMSEA may be attributed to the model's few degrees of freedom (Kenny, Kaniskan, and McCoach 2015).
The parameter estimate of the regression coefficient suggests a significant negative effect of medium size of the factorial predictor on the perceived legitimacy of the decision (β = −0.66, SE = 0.06, p < .001, standardized β = −0.28).
That means that the decision to first vaccinate a non-preferred group was judged as less legitimate than the decision to first vaccinate a group whose early vaccination was generally preferred. Accordingly, H2 is accepted.
Moderation effect of trust in the agent making the decision. H3 assumes that trust in the agent making the decision moderates the relationship between preference and decision legitimacy. More specifically, we expected this negative relationship to be weaker when trust in the agent is high.
To test H3 and subsequently H4, we again estimated a structural regression model. This model includes as independent variables the factorial survey condition as a dummy-coded predictor ('vaccinate teachers first' = 0 vs. 'vaccinate prisoners first' = 1) and the trust in the agent making the decision. Additionally, a latent factor serving as the moderator variable was estimated based on indicators calculated as the products of the condition variable and the trust indicators, using the indProd function from the package semTools (Jorgensen et al. 2019).
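The product-indicator idea can be illustrated in a few lines: each trust indicator is multiplied with the dummy-coded condition to form the indicators of the latent interaction term. This sketch only mirrors the basic logic of semTools' indProd (which additionally offers options such as mean-centering) on hypothetical data:

```python
# Sketch of the product-indicator approach for a latent interaction:
# multiply each trust indicator with the dummy-coded condition variable.
# The data values below are hypothetical; mean-centering is omitted.
def product_indicators(condition, trust_items):
    """condition: list of 0/1 dummies per respondent; trust_items: list of
    per-respondent lists of trust indicator scores. Returns the products."""
    return [[c * x for x in items] for c, items in zip(condition, trust_items)]

condition = [0, 1, 1]  # 0 = teachers first, 1 = prisoners first
trust = [[4, 5, 4, 4], [2, 3, 2, 2], [5, 5, 4, 5]]
print(product_indicators(condition, trust))
# [[0, 0, 0, 0], [2, 3, 2, 2], [5, 5, 4, 5]]
```

The resulting product variables then serve as indicators of the moderator factor in the structural model.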
The parameter estimate of the moderator's regression coefficient suggests no significant effect of the moderator variable on the perceived legitimacy of the decision (β = 0.02, SE = 0.05, p = .74, standardized β = 0.01). That means that trust in the agent making the decision had no moderating effect on the relationship between the disapproval of early vaccination of a social group and the legitimacy of a decision for early vaccination. Accordingly, H3 is rejected.
Moderation effect of the agent making the decision. H4 assumes a difference between the conditions in which either ADM or HDM makes the decision about early vaccination: the negative relationship between disapproval of vaccination and legitimacy of early vaccination should be weaker when a highly trusted ADM system makes the decision than when highly trusted humans make the decision.
We estimated a structural regression model identical to the model estimated for H3. However, to assess the difference in the parameter estimates of the moderation, this model compares, in a multigroup analysis, the two groups in which either an ADM system or humans (i.e., the STIKO) decided about vaccine prioritization.
Furthermore, a test for parameter differences suggests that there is no significant difference in the moderating effect of trust between the two conditions (β = 0.1, SE = 0.1, p = .34, standardized β = 0.05). H4 is also rejected.

Discussion
In focusing on AI deployment against one of the biggest current challenges for humanity, namely COVID-19, our study adds to current research on a hotly debated social issue. As AI applications are already in extensive use, and this use will most likely increase over the coming years, it is crucial to understand how the public perceives their widespread deployment, especially in high-risk situations. Here, we mainly focused on the role of trust and its effect on the legitimacy of publicly preferred vs. unpreferred solutions.
The results of the factorial survey suggest that the German public is altogether indifferent about using ADM to allocate vaccination against the coronavirus. Answering our research question, this technological approach to tackling an important current issue is neither rejected nor overly welcomed by German citizens. This insight is in line with research suggesting that while German citizens are generally in favor of AI (bitkom 2018), they often show little interest in AI and specific use cases (Meinungsmonitor Künstliche Intelligenz 2021). Overall, there is low involvement of the German public regarding the actual implementation of ADM systems.
In confirming H1, we see that trust in ADM leads to greater acceptance of the use of ADM in the allocation of coronavirus vaccines. This finding is consistent with previous research showing that trust positively affects the perceived satisfaction and usefulness of ADM systems. Hence, building trust in ADM systems proves to be a fruitful way to legitimize AI use in public administration decision-making. Consequently, it may be assumed that efforts to promote the use of ADM systems in the management of current crises will resonate especially with people who are generally in favor of the respective innovations and who show considerable trust in their beneficial potential.
However, as initially well-received deployments may lead to unpopular and consequence-laden outcomes, we subsequently contrasted vaccine allocation decisions of high public preference with decisions of low public preference. Our findings reveal that ethical considerations might not be in line with, or might even strongly oppose, public preferences. For instance, prisoners are at high risk of coronavirus infection (Burki 2020). However, public sentiment strongly opposes the idea of prioritizing this group. This disapproval of early vaccination for an unpopular social group is negatively related to the legitimacy of early vaccination for the respective group.
These findings correspond to the literature on the allocation of scarce medical resources. Personal characteristics and life choices shape social preferences and influence how the public legitimates the prioritization of respective groups. Prisoners are being punished for a crime they committed, and the social preference for such persons is low in the German population, especially in contrast to teachers. Hence, public preference depends on the specific social characteristics the respective groups possess (Luyten, Tubeuf, and Kessels 2020; Sprengholz et al. 2021). Existing studies on the allocation of scarce resources concerning COVID-19 often do not differentiate between the groups affected but rather focus on the underlying ethical principles on which decisions are based (Huseynov, Palma, and Nayga 2020; Grover, McClelland, and Furnham 2020). Thus, further studies should elaborate on our findings and probe into different preference patterns among the public to mitigate the detrimental effects of unpopular decisions on the acceptance of ADM systems.
In a subsequent step, we asked whether trust moderates the link between social preferences and legitimacy. After all, trusting someone to make the right call may help in accepting an otherwise unpopular decision. Contrary to expectations, in situations of significant discrepancy between expectations and actual outcomes, trust does not moderate the effect of social group preference on legitimacy. Furthermore, there was no difference between ADM and HDM. This finding has far-reaching implications. Based on their respective goal formulation, algorithms are expected to produce accurate and objective results. On the one hand, ADM systems are supposed to arrive at ethically sound decisions (e.g., as required by the high-level expert group of the European Commission 2019). On the other hand, correct and ethically tenable outcomes may not be in line with the opinions of the broad public. As the overarching goal is to build trustworthy AI systems, this points to a potential major conflict, as not all of these demands may be met satisfactorily. Hence, we show that trustworthy AI may not be the solution to every ethical problem in the eyes of the public. As ADM gets integrated into more and more parts of societal life, it is crucial to keep these findings in mind. We are far from a point where people wholeheartedly rely on the decisions of a machine. Legitimacy is first and foremost influenced, at least in our case, by public preferences regarding the solution an agent proposes.

Implications
While we highly welcome ethical AI guidelines, we observe that ADM decisions and demands for trustworthy AI may sometimes not align with, but be in direct conflict with, public perceptions of AI's output. Thus, alongside the development of ethical AI in technical terms, companies and researchers also have to acknowledge the relevance of public opinion. As seen in cases of vaccine distribution in the US (Guo and Hao 2020) and Germany, which often produced faulty, unexpected, and unpopular results, particular outcomes may backfire and fuel public outrage against the use of ADM. Hence, decision-makers must weigh ethical considerations and the public's will in light of probable public resistance against ADM decisions.
As another potential remedy to the detected dilemma, studies on Explainable AI (XAI) highlight the importance of explaining ADM decisions to citizens (for an overview, see Miller 2019). Empirical studies found that explaining ADM decisions leads to greater trust in those systems and, in turn, to greater acceptance (Shin 2021a). Thus, further studies could extend our design and test whether a more or less detailed and comprehensible explanation of a decision outcome would soften the negative effect of social group preference on decision legitimacy. After all, the conflict between ethical decisions and their negative public perception may be mitigated with specific communicative strategies involving convincing explanations that make the inner workings of ADM comprehensible to a lay audience.

Conclusion
The vaccination program against the novel coronavirus currently poses a challenge of global dimensions and, as such, is the subject of a controversial social debate. Decision-makers have to allocate scarce medical resources considering many factors, including practical and moral questions, but also in consideration of public opinion. ADM systems are deployed to support this process by providing suggestions or even autonomously deciding upon the rank order for vaccination.
Our research suggests that, generally, the usage of ADM in combating the coronavirus pandemic is perceived ambivalently by the German public as a viable strategy, and that general trust in AI is an essential driver of such viability perceptions. However, irrespective of actual discrimination by ADM, be it necessary or faulty, we show that as soon as publicly unpreferred decisions regarding the allocation of vaccines are proposed, these decisions are perceived as less legitimate. We subsequently inquired about the moderating role of trust in the agents making decisions on the legitimacy of unpreferred decisions in the allocation process. Contrary to expectations, trust in the agent making the decision did not have the expected mitigating effect. As there was also no difference between human decision-makers and ADM, this raises important questions concerning the expected future deployments of ADM in administrative decision-making.
As there are potentially many ethically correct and preferable yet widely unpopular decisions that ADM systems will propose in the future, we conclude that there are severe challenges for current initiatives promoting the implementation of trustworthy AI.

Table 1: Measurement Invariance Trust