1 Introduction

The implementation and use of digital technologies such as artificial intelligence (AI) have far-reaching implications for organizations, work processes, and workers [1, 2]. Intelligent technologies such as AI are defined as systems that exhibit intelligent behavior by analyzing their environment and—with some degree of autonomy—taking actions to achieve specific goals. AI-based systems can be purely software based, operating in the virtual world (e.g., voice assistants, image analysis software, search engines, speech and facial recognition systems), or embedded in hardware devices (e.g., advanced robots, autonomous vehicles, drones, or Internet of Things applications) [3]. AI systems are increasingly able to make decisions and take actions (at least partially) autonomously that were previously carried out by humans [4, 5]. This ability has consequences for human–computer interaction (HCI) and for human workers in particular: it affects their decision-making and can ultimately impair their autonomy and control over their work actions. The implementation of digital technologies in organizational work contexts also raises questions regarding workers' involvement, including their specific knowledge of and attitudes toward such systems [6, 7]. Work or job autonomy is "the degree to which the job provides substantial freedom, independence and discretion in scheduling the work and in determining the procedures to be used in carrying it out" [8]. Autonomy also involves "the worker's self-determination, discretion or freedom, inherent in the job, to determine several task elements" [9]. Since the new, or at least changed, working conditions brought about by digital technologies can be accompanied by alienation, this article argues that digitalization raises ethical questions beyond the technology itself, such as the quality and meaning of work, which also need to be considered within digital ethics [10,11,12,13,14].

Digital ethics refers to the "demanding task of … navigating between social rejection and legal prohibition in order to reach solutions that maximize the ethical value of digital innovation to benefit our societies, all of us, and our environments" [15]. Relevant for digital ethics are thus the working conditions of human workers, the division of work between humans and technologies (HCI), work tasks, and the corresponding responsibilities. Of particular interest is that the use of intelligent technologies in work contexts has the potential to increase but also to decrease the autonomy of human workers, and that human workers must maintain control over themselves and their actions in digital work contexts [16, 17]. Otherwise, there is the threat of (further) alienation [18,19,20,21]. Alienation is understood as an individual condition in which a person's relationship to their work is not intact, i.e., the person does not identify with the work and/or is dissatisfied. So far, however, autonomy and worker involvement, and their connection with alienation, have not been considered important factors for HCI when it comes to digital ethics, although these concepts are crucial in digital work contexts. Therefore, we analyze the relationship of HCI, worker involvement, autonomy, and alienation as central aspects of digital ethics. In addition, we broaden the view to elements of worker involvement in organizations that interact with autonomy and impact alienation as well. This leads to the following research questions:

Q1

What role does HCI play in workers' alienation?

Q2

How do worker involvement and autonomy interact with alienation?

The empirical basis is a representative quantitative survey of 1989 workers in Germany, conducted in 2021, which investigates the perceptions of workers affected by (intelligent) digital technologies. The results show that autonomy and/or involvement at work are highly relevant for the use of digital technologies and that this has an impact on the feeling of alienation from work. Since workers with a high level of autonomy and/or worker involvement engage more intensively with digital technologies in their work, these factors are important prerequisites for successfully engaging with digital technologies and ultimately avoiding (further) alienation. While digital ethics currently covers aspects that are important for the use of HCI, there is a blind spot regarding autonomy and worker involvement in relation to HCI and alienation, which is why these factors need to be anchored in a comprehensive conceptualization of digital ethics. Against this background, this paper contributes to (a) showing the relevance of HCI for digital ethics, (b) highlighting the relevance of autonomy and worker involvement in relation to alienation through HCI, and (c) emphasizing the relevance of paying attention to, and ultimately designing, working conditions in the context of digital ethics.

The paper is organized as follows. After the introduction (Sect. 1), we present the state of research and the conceptual background for digital ethics and the concepts of alienation, autonomy, involvement, and HCI in Sect. 2. Subsequently, we describe the data and methods in Sect. 3 and introduce our findings in Sect. 4. Finally, we discuss the results in Sect. 5 and conclude in Sect. 6.

2 State of research and conceptual background

In this section, we summarize the state of research to gain insights concerning the use of AI systems. Qualitative studies in particular have been of interest for a deeper understanding of motivations, sense-making, and learning processes from a user perspective (Sect. 2.1). Next, we lay out the state of research on digital ethics (Sect. 2.2). In Sect. 2.3, we discuss the interplay of alienation and autonomy in the workplace, and in Sect. 2.4, we discuss the interplay of alienation and worker involvement. In Sect. 2.5, we derive our hypotheses.

2.1 AI from a user perspective

Liao et al. [22], for example, show via qualitative data and grounded theory analyses that explainability of AI (XAI) is a crucial condition for the autonomy and involvement of users. XAI is especially important for autonomy in action and decision-making, as disagreeable, unexpected, or unfamiliar output of AI systems needs to be explained before people can assess the AI's judgment and make an informed decision. Even when users' decisions align with the AI's, explanations can help enhance decision confidence or generate hypotheses about causality for follow-up actions. Another important motivation for users is to appropriately evaluate the capability of the AI system, both to determine overall system adoption (e.g., evaluating data quality, transferability to a new context, whether the model logic aligns with domain knowledge) and, at the operational level, to be aware of the system's limitations. Being involved in the planning and implementation process furthermore helps workers adapt their usage or interaction behaviors to better utilize the AI and also helps convince users to invest in the system. Premnath and Arun [23] found, using qualitative interview data, that HR management can improve recruiting using AI. Beyond recruiting, AI helps HR to develop, customize, and track training and development programs. According to their qualitative data, the use of AI in HR leads to cost reductions, more efficient employee deployment, and time savings, and ultimately to more job satisfaction among employees. Their data, however, also point to challenges. HR managers mentioned fear of being replaced and overall skepticism due to a lack of understanding of the AI systems in use. They also highlight the biases embedded in AI systems as a challenge in HR management and state that education in the ethical aspects of AI is crucial for successful implementation and usage. Yu and Yu [24] observe a wide acceptance of AI in education, accompanied by serious concerns about its ethics. They use qualitative and quantitative methods to identify principles of ethics in the use of AI in education: transparency, privacy, justice, non-maleficence, and responsibility. These principles are also in line with the overview by Jobin et al. [109]. With transparency, they refer to the requirement that when AI is used in education or information transfer, the specific parameters, sources, and distribution of responsibility of the AI should be disclosed among AI users, educational researchers, and practitioners. Privacy is closely related to control over personal information. Justice in the use of AI in education is connected with the distribution of hardware and learning resources: AI-based educational systems must be arranged in an unbiased manner, bringing benefits to all students and teachers. The ethical guideline of non-maleficence requires that AI should not bring any harm to human beings in any field, including education. Harm includes, but is not limited to, physical or privacy violations, distrust, lack of skillfulness, or any negative effect on infrastructure, regulation, social welfare, psychological state, emotion, or the economy. Developers must also take into account their responsibility to secure the benefits, advantages, and usefulness of AI technologies. Malodia et al. [25] used a mixed-methods approach to investigate the motivation for using AI-enabled voice assistants.
First, they employed a qualitative study to identify the values relevant to the use of voice assistants: social identity, personification, convenience, perceived usefulness, and perceived playfulness. They found that perceived usefulness and perceived playfulness had a positive effect on both information search and task functions. Users appreciate voice assistants because these experiences satisfy hedonic motives: the greater the perceived playfulness, the more likely users are to use voice assistants. Similarly, users who perceive voice assistants as useful are likely to use them for both information search and task functions. They also found that social identity is positively associated with usefulness and playfulness. Their results further suggest that users are motivated to use voice assistants because these assistants provide them with opportunities to express their social identity in a digitally dominated social environment. As their need for self-expression is satisfied, users develop an emotional connection with their voice assistants. For these reasons, users consider the use of voice assistants to be both useful and enjoyable.

It becomes clear that involvement, autonomy, and overall ethical implications are essential for acceptance and, hence, a successful use of AI systems. In the next subsection, we highlight the state of research on digital ethics with regard to the use of digital technologies such as AI.

2.2 Digital ethics and the use of digital technologies

Digital ethics is important for the design and use of digital technologies. In the course of the digital transformation, digital ethics has developed into a broad field of research [26,27,28]. In this context, research is primarily concerned with the question of what ethical behavior prevails in the development and use of technology (also called "moral machines" [29]), such as the "behavior" of autonomous vehicles when they have to decide between two accident options. Ethical issues of modern, increasingly intelligent technologies are being addressed in a variety of disciplines such as business ethics [30], critical algorithm research [31,32,33], and workplace surveillance [34, 35]. In addition, in times of digital transformation, management practices such as algorithm-based decision-making [36] are being negotiated with a central focus on issues of privacy [37], accountability [38, 39], transparency [40,41,42], power [43, 44], and social control [45,46,47]. Other aspects being negotiated relate to data protection and security, the transparency of technical systems, and the division of tasks between humans and technologies [3, 15]. Moreover, ubiquitous ethical decisions and considerations [24] arise in the development of digital technologies, requiring further interdisciplinary research and recommendations for applying the technologies to business practice—although in business practice, "the opacity enables strategic and ethical missteps such as unfair and discriminatory predictions and the dehumanizing treatment of individuals while simultaneously making it more difficult to define clear accountability for such problems" [37].

The need for further discussion of ethical issues is particularly strong with respect to the ethics of AI systems to which possibilities of (partially) autonomous decision-making and action are attributed [3, 48]. This refers to the concept of HCI, which conceptualizes the division of work tasks, decisions, and actions between humans and technologies. An analytical grid capturing HCI distinguishes between the stage of human information processing being automated and the level of automation at which a function is taken over by the automated system [49]. This takeover can range from complete appropriation of functions by the system, without involving humans, to sole decision-making by humans without technical support. Research shows that clearly dividing tasks between humans and AI is important [50]. Humans need to define tasks, goals, and existing boundaries and then control the technology [51]. However, intelligent technologies are increasingly taking on functions such as information processing, decision-making, and action instruction. Recent research on HCI notes that "digital devices create illusions of agency. There are times when users feel as if they are in control when in fact they are merely responding to stimuli [due to] devices … designed to deliver precisely" the feeling of autonomy [52]. Specifically, the "Ethical Guidelines for Trustworthy AI" of the European Union's Expert Group on AI [3, 53] are highly relevant to digital ethics. The guidelines have been widely received and have even found their way into a new ISO standard [54,55,56].

Like other approaches to digital ethics, these normative AI guidelines of the EU focus on technical consistency, maturity, reliability, and security; on requirements for the quality and security of data in relation to privacy; on transparency of AI systems, including their mode of operation, data sources, and responsibilities; on possible biases and requirements for equal treatment of and by AI systems; and on aspects of positive, sustainable societal developments (economic, environmental, social). This approach emphasizes the need for AI systems to be accountable at all times, for clear responsibilities to be ensured and thus for processes and decisions to be traceable, and for difficulties of or with the system, and their effects, to be identified and addressed. The approach taken by the guidelines addresses aspects of human (action) control as well as the primacy of human interests, decisions, and actions over those of the systems.

In this sense, human supervision and control over work processes should remain guaranteed with and through AI, as these systems are meant to support humans at work as well as the meaningfulness of work [3]. However, the needs of human workers are considered only marginally in the context of digital ethics, most likely within the overarching framework of organizational ethics [57]. Deficits exist with respect to alienation and/or, for example, the consideration of requirements at work other than formal task requirements, which are of great importance in organizations as well as for workers and their work identity [57,58,59]. With the use of digital technologies, however, the requirements for ethics in organizations and the role of ethics in the technical design of systems are changing, especially in the ethical design of work and technology, including implementation processes. Consideration is being given to the increasing use of digital technologies and their effects on work, and to the need to shape HCI humanely. This void in research on digital ethics and in attempts to implement digital ethics in business practice [23] is the starting point for our focus on workers' perception of alienation.

2.3 Alienation and autonomy

Alienation is most relevant in the context of work, where it is tied to factors such as autonomy, including in the context of HCI. Alienation from work is essentially about whether workers experience what they do as meaningful. The experience of meaningfulness is promoted, for example, by workers having a feeling of autonomy and a certain degree of control over their own actions. Furthermore, meaningfulness is based on the degree of self-realization, which is increased in particular by a high degree of autonomy [60]. Autonomy can be promoted through challenging tasks. However, it is indispensable that the organization also provides the resources and tools necessary to master the work task [61]. At the same time, there is increasing evidence that the transformation of work processes offers fewer opportunities for autonomy, co-determination, etc., so that workers find it difficult to perceive their work as meaningful [62]. With the increasing use of digital technology, the positive factors of autonomy are counteracted, since technologies (can) take over more and more parts of work processes. This may pose a threat to workers' autonomy, particularly when workers have little insight into the development, use, and functioning of (digital) technologies and are hardly involved in (decision-making) processes concerning the use of digital technologies and their development.

2.4 Alienation and worker involvement

Following Marx [63], alienation, especially alienation of labor, occurs when workers feel "a sense of loss and unfairness, of powerlessness and loss of dignity, which is prone to provoke resentment, anger and frustration. Capital produces alienation in both its objective and subjective garbs" [64]. Counteracting alienation requires that workers be comprehensively involved in the production process so they can perceive the effect of their actions. It is also important that they can perform varied activities and/or that their contribution to the production process is recognized and valued. This ultimately leads to workers perceiving their work as important or unimportant. If workers can identify with their work, their activities, and the products (and possibly also the organization for which they work), this has a positive effect on reducing alienation. Thus, meaningfulness prevails when there is a connection between the goals pursued at work and private goals. Previous studies on the effects of autonomy and worker involvement are, however, ambivalent. Gardell [65] states, based on a quantitative study, that alienation from work leads to a withdrawal of interest in change processes that might lead to more autonomous tasks and increased worker influence. In a later study, Gardell [66] points out that worker involvement leads to richer job content as well as a more effective use of productive resources. Kalleberg et al. [67] point to a disagreement about the consequences of participation for employee well-being and summarize several studies with inconsistent results. While some researchers highlight the positive effects of involvement and autonomy [68], others point to the intensification of workload and stress through enhanced autonomy and involvement [69]. In their own study, Kalleberg et al. [67] found that involvement and autonomy are linked to more positive than negative outcomes for workers, but they highlight that special training is needed for workers to take advantage of their opportunities and benefit from participating in decision-making. Otherwise, autonomy and involvement can lead to enhanced stress. However, worker involvement tends to increase job satisfaction, especially among higher-skilled personnel and in more demanding jobs, because employees develop a sense of responsibility if they participate in the organization and regard it as a common good, not a source of profit for someone else [70]. Following (digital) ethics, it can be added that workers' involvement, their perspectives, and (ethical) requirements are increasingly important beyond technical development [7, 71,72,73].

2.5 Derivation of hypotheses

Workers' perceived degree of autonomy is decisive for whether they define a workplace situation as positive or negative in relation to meaningfulness. These definitions will shift with the technological change in their work and the subsequently changing degree of worker involvement. From this perspective, the interaction with technology possibly stimulates and enhances workers' involvement [74]. Worker involvement [75, 76] and autonomy [77] are important preconditions for HCI.

Our contribution is to show the effect of involvement and autonomy on alienation, so we derive the following hypotheses.

H1

The intensity of working with HCI increases workers' alienation from work.

Alienation is closely related to workers' perception of autonomy. In digital work contexts, the use of intelligent technologies affects workers' autonomy through the distribution of decisions and actions between AI systems and humans at work. Previous research shows that the use of intelligent technologies such as AI restricts human autonomy and self-determination and entails various dangers or challenges [78], and reveals that—both intentionally and unintentionally—autonomy is sometimes compromised by the use of digital technologies [79]. We, thus, hypothesize:

H2

HCI in combination with autonomy reduces alienation.

H3

HCI in combination with worker involvement reduces alienation.

3 Data and methods

To test the hypotheses, we conducted an online survey from April to July 2021 (see Appendix B for the questionnaire; the study was conducted in German, and the authors translated the items into English where no original English items were available). The sample of n = 1989 respondents is representative of Germany with respect to age, federal state (i.e., the location of the respondent's workplace; Bundesland), and respondent's qualification (see Appendix A). A research agency supported us in recruiting respondents. Participation in the study was voluntary. Respondents were asked by the research agency to fill out the questionnaire online and were monetarily incentivized by the agency. The mean age of the sample is 42.3 years with a standard deviation of 12.9 years. The minimum age was deliberately set at 18 years; the maximum age in the sample is 73 years. Respondents were given the options to define themselves as male, female, or diverse: 47% of respondents chose the option female, 52.7% chose male, and 0.3% chose diverse. Regarding qualification, around one third of the respondents have a professional qualification (Anerkannter Berufsabschluss), around 20% are academics, and 13% do not have a professional qualification. Around 50% of respondents work for large companies with 250 workers or more, 23% work for medium-sized companies with 50 to 249 workers, 17% work for small companies with 10 to 49 workers, and 10% work for very small companies with fewer than 10 workers.

We addressed the concepts of alienation [80], worker involvement [81, 82], and autonomy [83, 84] using well-established scales. Following Breaugh [85], we distinguished autonomy regarding work criteria, methods, and scheduling. Work criteria autonomy is the ability of workers to choose or modify the criteria used for evaluating performance. Work method autonomy refers to workers' freedom in selecting strategies related to tasks, and work schedule autonomy is workers' ability to choose work timings and durations. Since a scale on HCI use at work had been missing so far, we developed one based on Ren and Bao [86]. Their categorization of human–computer interfaces was the theoretical basis for our operationalization of the use of HCI. In our study, we focused on the interfaces between human and technology to gain further insights into the distribution and use of intelligent/digital technologies at work. Especially the effects of different human–machine interfaces on working conditions such as safety, workload, and interface design require more research [87]. For listening and speaking, we addressed natural language command input from human to technology and natural language output from technology to human. For the dimension reading and writing, we refer to natural language writing recognition as command input from human to technology. For the visual dimension, we asked about the use of technologies for object recognition, self-driving/moving machines, technologies that can recognize human motions, and human face recognition. Moreover, we asked about general cooperation with smart automated machines and robots, as well as smart wearables that can be worn as gloves or goggles and thus constitute a direct body-to-technology interface. We evaluated the reliability of this new scale; a Cronbach's α of 0.9369 indicates good internal consistency.
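For transparency, the reliability check can be reproduced with a few lines of Python. The following is a minimal sketch; the file name and item column names are hypothetical placeholders for the nine HCI-use items described above, not the exact labels of our questionnaire.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Hypothetical file and column names for the HCI-use scale.
survey = pd.read_csv("survey_2021.csv")
hci_items = survey[[
    "hci_speech_input", "hci_speech_output", "hci_handwriting_input",
    "hci_object_recognition", "hci_self_moving_machines",
    "hci_motion_recognition", "hci_face_recognition",
    "hci_smart_machines_robots", "hci_wearables",
]].dropna()

print(f"Cronbach's alpha = {cronbach_alpha(hci_items):.4f}")  # reported above: 0.9369
```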

To test our hypotheses, we estimated a structural equation model (SEM). SEM offers various advantages compared with other statistical analyses (e.g., regression, variance analyses) [88]. On the one hand, SEM allows for the simultaneous estimation of several effects. On the other hand, SEM allows for the consideration of manifest as well as latent constructs and, therefore, the testing of hypothetical, theoretical models. In the present study, multiple effect relationships between latent variables are investigated. Therefore, an SEM with latent variables is applied. It consists of a structural model and the measurement models of the exogenous and endogenous variables (see Fig. 1). The structural model comprises the hypothesized effect relationships between the latent variables. The measurement models are used to empirically capture the latent variables. In the present case, reflective indicators are used, which are also assigned error terms (ε) [89].
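To illustrate the model structure, the sketch below specifies a model of this kind in lavaan-style syntax and estimates it with the Python package semopy. The construct and indicator names are illustrative placeholders rather than the exact labels of our questionnaire items, and the specification is a simplified version of the model in Fig. 1.

```python
import pandas as pd
from semopy import Model, calc_stats

# "=~" defines reflective measurement models (latent variable measured by
# indicators with error terms), "~" defines structural regressions, and
# "~~" defines covariances between latent variables.
MODEL_DESC = """
HCIUse      =~ hci1 + hci2 + hci3
Autonomy    =~ aut1 + aut2 + aut3
Involvement =~ inv1 + inv2 + inv3
Alienation  =~ ali1 + ali2 + ali3

Alienation  ~ HCIUse + Autonomy + Involvement + age + gender + job_level
Autonomy    ~ job_level
Involvement ~ job_level

Autonomy ~~ Involvement
"""

data = pd.read_csv("survey_2021.csv")   # hypothetical file name
model = Model(MODEL_DESC)
model.fit(data)                         # maximum likelihood estimation
print(model.inspect(std_est=True))      # standardized parameter estimates
print(calc_stats(model).T)              # chi2, CFI, TLI, RMSEA, ...
```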

Fig. 1 Theoretical model

The structural equation model is controlled for the variables age, gender, and job level (Fig. 2). Job levels were elicited through self-assessment using the following categories: unskilled or semi-skilled job (12.56%), professionally oriented job (54.63%), complex specialist job (19.68%), or highly complex job (13.13%). The control variables show that an above-average number of young males with high job levels use HCI [90, 91].

Fig. 2 Results of the structural equation model with standardized values. CFI = 0.987, TLI = 0.960, SRMR = 0.021, RMSEA = 0.050, Prob > χ2 = 0.0001

As a pre-analysis, we tested for reliability using Cronbach's alpha and for sampling adequacy using the Kaiser–Meyer–Olkin (KMO) measure. Table 1 shows the results of the factor analysis for the four constructs HCI use, alienation, worker involvement, and autonomy. The table shows factor loadings and the sampling adequacy of each item as well as Cronbach's alpha as quality measures. All α values are above the threshold of 0.7 and are, therefore, considered acceptable [92]. The Kaiser–Meyer–Olkin measures are all above the threshold of 0.6 and are, therefore, also considered acceptable [93, 94]. All items' factor loadings are above the threshold of 0.3 and are, hence, interpretable. The items can thus be considered valid indicators of their factors [95].
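These pre-analyses can be reproduced, for example, with the factor_analyzer package. The sketch below reuses the hypothetical hci_items DataFrame from above and would be applied analogously to the other three constructs.

```python
# pip install factor_analyzer
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

# KMO sampling adequacy: per-item values and the overall measure (threshold: 0.6).
kmo_per_item, kmo_overall = calculate_kmo(hci_items)
print(f"overall KMO = {kmo_overall:.3f}")

# One-factor solution for the construct; loadings above 0.3 count as interpretable.
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(hci_items)
for item, loading in zip(hci_items.columns, fa.loadings_.ravel()):
    print(f"{item}: loading = {loading:.3f}")
```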

Table 1 Constructs and measurement items

Figure 2 shows the structural equation model. The goodness-of-fit measures of the model are all acceptable. We chose maximum likelihood parameter estimation over other estimation methods because the data were distributed normally [96]. The model is significant with χ2 (5) = 25.50 and Prob > χ2 = 0.0001. The model shows a good fit with CFI = 0.987 and TLI = 0.960. The RMSEA indicates a good fit with a value of 0.057, and the SRMR also suggests a close model fit with a value of 0.021. The model comprises n = 1619 respondents, with 370 cases omitted due to missing values.

The RMSEA can be described as a "badness-of-fit index" [97]. Accordingly, a high RMSEA indicates that the formulated model can only be poorly approximated to the collected data. The range of values of the RMSEA is between zero and one. To conclude an acceptable model fit, the RMSEA should not exceed 0.08 [98]. The SRMR describes the standardized deviation between the model-implied and the empirical covariance matrix [96]. Accordingly, a value of zero indicates that the formulated model fits the data perfectly. According to Hu and Bentler [99], an acceptable model fit can be assumed if the SRMR has a value below 0.08. Furthermore, the χ2 value divided by the number of degrees of freedom of the formulated model is used as a descriptive quality criterion [100]. Here, too, a smaller value indicates a better model fit. For an acceptable model fit, it is often required that the ratio between the χ2 value and the degrees of freedom does not exceed three [101, 102]. The TLI and the CFI represent incremental fit indices. With the help of these quality criteria, the formulated model is compared with a base model [99]. Generally speaking, it is checked whether the formulated model shows a relevant improvement over the base model. The TLI makes this comparison based on the χ2 difference and the degrees of freedom of the models. In addition, the CFI takes possible biases of the χ2 distribution into account. In the literature, similar thresholds are required for both indices: TLI and CFI should have a minimum value of 0.95 [99].
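For reference, the standard definitions of these indices, as commonly given in the SEM literature, are shown below, with χ²_M and df_M for the formulated model, χ²_B and df_B for the base model, N the sample size, p the number of observed variables, and s and σ̂ the empirical and model-implied covariances:

```latex
\begin{aligned}
\mathrm{RMSEA} &= \sqrt{\frac{\max(\chi^2_M - df_M,\,0)}{df_M\,(N-1)}}\\[4pt]
\mathrm{SRMR}  &= \sqrt{\frac{\sum_{i \le j}\bigl((s_{ij}-\hat{\sigma}_{ij})/(s_{ii}\,s_{jj})^{1/2}\bigr)^2}{p\,(p+1)/2}}\\[4pt]
\mathrm{TLI}   &= \frac{\chi^2_B/df_B - \chi^2_M/df_M}{\chi^2_B/df_B - 1}\\[4pt]
\mathrm{CFI}   &= 1 - \frac{\max(\chi^2_M - df_M,\,0)}{\max(\chi^2_B - df_B,\;\chi^2_M - df_M,\;0)}
\end{aligned}
```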

In the overall picture, the global fit indices, thus, suggest an acceptable model fit.

4 Results

The model shows that an increase of one standard deviation in HCI use increases alienation from work by 0.32 standard deviations. However, the model also shows that worker involvement decreases alienation by 0.34 standard deviations. Autonomy has no meaningful effect in this model, which can be explained by the high inter-correlation of 0.3 between autonomy and worker involvement, which we controlled for in the model. It also becomes visible that job level affects both autonomy and worker involvement: a job level one standard deviation higher leads to 0.16 standard deviations higher autonomy and 0.21 standard deviations higher worker involvement. Our results support H1 (the intensity of working with HCI increases workers' alienation from work). Collaboration with machines (HCI) leads to increased alienation from work (by 0.32 standard deviations) in principle and in the absence of supporting measures such as the guarantee of certain freedoms of action for workers (autonomy) or their involvement in the design, implementation, and use of HCI (worker involvement). However, we can also support H2 (HCI in combination with autonomy reduces alienation) and H3 (HCI in combination with worker involvement reduces alienation). The model shows that autonomy and worker involvement both reduce the alienation caused by HCI. While the effect on alienation stems mostly from worker involvement, there is a strong interrelation between autonomy and worker involvement, which is why both concepts are important for reducing alienation. Based on our study and this model, we can conclude that when HCI is implemented in a participatory work environment, where workers are involved in change processes and given autonomy in HCI use, alienation from work can be reduced. Otherwise, alienation from work increases. We can also state that the positive effects of autonomy and worker involvement are more likely to occur for workers with higher job levels.

5 Discussion

5.1 Summary of findings

This study investigated the relationship between HCI and alienation, with autonomy and worker involvement as central aspects of digital ethics. The analysis of our representative online survey of a German sample in 2021 yielded three novel insights that indicate the relevance of the worker's perspective, specifically workers' perception of alienation, for digital ethics. First, the results showed the high relevance of using digital technology at work for autonomy and alienation, as HCI leads to more alienation. Second, workers perceiving autonomy and worker involvement feel less alienated from work. Third, the positive effects of autonomy and worker involvement are more likely to occur for workers with higher job levels. Concerning our research questions, we can state that (Q1) the use of HCI at work without any supporting measures will lead to increased worker alienation. We can also state that (Q2) worker involvement and autonomy interact strongly and both decrease the effect of HCI on alienation. Thus, to introduce HCI to work ethically, autonomy and worker involvement must be considered.

5.2 Contributions

Prior research has widely neglected to investigate digital ethics from a worker's perspective. Using a representative sample of German workers, we address this gap and show that concepts such as autonomy, worker involvement, and alienation affect the intensity of using HCI. The first contribution of this paper lies in our investigation of HCI as relevant to digital ethics. HCI is relevant to digital ethics because using collaborative technologies influences work processes and alters work tasks [47]. In the course of implementing digital technologies, changes in the responsibilities and autonomy of human workers may occur, since such systems may take over certain tasks in work processes and make decisions [45]. Another relevant issue concerns the opportunities and processes for worker involvement in the design and implementation of digital technologies at work. To date, research has lacked a proper scale to analyze the intensity of using different technologies at work. Drawing on the seminal work of Ren and Bao [86], we developed a scale focusing on different areas of digital technologies' implementation, namely listening and speaking, reading and writing, and the visual dimension.

As a second contribution, this investigation reveals the relevance of autonomy, worker involvement, and alienation in the context of digital ethics. To date, digital ethics has mainly concerned aspects such as technical consistency, maturity, transparency, reliability and security, the requirements for data, data quality and protection, and has focused on fairness, accountability, and the traceability of decisions [3, 15, 48, 103]. Until now, human workers have been excluded from this context, producing a blind spot with regard to workers' needs when using digital technologies. Our results point to the importance of considering aspects such as autonomy and worker involvement, factors that must be included if digital ethics is to provide guidance as to what is right or wrong in using intelligent technologies at work. Especially human autonomy must be considered in the context of digital ethics, since autonomy is important for intrinsic motivation and has several other positive effects on workers' psychological states, helping to avoid alienation. Autonomy [8], worker involvement [64], and possible threats of (further) alienation require attention. If workers have the feeling that they can make decisions and contribute to change processes, this has a positive effect not only on HCI use but also on the perception of alienation from work. In turn, this can lead to greater identification with the work and the organization and, for example, counteract challenges such as employee turnover and a shortage of skilled workers.

Our third contribution lies in the practical implications of our empirical findings for business practice. Previous research has shown that the use of digital technologies affects workers' health [104] and well-being [105]. Consequently, it is important to analyze and explore both the antecedents and the effects of HCI use. For the digital transformation to succeed in organizational practice, workers must have the necessary resources. They must be adequately trained and informed about the technology, and—perhaps more importantly—they must be able to deal autonomously with this information and make decisions. We recommend that technologies and workflows be designed so that workers have at least some influence over what technology is used and when. It is particularly important that workers perceive technology as a support rather than a monitoring or control device [30, 31]. Therefore, the implementation of digital technologies must be considered in conjunction with organizational policies regarding digital ethics, worker involvement in the implementation of digital technologies, and, for example, the possibility of formal and informal worker training [106, 107].

5.3 Limitations and avenues for further research

The results of the study are subject to limitations. First, the study only investigated employees in Germany. Other contractual relationships (e.g., freelancers) and other countries were not considered. Future studies could examine both contractual and cross-cultural differences and test the effects found in this study. Second, this study focuses on workers in organizational contexts. Future studies could broaden the perspective by capturing the views of managers or by considering organizational characteristics. Third, because we do not examine organizational developments, further research, such as organizational case studies, could focus on how the demands of digital ethics manifest themselves in organizational practices and how this manifestation affects workers' perceptions of autonomy, involvement, and alienation. The impact of different training offerings, as a certain form of worker involvement in the digital transformation, could also be differentiated. Additionally, the phenomenon of a digital divide, which became visible in our data along variables such as gender, education, and age, is highly relevant for AI ethics but could not be investigated further in this paper.

6 Conclusion

In conclusion, when considering the worker's perspective in digitalization processes, the concept of alienation is highly relevant for AI ethics. To design and implement HCI in workplaces ethically, supporting measures are mandatory to give workers the required resources. We show that the threat of alienation through the implementation of digital technologies and the resulting HCI in workplaces can be reduced when autonomy and worker involvement are high. Thus, a participatory and autonomous work environment is crucial for a successful digital transformation. In this respect, autonomy and worker involvement are important at all job levels. Our research shows that currently higher job levels, such as complex specialist jobs and highly complex jobs, benefit more from worker involvement and autonomy. It remains a future task to equip all job levels with the resources required to work successfully with digital technologies.