Abstract
The ongoing development and adoption of digital technologies such as AI in business brings ethical concerns and challenges. Central topics are the design of digital technologies, the tasks and competencies assigned to them in organizational practice, and their collaboration with humans. Previous guidelines on digital ethics mainly treat technological aspects such as the nondiscriminatory design of AI, its transparency, and technically constrained (distributed) agency as priorities in AI systems, leaving the consideration of the human factor and the implementation of ethical guidelines in organizational practice unclear. We analyze the relationship between human–computer interaction (HCI), autonomy, and worker involvement and its impact on workers' experience of alienation at work. We argue that the consideration of autonomy and worker involvement is crucial for HCI. Based on a quantitative empirical study of 1989 workers in Germany, the analysis shows that when worker involvement is high, the effect of HCI use on alienation decreases. The study results contribute to the understanding of the use of digital technologies with regard to worker involvement, reveal a blind spot in widespread ethical debates about AI, and have practical implications for digital ethics in organizational practice.
1 Introduction
The implementation and use of digital technologies such as artificial intelligence (AI) have far-reaching implications for organizations, work processes, and workers [1, 2]. Intelligent technologies such as AI are defined as systems that exhibit intelligent behavior by analyzing their environment and—with some degree of autonomy—taking actions to achieve specific goals. AI-based systems can be purely software based, operating in the virtual world (e.g., voice assistants, image analysis software, search engines, speech and facial recognition systems) or embedded in hardware devices (e.g., advanced robots, autonomous vehicles, drones, or Internet of Things applications) [3]. AI systems are increasingly able to make decisions and take actions (at least partially) autonomously that were previously completed by humans [4, 5]. This ability has consequences for human–computer interaction (HCI) and for human workers in particular: it affects their decision-making and can ultimately impair their autonomy and control over their work actions. The implementation of digital technologies in organizational work contexts also raises questions regarding workers' involvement, including their specific knowledge and attitudes toward such systems [6, 7]. Work or job autonomy is “the degree to which the job provides substantial freedom, independence and discretion in scheduling the work and in determining the procedures to be used in carrying it out” [8]. Autonomy also involves “the worker’s self-determination, discretion or freedom, inherent in the job, to determine several task elements” [9]. As new or at least changed working conditions due to digital technologies can be accompanied by alienation, this article argues that digitalization raises ethical questions beyond technology, such as the quality and meaning of work, which also need to be discussed and considered with regard to digital ethics [10,11,12,13,14].
Digital ethics refer to the “demanding task of … navigating between social rejection and legal prohibition in order to reach solutions that maximize the ethical value of digital innovation to benefit our societies, all of us, and our environments” [15]. Thus, the working conditions of human workers, the division of work between humans and machines (HCI), work tasks, and the corresponding responsibilities are all relevant for digital ethics. Of particular interest is that the use of intelligent technologies in work contexts has the potential to increase but also to decrease the autonomy of human workers, and that human workers must maintain control over themselves and their actions in digital work contexts [16, 17]. Otherwise, there is the threat of (further) alienation [18,19,20,21]. Alienation is understood as an individual condition in which the relationship of a person to their work is not intact, i.e., the person does not identify with the work and/or is dissatisfied. However, so far, autonomy, worker involvement, and their connection with alienation have not been considered important factors for HCI when it comes to digital ethics, although these concepts are crucial in digital work contexts. Therefore, we analyze the relationship of HCI, worker involvement, autonomy, and alienation as central aspects of digital ethics. In addition, we broaden the view to elements of worker involvement in organizations that interact with autonomy and also impact alienation. This leads to the following research questions:
Q1
What role does HCI play in workers' alienation?
Q2
How do worker involvement and autonomy interact with alienation?
The empirical basis is a representative quantitative survey of 1989 workers in Germany, which was conducted in 2021 and investigates the perceptions of workers affected by (intelligent) digital technologies. The results show that autonomy and/or involvement at work is highly relevant to the use of digital technologies and that this has an impact on the feeling of alienation from work. Since workers with a high level of autonomy and/or worker involvement engage more intensively with digital technologies in their work, those factors are important prerequisites for successfully engaging with digital technologies and ultimately avoiding (further) alienation. While digital ethics currently covers aspects that are important for the use of HCI, there is a blind spot regarding autonomy, worker involvement, HCI, and alienation, which is why these factors need to be anchored in a comprehensive conceptualization of digital ethics. Against this background, this paper contributes to (a) showing the relevance of HCI for digital ethics, (b) highlighting the relevance of autonomy and workers’ involvement in relation to alienation through HCI, and (c) emphasizing the relevance of paying attention to and ultimately designing working conditions in the context of digital ethics.
The paper is organized as follows. After the introduction (Sect. 1), we present the state of research and a conceptual background for digital ethics and the concepts of alienation, autonomy as well as involvement, and HCI in Sect. 2. Subsequently, we describe the data and methods in Sect. 3 and introduce our findings in Sect. 4. Finally, we discuss the results in Sect. 5 and conclude in Sect. 6.
2 State of research and conceptual background
In this section, we summarize the state of research on the use of AI systems. Qualitative studies in particular have been of interest for a deeper understanding of motivations, sense-making, and learning processes from a user perspective (Sect. 2.1). Next, we lay out the state of research on digital ethics (Sect. 2.2). In Sect. 2.3, we discuss the interplay of alienation and autonomy in the workplace, and in Sect. 2.4, the interplay of alienation and worker involvement. In Sect. 2.5, we derive our hypotheses.
2.1 AI from a user perspective
Liao et al. [22] show via qualitative data and grounded theory analyses that explainability of AI (XAI) is a crucial condition for the autonomy and involvement of users. XAI is especially important for autonomy in action and decision-making, as disagreeable, unexpected, or unfamiliar output of AI systems needs explanation before people can assess the AI’s judgment and make an informed decision. Even when users’ decisions align with the AI’s, explanations can help enhance decision confidence or generate hypotheses about causality for follow-up actions. Another important motivation for users is to appropriately evaluate the capability of the AI system, both to determine overall system adoption (e.g., evaluating data quality, transferability to a new context, and whether the model logic aligns with domain knowledge) and, at the operational level, to be aware of the system’s limitations. Being involved in the planning and implementation process furthermore helps workers adapt their usage or interaction behaviors to better utilize the AI and also convinces users to invest in the system. Premnath and Arun [23] found, using qualitative interview data, that HR management can improve recruiting using AI. Beyond recruiting, AI helps HR to develop, customize, and track training and development programs. According to their qualitative data, the use of AI in HR leads to cost reductions, more efficient employee deployment, time savings, and ultimately more job satisfaction among employees. Their data, however, also point to challenges. HR managers mentioned fear of being replaced and overall skepticism due to a lack of understanding of the AI systems in use. They also highlight the biases embedded in AI systems as a challenge in HR management and state that education in the ethical aspects of AI is crucial for successful implementation and usage. Yu and Yu [24] claim a wide acceptance of AI in education, accompanied by serious concerns about its ethics.
They use qualitative and quantitative methods to derive principles of ethics in the use of AI in education: transparency, privacy, justice, non-maleficence, and responsibility. These principles are also in line with the overview of Jobin et al. [109]. With transparency, they refer to the requirement that when AI is used in education or information transfer, the specific parameters, sources, and distribution of responsibility of the AI should be disclosed among AI users, educational researchers, and practitioners. Privacy is closely related to control over personal information. Justice in the use of AI in education concerns the distribution of hardware and learning resources: AI-based educational systems must be arranged in an unbiased manner, bringing benefits to all students and teachers. The ethical guideline of non-maleficence requires that AI should not bring any harm to human beings in any field, including education. Harm includes but is not limited to physical or privacy violations, distrust, loss of skills, or any negative effect on infrastructure, regulation, social welfare, psychological state, emotion, or the economy. Developers must also take into account their responsibility to secure the benefits, advantages, usefulness, and other positive outcomes of AI technologies. Malodia et al. [25] used a mixed-methods approach to investigate users' motivations for using AI-enabled voice assistants. First, they employed a qualitative study to identify the values relevant to the use of voice assistants: social identity, personification, convenience, perceived usefulness, and perceived playfulness. They found that perceived usefulness and perceived playfulness had a positive effect on information search and task function. Users appreciate voice assistants because these experiences satisfy hedonic motives; therefore, the greater the perceived playfulness, the more likely users are to use voice assistants.
Similarly, users perceive voice assistants as useful and are likely to use them for both information search and task functions. They also found that social identity is positively associated with usefulness and playfulness. Their results also suggest that users are motivated to use voice assistants because these assistants provide them with opportunities to express their social identity in a digitally dominated social environment. Furthermore, as their need for self-expression is satisfied, users develop an emotional connection with their voice assistants. For these reasons, users consider the use of voice assistants to be both useful and enjoyable.
It becomes clear that involvement, autonomy, and overall ethical implications are essential to acceptance and, hence, to a successful use of AI systems. In the next subsection, we outline the state of research on digital ethics with regard to the use of digital technologies such as AI.
2.2 Digital ethics and the use of digital technologies
Digital ethics is important for the design and use of digital technologies. In the course of the digital transformation, digital ethics has developed into a broad field of research [26,27,28]. In this context, research is primarily concerned with the question of what ethical behavior prevails in the development and use of technology (also called “moral machines” [29]), such as the “behavior” of autonomous vehicles when they have to decide between two accident options. Ethical issues of modern, increasingly intelligent technologies are being addressed in a variety of disciplines such as business ethics [30], critical algorithm research [31,32,33], and workplace surveillance [34, 35]. In addition, the management of digital transformation—such as the use of algorithm-based decision-making [36]—is being negotiated with a central focus on issues of privacy [37], accountability [38, 39], transparency [40,41,42], power [43, 44], and social control [45,46,47]. Other aspects under negotiation relate to data protection and security as well as the transparency of technical systems and the division of tasks between humans and technologies [3, 15]. Moreover, ubiquitous ethical decisions and considerations [24] arise in the development of digital technologies, requiring further interdisciplinary research and recommendations for applying the technologies to business practice—although in business practice, “the opacity enables strategic and ethical missteps such as unfair and discriminatory predictions and the dehumanizing treatment of individuals while simultaneously making it more difficult to define clear accountability for such problems” [37].
The need for further discussions of ethical issues is particularly strong with respect to the ethics of AI systems to which possibilities of (partially) autonomous decision-making and action are attributed [3, 48]. This refers to the concept of HCI conceptualizing the division of work tasks, decisions, and actions between humans and technologies. An analytical grid capturing HCI distinguishes between the level of automation that considers human information processing and the level of automation by which a function is taken over by the automated system [49]. This takeover can range from complete appropriation of functions by the system, without involving humans, to sole decision-making by humans without technical support. Research shows that clearly dividing tasks between humans and AI is important [50]. Humans need to define tasks, goals, and existing boundaries and then control the technology [51]. However, intelligent technologies are increasingly taking on functions such as information processing, decision-making, and action instruction. Recent research on HCI notes that “digital devices create illusions of agency. There are times when users feel as if they are in control when in fact they are merely responding to stimuli [due to] devices … designed to deliver precisely” the feeling of autonomy [52]. Specifically, the “Ethical Guidelines for Trustworthy AI” of the Expert Group on AI of the European Union [3, 53] are highly relevant regarding digital ethics. The guidelines have been widely received and have even found their way into a new ISO standard [54,55,56].
Like other approaches to digital ethics, these normative AI guidelines of the EU focus on technical consistency, maturity, reliability, and security; on requirements for the quality and security of data in relation to privacy; on the transparency of AI systems, including their mode of operation, data sources, and responsibilities; on possible biases and requirements for equal treatment of and by AI systems; and on aspects of positive, sustainable societal development (economic, environmental, social). This approach emphasizes the need for AI systems to be accountable at all times, for clear responsibilities to be ensured so that processes and decisions remain traceable, and for difficulties of or with the system, and their effects, to be addressed. The guidelines also address aspects of human (action) control as well as the primacy of human interests, decisions, and actions over those of the systems.
In this sense, with and through AI, human supervision and control over work processes remain guaranteed, as these systems should support humans at work as well as the meaningfulness of work [3]. However, the needs of human workers are considered only marginally in the context of digital ethics, most likely within the overarching framework of organizational ethics [57]. Deficits exist with respect to alienation and, for example, the consideration of other than formal task requirements at work, which are of great importance in organizations as well as for workers and their work identity [57,58,59]. With the use of digital technologies, however, the requirements for ethics in organizations and the role of ethics in the technical design of systems are changing, especially in the ethical design of work and technology, including implementation processes. Consideration is being given to the increasing use of digital technologies and their effects on work and to the need to shape HCI humanely. This void in research on digital ethics and in attempts to implement digital ethics in business practice [23] is the starting point for our focus on workers’ perception of alienation.
2.3 Alienation and autonomy
Alienation is most relevant in the context of work, where factors such as autonomy come into play in the context of HCI. Alienation from work is essentially about the experience of meaningfulness in what workers do, i.e., whether their activities feel meaningful. The experience of meaningfulness is promoted, for example, by workers having a feeling of autonomy and a certain degree of control over their own actions. Furthermore, meaningfulness is based on the degree of self-realization, which is increased in particular by a high degree of autonomy [60]. Autonomy can be promoted through challenging tasks; however, it is indispensable that the organization also provides the resources and tools necessary to master the work task [61]. There is increasing evidence that transformations of work processes offer fewer opportunities for autonomy, co-determination, etc., and that workers therefore find it difficult to perceive their work as meaningful [62]. With the increasing use of digital technology, the positive factors of autonomy are counteracted, since technologies (can) take over more and more parts of work processes. This may pose a threat to the autonomy of workers, particularly when workers have little insight into the development, use, and functioning of (digital) technologies and are barely involved in (decision-making) processes concerning the use of digital technologies and their development.
2.4 Alienation and worker involvement
Following Marx [63], alienation, especially alienation of labor, occurs when workers feel “a sense of loss and unfairness, of powerlessness and loss of dignity, which is prone to provoke resentment, anger and frustration. Capital produces alienation in both its objective and subjective garbs” [64]. Avoiding alienation requires workers to be comprehensively involved in the production process so that they can perceive the effects of their actions. It is also important that they can perform varied activities and that their contribution to the production process is recognized and valued. This ultimately shapes whether workers perceive their work as important or unimportant. If workers can identify with their work, their activities, and the products (and possibly also the organization for which they work), this has a positive effect on reducing alienation. Thus, meaningfulness prevails when there is a connection between the goals pursued at work and private goals. Previous findings on the effects of autonomy and worker involvement are, however, ambivalent. Gardell [65] states, based on a quantitative study, that alienation from work leads to a withdrawal of interest in change processes that might lead to more autonomous tasks and increased worker influence. In a later study, Gardell [66] points out that worker involvement leads to richer job content as well as a more effective use of productive resources. Kalleberg et al. [67] point to a disagreement about the consequences of participation for employee well-being and summarize several studies with inconsistent results. While some researchers highlight the positive effects of involvement and autonomy [68], others point to the intensification of workload and stress through enhanced autonomy and involvement [69]. In their own study, Kalleberg et al.
[67] found that involvement and autonomy are linked to more positive than negative outcomes for workers, but they highlight that special training is needed for workers to take advantage of their opportunities and benefit from participating in decision-making. Otherwise, autonomy and involvement can lead to enhanced stress. Nevertheless, worker involvement tends to increase job satisfaction, especially among higher-skilled personnel and in more demanding jobs, because employees develop a sense of responsibility if they participate in the organization and regard it as a common good rather than a source of profit for someone else [70]. Following (digital) ethics, it can be added that workers' involvement, their perspectives, and their (ethical) requirements are increasingly important beyond technical development [7, 71,72,73].
2.5 Derivation of hypotheses
Workers' perceived degree of autonomy is decisive for their definition of a workplace situation as either positive or negative in relation to meaningfulness. These definitions will shift with the technological change in their work and the consequent changing degree of worker involvement. From this perspective, interaction with technology can stimulate and enhance worker involvement [74]. Worker involvement [75, 76] and autonomy [77] are thus important preconditions for HCI.
Our contribution is to show the effect of involvement and autonomy on alienation, so we derive the following hypotheses.
H1
The intensity of working with HCI increases workers' alienation from work.
Alienation is closely related to the worker’s perception of autonomy. In digital work contexts, using intelligent technologies affects workers’ autonomy regarding the distribution of decisions and actions taken by AI systems at work or by humans. Previous research shows that the use of intelligent technologies such as AI restricts human autonomy and self-determination and entails various dangers or challenges [78], and reveals that—both intentionally and unintentionally—autonomy is sometimes compromised by the use of digital technologies [79]. We, thus, hypothesize:
H2
HCI in combination with providing autonomy reduces alienation.
H3
HCI in combination with providing worker involvement reduces alienation.
3 Data and methods
To test the hypotheses, we conducted an online survey from April to July 2021 (see Appendix B for the questionnaire; the study was conducted in German, and the authors translated the items into English where no original English items were available). The sample of n = 1989 respondents is representative for Germany with respect to age, federal state (i.e., the location of the respondent's workplace; Bundesland), and qualification (see Appendix A). A research agency supported us in recruiting respondents. Participation in the study was voluntary; respondents were asked by the research agency to fill out the questionnaire online and were incentivized monetarily by the agency. The mean age of the sample is 42.3 years with a standard deviation of 12.9 years. The minimum age was deliberately set at 18 years, and the maximum age is 73 years. Respondents were given the options to define themselves as male, female, or diverse: 47% of respondents chose female, 52.7% male, and 0.3% diverse. Regarding qualification, around one third of the respondents have a professional qualification (Anerkannter Berufsabschluss), around 20% are academics, and 13% do not have a professional qualification. Around 50% of respondents work for large companies with 250 workers or more, 23% for medium-sized companies with 50 to 249 workers, 17% for small companies with 10 to 49 workers, and 10% for very small companies with 10 or fewer workers.
We addressed the concepts of alienation [80], worker involvement [81, 82], and autonomy [83, 84] using well-established scales. Following Breaugh [85], we distinguished autonomy regarding work criteria, methods, and scheduling. Work criteria autonomy is the ability of workers to choose or modify the criteria used for evaluating performance. Work method autonomy refers to workers' freedom in selecting strategies related to tasks, and work schedule autonomy is workers' ability to choose work timings and durations. Since a scale on HCI use at work was missing so far, we developed one based on Ren and Bao [86]; their categorization of human–computer interfaces was the theoretical basis for our operationalization of the use of HCI. In our study, we focused on the interfaces between human and technology to gain further insight into the distribution and effects of intelligent/digital technologies at work. In particular, the use of different human–machine interfaces and their effects on working conditions such as safety, workload, and interface design require more research [87]. For listening and speaking, we addressed natural language command input from human to technology and natural language output from technology to human. For the dimension reading and writing, we refer to natural language writing recognition as command input from human to technology. For the visual dimension, we asked about the use of technologies for object recognition, self-driving/moving machines, technologies that can recognize human motions, and human face recognition. Moreover, we asked about general cooperation with smart automated machines and robots as well as smart wearables that can be worn as gloves or goggles and therefore constitute a direct human-body-to-technology interaction. We evaluated the reliability of this new scale, obtaining a Cronbach's α of 0.9369, which indicates good internal consistency.
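The reliability coefficient reported above follows the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score). The sketch below implements it on synthetic item data (the survey data are not public, so the respondent matrix here is simulated from one latent factor plus noise):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic illustration: three items driven by one latent factor plus noise,
# which yields a high alpha (the study reports 0.9369 for its 10-item HCI scale)
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
items = np.column_stack(
    [latent + rng.normal(scale=0.5, size=500) for _ in range(3)]
)
print(f"alpha = {cronbach_alpha(items):.2f}")
```

With the chosen noise level, the expected alpha is roughly 0.9, in the same range as the value the study reports for its HCI-use scale.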
To test our hypotheses, we estimated a structural equation model (SEM). SEM offers various advantages compared with other statistical analyses (e.g., regression, variance analyses) [88]. On the one hand, SEM allows for the simultaneous estimation of several effects. On the other hand, SEM allows for the consideration of manifest as well as latent constructs and, therefore, the testing of hypothetical, theoretical models. In the present study, multiple effect relationships between latent variables are investigated; therefore, an SEM with latent variables is applied. This consists of a structural model and the measurement models of the exogenous and endogenous variables (see Fig. 1). The structural model comprises the hypothesized effect relationships between the latent variables. The measurement models are used to empirically capture the latent variables. In the present case, reflective indicators are used, which are also assigned error terms (ε) [89].
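To make this structure concrete, a model of this form could be specified in lavaan-style syntax (as used by R's lavaan or Python's semopy). This is an illustrative sketch only: the indicator names are placeholders, not the study's actual survey items.

```
# Measurement models: latent construct =~ reflective indicators
HCI_use     =~ hci1 + hci2 + hci3
Involvement =~ inv1 + inv2 + inv3
Autonomy    =~ aut1 + aut2 + aut3
Alienation  =~ ali1 + ali2 + ali3

# Structural model: hypothesized effects on alienation,
# with age, gender, and job level as controls
Alienation ~ HCI_use + Involvement + Autonomy + age + gender + job_level

# Latent covariance between autonomy and worker involvement
Autonomy ~~ Involvement
```

Each `=~` line is a measurement model with reflective indicators (each indicator receives an error term), and the `~` line is the structural model linking the latent variables.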
The structural equation model is controlled for the variables age, gender, and job level (Fig. 2). Job levels were elicited via self-assessment on the following items: unskilled or semi-skilled job (12.56%), professionally oriented job (54.63%), complex specialist job (19.68%), or highly complex job (13.13%). The control variables show that an above-average number of young men with high job levels use HCI [90, 91].
As a pre-analysis, we tested for reliability using Cronbach's alpha and for sampling adequacy using the Kaiser–Meyer–Olkin (KMO) measure. Table 1 shows the results of the factor analysis for the four constructs HCI use, alienation, worker involvement, and autonomy. The table shows the factor loadings and sampling adequacy of each item as well as Cronbach's alpha as quality measures. All α values are above the threshold of 0.7 and are, therefore, considered acceptable [92]. The Kaiser–Meyer–Olkin measures are all above the threshold of 0.6 and are, therefore, also considered acceptable [93, 94]. All items' factor loadings are above the threshold of 0.3 and are, hence, interpretable; the items can be considered meaningful indicators of their factors [95].
Figure 2 shows the structural equation model. The goodness-of-fit measures of the model are all acceptable. We chose maximum likelihood parameter estimation over other estimation methods because the data were distributed normally [96]. The model was significant with χ2 (5) = 25.50 and Prob > χ2 = 0.0001. The model shows a good fit with CFI = 0.987 and TLI = 0.960. The RMSEA indicates a good fit with a value of 0.057, and the SRMR also suggests a close model fit with a value of 0.021. The model includes n = 1619 respondents, with 370 cases omitted due to missing values.
The RMSEA can be described as a “badness-of-fit index” [97]. Accordingly, a high RMSEA indicates that the formulated model can only be poorly approximated to the collected data. The range of values of the RMSEA is between zero and one. To conclude an acceptable model fit, the RMSEA should not exceed 0.08 [98]. The SRMR describes the standardized deviation between the model-theoretical and the empirical covariance matrix [96]. Accordingly, a value of zero indicates that the formulated model fits the data perfectly. According to Hu and Bentler [99], an acceptable model fit can be assumed if the SRMR has a value below 0.08. Furthermore, the χ2 value is divided by the number of degrees of freedom of the formulated model and used as a descriptive quality criterion [100]. It is also true here that a smaller value indicates a better model fit. For an acceptable model fit, it is often required that the ratio between χ2 value and degrees of freedom does not exceed three [101, 102]. The TLI and the CFI represent incremental fit indices. With the help of these quality criteria, a comparison is made between the formulated model and a base model [99]. Generally speaking, it is checked whether the formulated model shows a relevant improvement over the base model. The TLI makes this comparison based on the χ2-difference and the degrees of freedom of the models. In addition, when using the CFI, possible biases of the χ2-distribution are taken into account. In the literature, similar thresholds are required for both indices. TLI and the CFI should have a minimum value of 0.95 [99].
In the overall picture, the global fit indices, thus, suggest an acceptable model fit.
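The cutoff logic discussed above can be encoded mechanically. The sketch below applies the cited thresholds to the fit values reported in this section, treating RMSEA and SRMR as upper bounds and CFI and TLI as lower bounds:

```python
# Reported global fit statistics for the estimated SEM (chi2(5) = 25.50, n = 1619)
fit = {"RMSEA": 0.057, "SRMR": 0.021, "CFI": 0.987, "TLI": 0.960}

def acceptable(fit: dict) -> dict:
    """Check each index against the conventional cutoffs cited in the text."""
    return {
        "RMSEA": fit["RMSEA"] <= 0.08,  # badness-of-fit index: smaller is better
        "SRMR": fit["SRMR"] <= 0.08,    # standardized residual: smaller is better
        "CFI": fit["CFI"] >= 0.95,      # incremental fit: larger is better
        "TLI": fit["TLI"] >= 0.95,      # incremental fit: larger is better
    }

print(acceptable(fit))  # all four indices meet their cutoffs for these values
```

Note that the χ2/df criterion mentioned above is not included here, since the chosen cutoff for that ratio varies across the literature.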
4 Results
The model shows that an increase of one standard deviation in HCI use increases alienation from work by 0.32 standard deviations. However, the model also shows that worker involvement decreases alienation by 0.34 standard deviations. Autonomy has no meaningful direct effect in this model, which can be explained by the high inter-correlation of 0.3 between autonomy and worker involvement, which we controlled for in the model. It also becomes visible that job level has an effect on both autonomy and worker involvement: one standard deviation higher job level leads to 0.16 standard deviations higher autonomy and 0.21 standard deviations higher worker involvement. Our results show that H1 (the intensity of working with HCI increases workers' alienation from work) can be assumed. Collaboration with machines (HCI) leads to increased alienation from work (by 0.32 standard deviations) in principle and in the absence of supporting measures such as the guarantee of certain freedoms of action for workers (autonomy) or their involvement in the design, implementation, and use of HCI (worker involvement). However, we can also assume H2 (HCI in combination with providing autonomy reduces alienation) and H3 (HCI in combination with providing worker involvement reduces alienation). The model shows that autonomy and worker involvement both reduce the alienation caused by HCI. While the effect on alienation is carried mostly by worker involvement, there is a strong interaction between autonomy and worker involvement, which is why both concepts are important for reducing alienation. Based on our study and this model, we conclude that when HCI is implemented in a participatory work environment, where workers are involved in change processes and are given autonomy in HCI use, alienation from work can be reduced. Otherwise, alienation from work increases.
We can also state that the positive effects of autonomy and worker involvement are more likely to occur for workers with higher job levels.
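The reported standardized coefficients permit a simple illustration of path tracing: the indirect effect of job level on alienation via worker involvement is the product of the coefficients along that path. This is a sketch of the arithmetic only; the helper function is ours, and the model itself was estimated with structural equation modeling, not with this code:

```python
# Illustrative path tracing with the standardized coefficients reported
# above: job level -> worker involvement (0.21) and worker involvement
# -> alienation (-0.34). The indirect effect along a path is the product
# of its path coefficients.

def indirect_effect(*path_coefficients: float) -> float:
    product = 1.0
    for coefficient in path_coefficients:
        product *= coefficient
    return product

# One standard deviation higher job level corresponds to roughly
# 0.21 * (-0.34) = -0.07 standard deviations of alienation via
# worker involvement.
print(round(indirect_effect(0.21, -0.34), 3))  # -0.071
```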
5 Discussion
5.1 Summary of findings
This study investigated the relationship between HCI and alienation, with autonomy and worker involvement as central aspects of digital ethics. Analysis of our representative online survey of a German sample in 2021 yielded three novel insights that indicate the relevance of the worker's perspective, specifically workers' perception of alienation, for digital ethics. First, the results showed the high relevance of using digital technology at work for autonomy and alienation, as HCI leads to more alienation. Second, workers who perceive autonomy and worker involvement feel less alienated from work. Third, the positive effects of autonomy and worker involvement are more likely to occur for workers with higher job levels. Concerning our research questions, we can state that (Q1) the use of HCI at work without any supporting measures will lead to increased worker alienation. We can also state that (Q2) worker involvement and autonomy interact strongly and both decrease the effect of HCI on alienation. Thus, to introduce HCI to work ethically, autonomy and worker involvement must be considered.
5.2 Contributions
Prior research has widely neglected to investigate digital ethics from a worker's perspective. Using a representative sample of German workers, we address this gap and show how the intensity of using HCI relates to concepts such as autonomy, worker involvement, and alienation. The first contribution of this paper lies in our investigation of HCI as relevant to digital ethics. HCI is relevant to digital ethics because using collaborative technologies influences work processes and alters work tasks [47]. In the course of implementing digital technologies, changes in the responsibilities and autonomy of human workers may occur, since such systems may take over certain tasks in work processes and make decisions [45]. Another relevant issue is the opportunities and processes for worker involvement in the design and implementation of digital technologies at work. To date, research has lacked a proper scale for analyzing the intensity of using different technologies at work. Drawing on the seminal work of Ren and Bao [77], we developed a scale focusing on different areas of digital technologies' implementation, namely listening and speaking, reading and writing, and the visual dimension.
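When developing a new multi-item scale of this kind, internal consistency is conventionally checked with Cronbach's alpha. The following is a hedged sketch of that computation; the response matrix is made-up Likert data for illustration, not the study's data:

```python
# Hedged sketch: Cronbach's alpha for a set of scale items, as one would
# compute for a newly developed scale such as the HCI-intensity items.
# The responses below are fabricated for illustration only.
from statistics import variance

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent (sample variances)."""
    k = len(rows[0])                                    # number of items
    item_variances = sum(variance(col) for col in zip(*rows))
    total_variance = variance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_variances / total_variance)

responses = [[5, 4, 5], [2, 2, 3], [4, 4, 4], [1, 2, 1], [3, 3, 4]]
print(round(cronbach_alpha(responses), 2))  # 0.95
```

Values around 0.7 or higher are usually read as acceptable internal consistency.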
As a second contribution, this investigation reveals the relevance of autonomy, worker involvement, and alienation in the context of digital ethics. To date, digital ethics has mainly concerned aspects such as technical consistency, maturity, transparency, reliability and security, the requirements for data, and data quality and protection, and has focused on the fairness, accountability, and traceability of decisions [3, 15, 48, 103]. Until now, human workers have been excluded from this context, producing a blind spot with regard to workers' needs when using digital technologies. Our results point to the importance of considering aspects such as autonomy and worker involvement, factors that must be included if digital ethics is to provide guidance as to what is right or wrong in using intelligent technologies at work. Human autonomy especially must be considered in the context of digital ethics, since autonomy is important for intrinsic motivation and has several other positive effects on humans' psychological states, thereby helping to avoid alienation. Autonomy [8], worker involvement [64], and possible threats of (further) alienation require attention. If workers feel that they can make decisions and contribute to change processes, this has a positive effect not only on HCI use but also on the perception of alienation from work. In turn, this can lead to greater identification with the work and the organization and, for example, counteract challenges such as turnover and a shortage of skilled workers.
Our third contribution lies in the practical implications of our empirical findings for business practice. Previous research has shown that the use of digital technologies affects workers' health [104] and well-being [105]. Consequently, it is important to analyze and explore both the antecedents and effects of HCI use. For the digital transformation to succeed in organizational practice, workers must have the necessary resources. They must be adequately trained and informed about the technology, and, perhaps more importantly, they must be able to deal autonomously with this information and make decisions. We recommend that technologies and workflows be designed so that workers have at least some influence over what technology is used and when. It is particularly important that workers perceive technology as a support rather than as a monitoring or control device [30, 31]. Therefore, the implementation of digital technologies must be considered in conjunction with organizational policies regarding digital ethics, worker involvement in the implementation of digital technologies, and, for example, the possibility of formal and informal worker training [106, 107].
5.3 Limitations and avenues for further research
The results of the study are subject to limitations. First, the study only investigated employees in Germany. Other contractual relationships (e.g., freelancers) and other countries were not considered. Future studies could examine both contractual and cross-cultural differences and test the effects found in this study. Second, this study focuses on workers in organizational contexts. Future studies could broaden the perspective by capturing the views of managers or by considering organizational characteristics. Third, because we did not examine organizational developments, further research, such as organizational case studies, could focus on how the demands of digital ethics manifest themselves in organizational practices and how this manifestation affects workers' perceptions of autonomy, involvement, and alienation. The impact of different training offerings, as a particular form of worker involvement in the digital transformation, could also be differentiated. Additionally, although the phenomenon of a digital divide, which became visible in our data along variables such as gender, education, and age, is highly relevant for AI ethics, we could not investigate it further in this paper.
6 Conclusion
In conclusion, when the worker's perspective on digitalization processes is considered, the concept of alienation is highly relevant for AI ethics. To design and implement HCI in workplaces ethically, supporting measures that give workers the required resources are mandatory. We show that the threat of alienation through the implementation of digital technologies and the resulting HCI at workplaces can be reduced when autonomy and worker involvement are high. Thus, a participatory and autonomous work environment is crucial for a successful digital transformation. In this respect, autonomy and worker involvement are important at all job levels. Our research shows that, at present, higher job levels, such as complex specialist jobs and highly complex jobs, benefit more from worker involvement and autonomy. It remains a future task to equip all job levels with the resources required to work successfully with digital technologies.
References
Brynjolfsson, E., McAfee, A.: The second machine age: work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company, New York (2014)
Susskind, R.E., Susskind, D.: The future of the professions: how technology will transform the work of human experts. Oxford University Press, Oxford (2015). https://doi.org/10.1093/oso/9780198713395.001.0001
European Commission. Ethics guidelines for trustworthy ai. https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1 (2019). Accessed 08 June 2021
Balasubramanian, N., Ye, Y., Xu, M.: Substituting human decision-making with machine learning: implications for organizational learning. Acad. Manag. Rev. 47, 448–465 (2022). https://doi.org/10.5465/amr.2019.0470
Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J.W., Christakis, N.A., Couzin, I.D., Jackson, M.O.: Machine behaviour. Nature 568, 477–486 (2019)
De Cremer, D.: With ai entering organizations, responsible leadership may slip! AI Ethics 2, 49–51 (2022). https://doi.org/10.1007/s43681-021-00094-9
McGuire, J., De Cremer, D.: Algorithms, leadership, and morality: Why a mere human effect drives the preference for human over algorithmic leadership. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00192-2
Hackman, J.R., Oldham, G.R.: Development of the job diagnostic survey. J. Appl. Psychol. 60, 159–170 (1975). https://doi.org/10.1037/h0076546
de Jonge, J.: Job autonomy, well-being, and health: a study among dutch health care workers. Rijksuniversiteit Limburg, Maastricht (1995). https://doi.org/10.26481/dis.19960125jj
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F.: Ai4people—an ethical framework for a good ai society: opportunities, risks, principles, and recommendations. Mind. Mach. 28, 689–707 (2018). https://doi.org/10.1007/s11023-018-9482-5
Glikson, E., Woolley, A.W.: Human trust in artificial intelligence: review of empirical research. Acad. Manag. Ann. 14, 627–660 (2020). https://doi.org/10.5465/annals.2018.0057
Martin, K.: Ethical implications and accountability of algorithms. J. Bus. Ethics 160, 835–850 (2019). https://doi.org/10.1007/s10551-018-3921-3
Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S., Floridi, L.: The ethics of algorithms: mapping the debate. Big Data Soc. 3, 1–21 (2016). https://doi.org/10.1177/2053951716679679
Dignum, V.: Ethics in artificial intelligence: introduction to the special issue. Ethics Inf. Technol. 20, 1–3 (2018). https://doi.org/10.1007/s10676-018-9450-z
Floridi, L.: Establishing the rules for building trustworthy ai. Nat Mach Intell 1, 261–262 (2019)
Fietkau, J., Balthasar, M.: Compatibility of support and autonomy in personalized hci. Schriften zur soziotechnischen Integration 6, 1–16 (2020). https://doi.org/10.18726/2020_8
Nylin, M., Johansson Westberg, J., Lundberg, J.: Reduced autonomy workspace (raw)—an interaction design approach for human-automation cooperation. Cogn. Technol. Work 24, 261–273 (2022). https://doi.org/10.1007/s10111-022-00695-2
Blauner, R.: Alienation and freedom: the factory worker and his industry. University of Chicago Press, Chicago (1964). https://doi.org/10.2307/2574777
Seeman, M.: On the personal consequences of alienation in work. Am. Sociol. Rev. (1967). https://doi.org/10.2307/2091817
Kon, I.S.: The concept of alienation in modern sociology. Soc. Res. 34, 507–528 (1967)
Danaher, J., Nyholm, S.: Automation, work and the achievement gap. AI Ethics 1, 227–237 (2021). https://doi.org/10.1007/s43681-020-00028-x
Liao, Q.V., Gruen, D., Miller, S.: Questioning the ai: Informing design practices for explainable ai user experiences. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–15. (2020). https://doi.org/10.1145/3313831.3376590
Premnath, S., Arun, A.: A qualitative study of artificial intelligence application framework in human resource management. Xi'an Univ. Archit. Tech. 11, 1193–1209 (2020)
Yu, L., Yu, Z.: Qualitative and quantitative analyses of artificial intelligence ethics in education using vosviewer and citnetexplorer. Front. Psychol. (2023). https://doi.org/10.3389/fpsyg.2023.1061778
Malodia, S., Islam, N., Kaur, P., Dhir, A.: Why do people use artificial intelligence (ai)-enabled voice assistants? IEEE Trans. Eng. Manag. (2021). https://doi.org/10.1109/TEM.2021.3117884
Anderson, M., Anderson, S.L.: Machine ethics. Cambridge University Press, Cambridge (2011). https://doi.org/10.1017/CBO9780511978036
Becker, S.J., Nemat, A.T., Lucas, S., Heinitz, R.M., Klevesath, M., Charton, J.E.: A code of digital ethics: laying the foundation for digital ethics in a science and technology company. AI Soc. (2022). https://doi.org/10.1007/s00146-021-01376-w
Danks, D.: Digital ethics as translational ethics. In: Vasiliu-Feltes, I., Thomason, J. (eds.) Applied ethics in a digital world, pp. 1–15. IGI Global, Pennsylvania (2022). https://doi.org/10.4018/978-1-7998-8467-5
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., Rahwan, I.: The moral machine experiment. Nature 563, 59–64 (2018). https://doi.org/10.1038/s41586-018-0637-6
Martin, K., Freeman, R.E.: Some problems with employee monitoring. J. Bus. Ethics 43, 353–361 (2003). https://doi.org/10.1023/A:1023014112461
Ananny, M.: Toward an ethics of algorithms: convening, observation, probability, and timeliness. Sci. Technol. Hum. Values 41, 93–117 (2016). https://doi.org/10.1177/0162243915606523
Kitchin, R.: Thinking critically about and researching algorithms. Inf. Commun. Soc. 20, 14–29 (2017). https://doi.org/10.2139/ssrn.2515786
Willson, M.: Algorithms (and the) everyday. Inf. Commun. Soc. 20, 137–150 (2017). https://doi.org/10.1080/1369118X.2016.1200645
Ball, K.: Elements of surveillance: a new framework and future directions. Inf. Commun. Soc. 5, 573–590 (2002). https://doi.org/10.1080/13691180208538807
Ball, K.: Workplace surveillance: an overview. Labor Hist. 51, 87–106 (2010). https://doi.org/10.1080/00236561003654776
Bernstein, E.S.: Making transparency transparent: the evolution of observation in management theory. Acad. Manag. Ann. 11, 217–266 (2017). https://doi.org/10.5465/annals.2014.0076
Martin, K., Nissenbaum, H.: Measuring privacy: an empirical test using context to expose confounding variables. Columbia Sci. Technol. Law Rev. 18, 176 (2016). https://doi.org/10.2139/ssrn.2709584
Diakopoulos, N.: Accountability in algorithmic decision making. Commun. ACM 59, 56–62 (2016). https://doi.org/10.1145/2844110
Neyland, D.: On organizing algorithms. Theory Cult. Soc. 32, 119–132 (2015). https://doi.org/10.1177/0263276414530477
Ananny, M., Crawford, K.: Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability. New Media Soc. 20, 973–989 (2018). https://doi.org/10.1177/1461444816676645
Martin, K., Parmar, B.: What firms must know before adopting AI: The ethics of ai transparency. Available at SSRN 4207128 (2022). https://doi.org/10.2139/ssrn.4207128
Stohl, C., Stohl, M., Leonardi, P.M.: Digital age| managing opacity: Information visibility and the paradox of transparency in the digital age. Int. J. Commun. 10, 15 (2016)
Beer, D.: The social power of algorithms. Inf. Commun. Soc. 20, 1–13 (2017). https://doi.org/10.1080/1369118X.2016.1216147
Neyland, D., Möllers, N.: Algorithmic if… then rules and the conditions and consequences of power. Inf. Commun. Soc. 20, 45–62 (2017). https://doi.org/10.1080/1369118X.2016.1156141
Ajunwa, I., Crawford, K., Schultz, J.: Limitless worker surveillance. Calif. Law Rev. (2017). https://doi.org/10.15779/Z38BR8MF94
Boyd, D., Crawford, K.: Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Inf. Commun. Soc. 15, 662–679 (2012). https://doi.org/10.1080/1369118X.2012.678878
Zuboff, S.: In the age of the smart machine: the future of work and power. Basic Books, Inc., New York (1988). https://doi.org/10.1007/BF01423360
Bathaee, Y.: The artificial intelligence black box and the failure of intent and causation. Harv. JL Tech. 31, 889–938 (2017)
Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model for types and levels of human interaction with automation. IEEE Transact. Syst. Man Cybern. Part A 30, 286–297 (2000). https://doi.org/10.1109/3468.844354
Sheridan, T.B.: Human–robot interaction: status and challenges. Hum. Factors 58, 525–532 (2016). https://doi.org/10.1177/0018720816644364
Seeber, I., Bittner, E., Briggs, R.O., De Vreede, G.-J., De Vreede, T., Druckenmiller, D., Maier, R., Merz, A.B., Oeste-Reiß, S., Randrup, N. (eds.): Machines as teammates: a collaboration research agenda, Proceedings of the 51st Hawaii International Conference on System Sciences (2018). https://doi.org/10.24251/HICSS.2018.055
Madary, M.: The illusion of agency in human–computer interaction. Neuroethics 15, 1–15 (2022). https://doi.org/10.1007/s12152-022-09491-1
Smuha, N.: Ethik-Leitlinien für eine vertrauenswürdige KI [Ethics guidelines for trustworthy AI]. Hochrangige Expertengruppe für künstliche Intelligenz, Europäische Kommission, Brüssel (2018). https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
ISO: ISO/IEC TR 24028:2020 Information technology—artificial intelligence—overview of trustworthiness in artificial intelligence. https://www.iso.org/standard/77608.html (2020). Accessed 28 July 2020
Harasimiuk, D.E., Braun, T.: Regulating artificial intelligence: binary ethics and the law. Routledge, Oxfordshire (2021). https://doi.org/10.4324/9781003134725
Vasse’i, R.M.: The ethical guidelines for trustworthy ai–a procrastination of effective law enforcement. Computer Law Rev. Int. 20, 129–136 (2019). https://doi.org/10.9785/cri-2019-200502
Rhodes, C.: The ethics of organizational ethics. Organ. Stud. 0, 1–17 (2022). https://doi.org/10.1177/01708406221082055
Gonzales, A.L., Hancock, J.T.: Identity shift in computer-mediated environments. Media Psychol. 11, 167–185 (2008). https://doi.org/10.1080/15213260802023433
Ten Bos, R.: Essai: business ethics and bauman ethics. Organ. Stud. 18, 997–1014 (1997). https://doi.org/10.1177/017084069701800605
Martela, F., Pessi, A.B.: Significant work is about self-realization and broader purpose: defining the key dimensions of meaningful work. Front. Psychol. (2018). https://doi.org/10.3389/fpsyg.2018.00363
Bailey, C., Yeoman, R., Madden, A., Thompson, M., Kerridge, G.: A review of the empirical literature on meaningful work: progress and research agenda. Hum. Resour. Dev. Rev. 18, 83–113 (2019). https://doi.org/10.1177/1534484318804653
Hardering, F.: Wann erleben Beschäftigte ihre Arbeit als sinnvoll? Befunde aus einer Untersuchung über professionelle Dienstleistungsarbeit. Z. Soziol. 46, 39–54 (2017). https://doi.org/10.1515/zfsoz-2017-1003
Marx, K.: Ökonomisch-philosophische Manuskripte, mew. B. Zehnpfennig. Meiner, Hamburg (1844). https://doi.org/10.28937/978-3-7873-2079-0
Harvey, D.: Universal alienation. J. Cult. Res. 22, 137–150 (2018). https://doi.org/10.1080/14797585.2018.1461350
Gardell, B.: Autonomy and participation at work. Hum. Relat. 30, 515–533 (1977). https://doi.org/10.1177/001872677703000603
Gardell, B.: Worker participation and autonomy: a multilevel approach to democracy at the workplace. Int. J. Health Serv. 12, 527–558 (1982). https://doi.org/10.2190/AW2E-4D3E-57PA-KDAP
Kalleberg, A.L., Nesheim, T., Olsen, K.M.: Is participation good or bad for workers? Effects of autonomy, consultation and teamwork on stress among workers in norway. Acta Sociologica 52, 99–116 (2009). https://doi.org/10.1177/0001699309103999
Appelbaum, E., Bailey, T., Berg, P., Kalleberg, A.L.: Manufacturing advantage: Why high-performance work systems pay off. Cornell University Press, New York (2000)
Batt, R., Doellgast, V.: Groups, teams, and the division of labor: interdisciplinary perspectives on the organization of work. Oxford University Press, Oxford (2006). https://doi.org/10.1093/oxfordhb/9780199299249.003.0008
Kociatkiewicz, J., Kostera, M., Parker, M.: The possibility of disalienated work: being at home in alternative organizations. Hum. Relat. 74, 933–957 (2021). https://doi.org/10.1177/0018726720916762
Stix, C.: Foundations for the future: institution building for the purpose of artificial intelligence governance. AI Ethics 2, 463–476 (2022). https://doi.org/10.1007/s43681-021-00093-w
Burr, C., Leslie, D.: Ethical assurance: a practical approach to the responsible design, development, and deployment of data-driven technologies. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00178-0
Melkevik, Å.: The internal morality of markets and artificial intelligence. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00151-x
Twining, J.E.: Alienation as a social process. Sociol. Q. 21, 417–428 (1980). https://doi.org/10.1111/j.1533-8525.1980.tb00622.x
Fountaine, T., McCarthy, B., Saleh, T.: Building the ai-powered organization. Harv. Bus. Rev. 97, 62–73 (2019)
Xu, W.: Toward human-centered ai: a perspective from human-computer interaction. Interactions 26, 42–46 (2019). https://doi.org/10.1145/3328485
Abbass, H.A.: Social integration of artificial intelligence: Functions, automation allocation logic and human-autonomy trust. Cogn. Comput. 11, 159–171 (2019). https://doi.org/10.1007/s12559-018-9619-0
Ernst, C.: Artificial intelligence and autonomy: self-determination in the age of automated systems. In: Wischmeyer, T., Rademacher, T. (eds.) Regulating artificial intelligence, pp. 53–73. Springer International Publishing, Cham (2020). https://doi.org/10.1007/978-3-030-32361-5_3
Calvo, R.A., Peters, D., Vold, K., Ryan, R.M.: Supporting human autonomy in ai systems: a framework for ethical enquiry. In: Burr, C., Floridi, L. (eds.) Ethics of digital well-being: a multidisciplinary approach, pp. 31–54. Springer International Publishing, Cham (2020). https://doi.org/10.1007/978-3-030-50585-1_2
Kohr, H.-U., Fischer, A.: Politisches Verhalten und empirische Sozialforschung: Leistung und Grenzen von Befragungsinstrumenten. Juventa, München (1980)
Campion, M.A., Medsker, G.J., Higgs, A.C.: Relations between work group characteristics and effectiveness: implications for designing effective work groups. Pers. Psychol. 46, 823–847 (1993). https://doi.org/10.1111/j.1744-6570.1993.tb01571.x
De Dreu, C.K., West, M.A.: Minority dissent and team innovation: The importance of participation in decision making. J. Appl. Psychol. 86, 1191 (2001). https://doi.org/10.1037//0021-9010.86.6.1191
Benninghaus, H.: Substantielle komplexität der arbeit als zentrale dimension der jobstruktur. Z. Soziol. 16, 334–352 (1987). https://doi.org/10.1515/zfsoz-1987-0502
Snizek, W.E.: Hall’s professionalism scale: an empirical reassessment. Am. Sociol. Rev. (1972). https://doi.org/10.2307/2093498
Breaugh, J.A.: The measurement of work autonomy. Hum. Relat. 38, 551–570 (1985). https://doi.org/10.1177/001872678503800604
Ren, F., Bao, Y.: A review on human-computer interaction and intelligent robots. Int. J. Inf. Technol. Decis. Mak. 19, 5–47 (2020). https://doi.org/10.1142/S0219622019300052
Nachreiner, F., Nickel, P., Meyer, I.: Human factors in process control systems: the design of human–machine interfaces. Saf. Sci. 44, 5–26 (2006). https://doi.org/10.1016/j.ssci.2005.09.003
Hair, J.F.: Multivariate data analysis. Cengage, Boston (2009)
Fornell, C., Bookstein, F.L.: Two structural equation models: lisrel and pls applied to consumer exit-voice theory. J. Mark. Res. 19, 440–452 (1982). https://doi.org/10.1177/002224378201900406
Lutz, C.: Digital inequalities in the age of artificial intelligence and big data. Hum. Behav. Emerg. Technol. 1, 141–148 (2019). https://doi.org/10.1002/hbe2.140
Vasilescu, M.D., Serban, A.C., Dimian, G.C., Aceleanu, M.I., Picatoste, X.: Digital divide, skills and perceptions on digitalisation in the european union—towards a smart labour market. PloS One 15, e0232032 (2020). https://doi.org/10.1371/journal.pone.0232032
Cronbach, L.J.: Coefficient alpha and the internal structure of tests. Psychometrika 16, 297–334 (1951). https://doi.org/10.1007/BF02310555
Kaiser, H.F., Rice, J.: Little jiffy, mark iv. Educ. Psychol. Measur. 34, 111–117 (1974). https://doi.org/10.1177/001316447403400115
Kaiser, H.F.: A second generation little jiffy. Psychometrika 35, 401–415 (1970). https://doi.org/10.1007/BF02291817
Merenda, P.F.: A guide to the proper use of factor analysis in the conduct and reporting of research: pitfalls to avoid. Meas. Eval. Couns. Dev. 30, 156–164 (1997). https://doi.org/10.1080/07481756.1997.12068936
Kline, R.B.: Principles and practice of structural equation modeling. Guilford, New York (2005)
Chen, F.F.: Sensitivity of goodness of fit indexes to lack of measurement invariance. Struct. Equ. Model. 14, 464–504 (2007). https://doi.org/10.1080/10705510701301834
Browne, M.W., Cudeck, R.: Alternative ways of assessing model fit. In: Bollen, K.A., Long, J.S. (eds.) Testing structural equation models, pp. 136–162. Sage, London (1993)
Hu, L.T., Bentler, P.M.: Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. 6, 1–55 (1999). https://doi.org/10.1080/10705519909540118
Cangur, S., Ercan, I.: Comparison of model fit indices used in structural equation modeling under multivariate normality. J. Mod. Appl. Stat. Methods 14, 152–167 (2015). https://doi.org/10.22237/jmasm/1430453580
Homburg, C., Giering, A.: Konzeptualisierung und Operationalisierung komplexer Konstrukte: Ein Leitfaden für die Marketingforschung. Marketing (1996). https://doi.org/10.15358/0344-1369-1996-1
Homburg, C., Klarmann, M.: Die Kausalanalyse in der empirischen Betriebswirtschaftlichen Forschung-, Problemfelder und Anwendungsempfehlungen. Die Betriebswirtschaft 66, 727–748 (2006)
Zhang, J., Shu, Y., Yu, H.: Fairness in design: a framework for facilitating ethical artificial intelligence designs. Int. J. Crowd Sci. 7, 32–39 (2023). https://doi.org/10.26599/IJCS.2022.9100033
Daher, K., Fuchs, M., Mugellini, E., Lalanne, D., Abou Khaled, O.: Reduce stress through empathic machine to improve hci. In: International Conference on Human Interaction and Emerging Technologies, pp. 232–237. Springer, (2020). https://doi.org/10.1007/978-3-030-44267-5_35
Nurhas, I., Pawlowski, J.M., Geisler, S.: Towards humane digitization: A wellbeing-driven process of personas creation. In: Proceedings of the 5th International ACM In-Cooperation HCI and UX Conference, pp. 24–31. (2019). https://doi.org/10.1145/3328243.3328247
Eraut, M.: Informal learning in the workplace. Stud. Contin. Educ. 26, 247–273 (2004). https://doi.org/10.1080/158037042000225245
Noe, R.A., Tews, M.J., Marand, A.D.: Individual differences and informal learning in the workplace. J. Vocat. Behav. 83, 327–335 (2013). https://doi.org/10.1016/j.jvb.2013.06.009
Statistisches Bundesamt. Datenbank genesis-online. https://www-genesis.destatis.de/genesis/online?sequenz=tabellen&selectionname=212*#abreadcrumb (2021). Accessed 09 July 2021
Jobin, A., Ienca, M., Vayena, E.: The global landscape of ai ethics guidelines. Nat. Mach. Intell. 1, 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2
Funding
Open Access funding enabled and organized by Projekt DEAL. This work has been funded by the Ministry of Economy, Labor and Tourism Baden-Württemberg within the project "Ethical and socially responsible AI in businesses".
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Consent to participate
Informed consent was obtained from all individual participants included in the study.
Appendices
Appendix A—distribution of socio-demographics
In Appendix A, we show the distributions of the variables gender, age, workplace, and vocational education according to our study and in comparison to the official German census of 2021. Figures 3, 4, 5, and 6 show the distributions of the aforementioned variables.
Appendix B—questionnaire
2.1 Socio-demographics
2.2 Intensity of HCI use
For each example, please indicate how frequently or infrequently you encounter each in your work.

| How often does it happen that… | Very often | Often | Partly/partly | Rarely | Very rarely | Never | No indication |
|---|---|---|---|---|---|---|---|
| you work with computer-controlled machines or robots? | O | O | O | O | O | O | O |
| you work with technology that you wear on your body, like smart glasses or gloves? | O | O | O | O | O | O | O |
| you work with technology that understands spoken input? | O | O | O | O | O | O | O |
| you work with technology that communicates with you in spoken language? | O | O | O | O | O | O | O |
| you work with technology that recognizes written text? | O | O | O | O | O | O | O |
| you work with technology that can recognize objects independently? | O | O | O | O | O | O | O |
| you work with technology that moves autonomously or is self-propelled? | O | O | O | O | O | O | O |
| you work with technology that can recognize human movements? | O | O | O | O | O | O | O |
| you work with technology that can recognize human faces? | O | O | O | O | O | O | O |
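One plausible way to turn these frequency responses into an item score is to code the six substantive categories numerically and average them, treating "No indication" as missing. The coding below is our illustrative assumption, not the authors' published scoring rule:

```python
# Hypothetical numeric coding of the 6-point frequency scale above;
# "No indication" is treated as missing. This coding is an assumption
# for illustration, not the study's documented scoring.
CODES = {"Very often": 5, "Often": 4, "Partly/partly": 3,
         "Rarely": 2, "Very rarely": 1, "Never": 0}

def hci_intensity(answers):
    """Mean of codable answers; ignores 'No indication' and blanks."""
    scored = [CODES[a] for a in answers if a in CODES]
    return sum(scored) / len(scored)

print(hci_intensity(["Often", "Never", "Very often", "No indication"]))  # 3.0
```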
2.3 Autonomy in action and decision-making
For each statement, please indicate the extent to which you agree or disagree with it.

| | Strongly agree | Agree | Partly/partly | Disagree | Strongly disagree | No indication |
|---|---|---|---|---|---|---|
| As a member of this team, I have a real say in how the team carries out its work | O | O | O | O | O | O |
| Most members in this team have the chance to participate in decision making | O | O | O | O | O | O |
| My team is designed to let everyone participate in decision making | O | O | O | O | O | O |
2.4 Worker involvement
For each statement, please indicate the extent to which you agree or disagree with it.

| | Strongly agree | Agree | Partly/partly | Disagree | Strongly disagree | No indication |
|---|---|---|---|---|---|---|
| I can influence the way smart technology is used in the organization | O | O | O | O | O | O |
| I can determine which smart technologies I use in my job | O | O | O | O | O | O |
| My job gives me freedom and independence in planning and performing work | O | O | O | O | O | O |
| My decisions are reviewed by intelligent technology | O | O | O | O | O | O |
2.5 Alienation from work
For each statement, please indicate the extent to which it does or does not apply to your work.

| | Strongly agree | Agree | Partly/partly | Disagree | Strongly disagree | No indication |
|---|---|---|---|---|---|---|
| Somehow, I think, my work is important | O | O | O | O | O | O |
| My work often makes me feel somehow empty | O | O | O | O | O | O |
| I have the feeling that I am doing something meaningful at work | O | O | O | O | O | O |
| My work is one big treadmill | O | O | O | O | O | O |
| I hardly see the point of my work | O | O | O | O | O | O |
| There is always something to learn from my work | O | O | O | O | O | O |
| My work hardly offers me any variety | O | O | O | O | O | O |
| I lack an overview of what I do at my work | O | O | O | O | O | O |
| Sometimes it is almost uncomfortable for me to say what I am working on | O | O | O | O | O | O |
| Other professions are actually more vital to society than mine | O | O | O | O | O | O |
| The importance of my profession is sometimes overstressed | O | O | O | O | O | O |
Cite this article
Jungtäubl, M., Zirnig, C. & Ruiner, C. HCI driving alienation: autonomy and involvement as blind spots in digital ethics. AI Ethics 4, 617–634 (2024). https://doi.org/10.1007/s43681-023-00298-1