This paper in the journal “Gruppe. Interaktion. Organisation. (GIO)” presents a study that investigated user experience characteristics as determinants of technology acceptance. Organizations planning to implement new technologies are confronted with the challenge of ensuring user acceptance. Poorly accepted technologies are used less often, result in lower job satisfaction, and ultimately lead to performance losses. The technology acceptance model (Venkatesh and Bala 2008) incorporates determinants of information technology use. The model’s predictors focus strongly on interindividual user characteristics (such as computer self-efficacy) and the job context (e.g., voluntariness). Yet what is lacking in the model are characteristics of the technology itself that can serve as starting points for designing better technologies. To bridge this gap, we introduce the User Experience Technology Acceptance Model and provide a first test of it. In our online survey (N = 281), we investigated how technological determinants, more specifically user experience characteristics, affected technology acceptance. Except for two paths of our proposed model, all path coefficients were significant, with small to large effect sizes (f² = 0.02–0.66). User experience predictors explained 60.6% of the variance in perceived ease of use, 38.2% of the variance in perceived usefulness, and 25.8% of the variance in behavioral intention. Our results mostly support our extension of the technology acceptance model. The technology-inherent characteristics output quality, perspicuity, dependability, and novelty were significant predictors of technology acceptance. We discuss theoretical and practical implications with a focus on technology designers, change managers, and users.
The modern workplace is increasingly impacted by advanced technologies (Maier and Kauffeld 2020; Parker and Grote 2020). Applications include, for example, collaborative robots (Steil and Maier 2017, 2020), smart glasses (Paruzel et al. 2020), and data science methods such as machine learning and artificial intelligence (Reich and Samet 2019). In socio-technical systems, employees, technology, and the organization are interconnected. However, when it comes to the design of such technologies, the users are often not consulted in the first place (Friedli and Schuh 2012), which hinders the implementation of user-friendly technology features.
Organizations planning to implement such technologies should ensure that users accept them, as investments in technology implementation can be expensive. Technologies with low user acceptance are used less often (Turner et al. 2010), result in lower job satisfaction (Mariani et al. 2013), and ultimately lead to performance losses (Devaraj and Kohli 2003). In 2007, an international web survey found that 26–34% of IT implementation projects were cancelled or failed (Emam and Koru 2008). Compared to change projects in general, this might seem like a relatively small share of failed projects, as many authors state that about 70% of change projects fail. However, on the one hand, a literature review by Hughes (2011) found that there is no empirical basis for this 70% figure. On the other hand, failed IT projects can entail immense costs because they often require extensive changes to the infrastructure. Thus, even a third of IT implementation projects failing is a number that should be reduced. Among other reasons for failure, respondents of the survey stated that the technology did not work as expected and that end users were not sufficiently involved. This indicates a strong need for companies to increase technology acceptance when implementing technologies.
Our study focused on the technological characteristics that lead to technology acceptance, with the aim of designing technologies in a user-friendly way. The focus on a technology’s features is important because technical problems and the use intensity of a technology are positively related to perceived workload and work pressure, and negatively to perceived job control, which, in turn, is related to increased worker stress (Carayon-Sainfort 1992). We contribute to the literature by extending the well-established technology acceptance model (TAM; Venkatesh and Bala 2008) with user experience (UX) criteria (Laugwitz et al. 2008). We follow the approach of designing aspects of the technology according to the preferences and needs of the users instead of requiring the users to adapt themselves to the technology. In this way, employees can actively shape their working conditions. Companies that adopt our instruments in the technology implementation process not only ensure a high level of technology acceptance but also benefit from the positive effects of employee participation, such as organizational commitment (Han et al. 2010).
1 The technology acceptance model
Organizations adopt information systems to increase efficiency. However, the full potential of these systems, and thus their benefits for companies, can only be exploited if they are actually used. Accordingly, extensive research has been conducted in recent decades to identify factors that predict the use of technology. Since the TAM (Davis 1989; Davis et al. 1989) was postulated about 30 years ago, much research has been done on its psychometric verification (e.g., Adams et al. 1992) as well as on various extensions of the model (e.g., Karahanna and Straub 1999; Venkatesh 2000; Venkatesh and Davis 2000).
Based on the theory of planned behavior (Ajzen 1991; Ajzen and Fishbein 1977), the model is intended to predict the individual use of new technologies (Mathieson 1991; Venkatesh 1999). The TAM assumes that the actual use of a technology, or the intention to use it, is determined in particular by two perceptual processes of the user. On the one hand, use is determined by the perceived usefulness (PU) of the technology. PU is defined as the extent to which a person is convinced that using the technology increases his or her work performance. The second factor is perceived ease of use (PEOU). PEOU is defined as the extent to which a person believes that little or no effort is required to use the technology (Venkatesh and Bala 2008). Through these two determinants, the TAM can explain 60–65% of the variance in the duration, frequency, and intensity of technology use (Venkatesh et al. 2008). In the first step of the present study, we expected to replicate the original TAM.
2 Extensions of the technology acceptance model
In the past, the TAM has been adjusted and extended numerous times. Based on the original TAM (Davis 1989; Davis et al. 1989), two further model extensions (TAM2 and TAM3) were developed that comprise determinants of social influence, system characteristics, and individual attitudes affecting PU and PEOU (Venkatesh and Bala 2008; Venkatesh and Davis 2000). Other studies on antecedents of PU and PEOU examined, for example, top management support for the use of the technology, or the user’s self-efficacy (Abbasi et al. 2011; Lee and Lehto 2013). Investigated antecedents of behavioral intention or use were, for example, the perceived relevance of change, or the amount of information a user has about the technology (Elwood et al. 2006; Pikkarainen et al. 2004). These studies have in common that the investigated antecedents described characteristics of the user, the context, or the task.
Despite the large number of empirical studies on TAM extensions, only a few have investigated technological features as determinants of PU, PEOU, and behavioral intention. Kim et al. (2009), for example, compared the technology acceptance of a variety of features of an audit software that differed in complexity. They conducted a survey study with internal auditors who used the audit software in their work, and found that more complex features, such as the analysis of big data sets, were associated with a decreased PEOU compared to simpler features, such as database queries. Others have examined characteristics of a specific technology, for example, the influence of e‑shopping quality criteria, such as web site design or customer service, on PEOU (Ha and Stoel 2009), or the influence of the content richness and vividness of YouTube videos on PU (Lee and Lehto 2013).
The fact that there are only a few studies on technology-inherent determinants is particularly surprising because in earlier versions of the TAM, Davis (1993) actually assumed system design features to be an antecedent of PU and PEOU. More specifically, he analyzed whether an electronic mail system and a text editor differed in predicting PU and PEOU. In another study, Davis and Bostrom (1993) compared two types of computer interfaces regarding their effect on PEOU. Yet in the TAM2, system features were mostly omitted as determinants (Venkatesh and Davis 2000). The only technology-inherent determinant in later TAM versions is output quality.
3 The user experience technology acceptance model
Following the reasoning of the socio-technical systems approach (Trist and Bamforth 1951), the consideration of technology characteristics is highly relevant to the prediction of technology acceptance. This approach assumes that organizations, technological systems, and employees are not independent of each other. A change in one of these areas also influences the other areas. An optimization therefore cannot take place exclusively in one area; all areas must be optimized together. In the wake of the rapid technological changes triggered by digitalization, the socio-technical systems approach is again moving into the focus of research (e.g., Davies et al. 2017; Maier et al. 2020; Paulsen et al. 2020). Therefore, it is sensible to investigate technological determinants of technology acceptance.
Hornbæk and Hertzum (2017) recently pointed out that the mere information that a technology needs to be useful and easy to use in order to be accepted does not give practitioners enough guidance for selecting or modifying competing technologies. The authors therefore suggested UX as a meaningful extension of the TAM, and reviewed the literature of both approaches to investigate overlapping constructs. They concluded that there is a lack of empirical research combining TAM and UX. That is why we decided to investigate UX characteristics as antecedents of PU and PEOU and propose the User Experience Technology Acceptance Model (UX TAM; see Fig. 1). UX has emerged as a research field in human-computer interaction with the aim of considering not only the functionality and usability of a system, but also its experiential attributes (Hassenzahl 2003). Most UX models differentiate between pragmatic and hedonic characteristics of a technology (e.g., Hassenzahl 2003; Khalid 2006). Pragmatic characteristics describe features that help the user achieve his or her goal. Hedonic attributes make the experience of using a technology exciting and stimulating (Hornbæk and Hertzum 2017).
Consequently, we also split up UX. Besides hedonic quality, we chose to distinguish between functionality and usability as aspects of pragmatic quality. By dividing pragmatic quality, we were able to account for the TAM’s distinction between PU and PEOU. We assumed that PU would be related to functionality criteria, more specifically efficiency and output quality. Efficiency of a technology is given when the user does not have to put unnecessary effort into solving a task (Schrepp 2015). Output quality refers to the “degree to which an individual believes that the system performs his or her job tasks well” (Venkatesh and Bala 2008, p. 277). Furthermore, we expected PEOU to be related to usability criteria, more specifically perspicuity and dependability. Perspicuity refers to the degree to which a technology is easy to understand and its use is easy to learn. Dependability of a technology is given when the user can rely on the technology and is in control of it (Schrepp 2015). Lastly, we assumed that hedonic quality criteria were directly associated with behavioral intention. These criteria were stimulation and novelty. Stimulation refers to the degree to which the use of the technology is exciting and motivating. Novelty of a technology is given when it is innovative and arouses the user’s interest (Schrepp 2015).
Although there is little literature on the overlap between UX and TAM, there are some indications for our assumptions. First, a technology that is designed in accordance with UX is also more likely to allow the user to regulate his or her actions. According to action regulation theory, complete actions comprise five phases, beginning with goal development and ending with the final processing of feedback (e.g., Zacher and Frese 2018). An appropriate UX design supports the user in the early orientation and action planning phases because action alternatives become more transparent and foreseeable, which makes it easier for the user to choose a course of action. Second, there is already some research that supports these theoretical assumptions: In a study on the acceptance of e‑procurement technologies, Brandon-Jones and Kauppi (2018) found that efficiency- and quality-related indicators, such as order processing speed and order accuracy, were related to PU. Usability-related indicators, such as ease of navigation, were associated with both PU and PEOU, but the relation with PEOU was stronger. Kim and Shin (2015) found that the hedonic quality of a smart watch was significantly correlated with a positive attitude toward the technology, which in turn predicted the intention to use it. In conclusion, we derived the following hypotheses:
Hypothesis 1: The (a) efficiency and (b) output quality of a technology are positively related to its perceived usefulness.
Hypothesis 2: The (a) perspicuity and (b) dependability of a technology are positively related to its perceived ease of use.
Hypothesis 3: The (a) stimulation and (b) novelty of a technology are positively related to the user’s behavioral intention.
4 Method
Our sample consisted of N = 281 participants who had recently learned how to use the software R. We recruited the participants by contacting statistics lecturers all over Germany and asking them to distribute the online survey among their students. The participants had a mean age of 23.29 years (SD = 6.12; Min = 18; Max = 61); 69.4% were female, 28.1% male, and 2.5% did not indicate their gender. On average, the participants had used R for 7.30 months (SD = 10.89; Min = 0.25; Max = 84).
We investigated the acceptance of the software R. R is an open source software environment and programming language that can be easily extended with packages covering a wide range of use cases. The participants in our study used it for statistical data analyses but, beyond that, it has increasingly gained popularity for machine learning (e.g., predictive modeling; Kuhn 2008) and artificial intelligence (e.g., natural language processing; Silge and Robinson 2016). The study was administered online. First, the participants completed questionnaires on technology acceptance, and afterwards they answered questions about UX characteristics. All questions were voluntary.
We assessed the technology acceptance criteria PU, PEOU, and use with the respective scales of the TAM3 (Venkatesh and Bala 2008). For reasons of comprehensibility, we decided to measure behavioral intention with the subscale of the unified theory of acceptance and use of technology (UTAUT) instead of the TAM3 (Maruping et al. 2017). As no German translations of either of these instruments existed yet, we translated them ourselves using the collaborative and iterative translation technique (Douglas and Craig 2007; the German items are in the appendix). Additionally, we changed the wording so that the items fit the context of learning R. Except for use, all scales were measured on a seven-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree), with the option prefer not to say. Perceived usefulness and perceived ease of use both consisted of four items. A sample item for perceived usefulness is “Using the system improves my performance in my studies”; a sample item for perceived ease of use is “Interacting with the system does not require a lot of mental effort”. Behavioral intention consisted of three items. A sample item is “I intend to use the system in the next three months”. Technology use was assessed with the question “On average, how much time do you spend on the system each day?”, and participants indicated the time in hours and minutes.
We used the User Experience Questionnaire by Laugwitz et al. (2008) and the output quality scale from the TAM3 to assess UX characteristics. Output quality was measured with three items on a seven-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree), with the option prefer not to say. A sample item is “The quality of the output I get from the system is high”. Of the User Experience Questionnaire, we used the scales efficiency, perspicuity, dependability, stimulation, and novelty. All scales consisted of four items and were measured with a seven-point semantic differential. A sample item for efficiency is “fast—slow”, for perspicuity “clear—confusing”, for dependability “predictable—unpredictable”, for stimulation “motivating—demotivating”, and for novelty “innovative—conservative”.
We analyzed our data with partial least squares (PLS), a composite-based approach to structural equation modeling. For our analyses, PLS structural equation modeling was superior to the more common covariance-based structural equation modeling: first, the authors of the TAM3 also worked with this approach; second, it should be preferred when the goal is theory development as opposed to theory testing (Hair et al. 2017); and third, it is better suited to small sample sizes (Hair et al. 2011). All constructs were modeled using reflective indicators. The software for data analysis was smartPLS (version 3.2.8; Ringle et al. 2015). We conducted bias-corrected and accelerated bootstrapping with 5000 randomly selected subsamples, using the path weighting scheme and pairwise deletion.
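smartPLS performs the bias-corrected and accelerated (BCa) bootstrap internally. As a rough illustration of the resampling logic only, the sketch below runs a plain percentile bootstrap (not the BCa variant) on an ordinary least squares slope with simulated data standing in for a PLS path estimate; none of the numbers are study data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated illustration only -- not the study data
n = 281
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y on x (a stand-in for a PLS path coefficient)."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Resample respondents with replacement 5000 times, as in the study;
# the BCa variant additionally bias- and skew-corrects the interval bounds
boot = np.empty(5000)
for b in range(5000):
    idx = rng.integers(0, n, size=n)
    boot[b] = slope(x[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
# A path counts as significant at alpha = .05 if the interval excludes zero
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")
```

Because each bootstrap sample redraws whole respondents, the interval reflects sampling variability of the coefficient without any normality assumption.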
5 Results

The descriptive statistics, correlations, and internal consistencies of the study variables are depicted in Table 1. All variables were significantly positively related. The lowest, but still significant, correlations were those with the variable use.
5.1 Measurement model
To assess the psychometric properties of the constructs, we investigated reliability, convergent validity, and discriminant validity. The reliability values (Cronbach’s α) were between 0.71 and 0.98 and indicate good to excellent internal consistency (see Table 1). Almost all item loadings were greater than or equal to 0.70; there were three exceptions with loadings between 0.56 and 0.69. Thus, convergent validity was established. To assess discriminant validity, we examined the heterotrait-monotrait ratio (HTMT). It is more sensitive than inspecting cross-loadings or the Fornell-Larcker criterion and is therefore better able to detect a lack of discriminant validity (Henseler et al. 2015). As all HTMT values were below 0.90, discriminant validity was established.
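The reported reliabilities come from the smartPLS output; for readers unfamiliar with the coefficient, Cronbach's α can be computed directly from raw item scores. A minimal sketch with a toy score matrix (not the study data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale sum
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Two perfectly correlated items yield the maximum alpha of 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # -> 1.0
```

Intuitively, α approaches 1 as the items co-vary strongly relative to their individual noise, which is why values of 0.71–0.98, as in Table 1, indicate good to excellent internal consistency.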
5.2 Hypothesis testing
All results of the hypothesis testing are reported in Table 2. Consistent with Venkatesh and Bala (2008), we were able to replicate the original TAM. Regarding our extension of the TAM, we found that only output quality was a further significant predictor of perceived usefulness (H1b: β = 0.35, f² = 0.15). The effect size was medium. Thus, the results support Hypothesis 1b; Hypothesis 1a was rejected. Furthermore, we found that perspicuity (H2a: β = 0.67, f² = 0.66) and dependability (H2b: β = 0.16, f² = 0.04) were significant predictors of perceived ease of use. Perspicuity showed a large effect, dependability a small effect. The results support Hypotheses 2a and 2b. Lastly, we found that novelty, but not stimulation, was a further significant predictor of behavioral intention (H3b: β = 0.22, f² = 0.04). The effect was small. Thus, the results support Hypothesis 3b; Hypothesis 3a was rejected.
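The f² effect size quantifies how much the model's explained variance drops when a single predictor is removed; by Cohen's benchmarks, 0.02, 0.15, and 0.35 mark small, medium, and large effects. As a back-of-the-envelope check, the R²-without value below is back-calculated from the numbers reported here for illustration, not a reported figure:

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f^2: unique contribution of one predictor to explained variance."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# With R^2 = .382 for perceived usefulness and f^2 = .15 for output quality,
# dropping output quality would lower R^2 to roughly .29 (back-calculated)
r2_with = 0.382
f2_output_quality = 0.15
r2_without = r2_with - f2_output_quality * (1 - r2_with)
print(round(r2_without, 3))  # -> 0.289
```

This makes the "medium" label concrete: without output quality, the model would lose about nine percentage points of explained variance in perceived usefulness.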
The UX predictors explained 38.2% of the variance in perceived usefulness, 60.6% of the variance in perceived ease of use, 25.8% of the variance in behavioral intention, and 1.9% of the variance in technology use. The fact that the adjusted R² is almost identical to the original R² indicates that the amount of explained variance is not due to the number of predictors in our model.
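The adjusted R² applies the usual correction for model complexity, and with n = 281 and only a handful of predictors per endogenous construct, that correction is tiny, which is why the two values barely differ. A quick check, assuming p = 3 predictors of perceived usefulness (perceived ease of use, efficiency, and output quality in the proposed model):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2: corrects R^2 for the number of predictors p at sample size n."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# With n = 281 and (assumed) p = 3 predictors of perceived usefulness,
# the adjustment is negligible: 0.382 shrinks only to about 0.375
print(round(adjusted_r2(0.382, 281, 3), 3))  # -> 0.375
```

The penalty term (n − 1)/(n − p − 1) is only 280/277 here, so any substantial gap between R² and adjusted R² would have signaled overfitting rather than genuine explanatory power.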
6 Discussion

The aim of this study was to extend the TAM (Venkatesh and Bala 2008) with user experience characteristics and to test the resulting User Experience Technology Acceptance Model. The results supported most of the expected relationships in the UX TAM. Perceived ease of use and output quality were significantly associated with perceived usefulness. The usability criteria perspicuity and dependability were significantly related to perceived ease of use. Furthermore, there were significant associations between perceived usefulness, perceived ease of use, and the hedonic quality criterion novelty on the one hand and behavioral intention on the other. Lastly, behavioral intention was significantly associated with use. Although there were significant positive correlations between efficiency and perceived usefulness, and between stimulation and behavioral intention, these associations could not be found in the overall model. This indicates that the other predictors were more relevant in explaining perceived usefulness and behavioral intention. When working with the investigated software R, it seems more important that the system has a high output quality and is easy to use than that it operates quickly. This could be different for time-sensitive tasks, for example, when a deadline needs to be met. With regard to behavioral intention, it is conceivable that the novelty-related characteristics—along with the well-established predictors perceived usefulness and ease of use—are more dominant than the stimulative quality because the participants in our study had known R, on average, for only about seven months. Thus, its characterization as new was probably still very prevalent.
6.1 Theoretical implications
This study makes important theoretical contributions. First, we were successful in extending the well-established TAM by technology-inherent characteristics. Therefore, our results support the socio-technical systems approach (e.g., Trist and Bamforth 1951) which claims that in order to achieve a well-functioning system, the social and technical parts should be jointly considered. We showed that characteristics of the technological system affect the social system, in this case the attitude and behavior of the users. This is new because in the original TAM, the technology is seen as given and not susceptible to change.
Second, because the technology features we investigated were UX characteristics, our study closes a research gap that Hornbæk and Hertzum (2017) identified in a recent literature review. They argued that TAM and UX, which are both part of human-computer interaction research, should be combined because they focus on different aspects of technology use that complement each other: The TAM has a stronger focus on utilitarian aspects, whereas UX research investigates the experiential component. A combination of both research strands can help inform further research in both areas.
6.2 Practical implications
The increasing number of advanced technologies in the workplace compels organizations to deal with questions of technology design, change management, and technology acceptance. The results of our study provide guidance in these fields and may be relevant for three target groups in particular: technology designers, change managers, and users. First, our extension of the TAM with specific UX characteristics helps technology designers in their work because they get a clearer picture of how to design acceptable technologies when testing a first version. For example, if users rated the ease of use of a first version of a certain technology as low, the original TAM would not give any definite indication as to what should be changed about the technology. In contrast, the results of our study illustrate that designers should improve the perspicuity and dependability of the technology so that users can, for example, learn how to use it more easily.
Second, change managers can use the UX TAM as a means to facilitate participation in change processes. Employees who have a say during change processes have greater organizational commitment, less anxious affect and thus show less resistance to change (Lines 2004; Reiss et al. 2019). Thus, change managers should include the future users’ opinions in the technology design process (Paruzel et al. 2020). They can do so by using the questions of the UX TAM in early stages of the technology design process, for example, in a workshop with employees who are affected by the technology change. The results can be used to improve the technology, and the employees see that their feedback is valued and implemented.
Third, referring to the users of technologies, the use of the UX TAM in organizations can promote job crafting. Job crafting describes behaviors that employees show to modify their own jobs in a way that they better fit their needs, abilities, and preferences (Tims et al. 2013). Employees who craft their jobs have increased engagement and job satisfaction, and decreased burnout (Tims et al. 2013). The UX characteristics that we added to the TAM help the user to identify which facets of the technology do not meet his or her needs. The user can test different versions of the technology, or different technologies, to find the one that best fits his or her needs and preferences. This is especially relevant when the employee can choose autonomously among a variety of technologies to perform his or her work tasks.
6.3 Limitations and directions for future research
We believe that our study provides important insights into technology-inherent antecedents of technology acceptance. However, there are also some limitations. Our study investigated only the UX and acceptance of the software R. Thus, the results might not be generalizable to other software or hardware. Future studies should replicate our findings with other technologies. However, in contrast to earlier studies (e.g., Ha and Stoel 2009), the investigated UX characteristics can be applied to all kinds of technologies, and are not specific to a single technology, such as criteria for the design of websites.
The generalizability of our results might further be impaired by the sample, which consisted only of students. It is conceivable that results for students and employees differ because students might have more autonomy regarding which software they use for their studies. Therefore, future studies should investigate employees and also add voluntariness as a possible moderating factor.
Furthermore, compared to other TAM extensions, our model explained less variance in perceived usefulness, behavioral intention, and technology use. The TAM3 (Venkatesh and Bala 2008), for example, was able to explain 52–60% of the variance in PU (compared to 38% in our study), 43–45% in PEOU (61% in our study), 40–48% in behavioral intention (26% in our study), and 31–36% in use (2% in our study). One explanation for this result is that context-related and interindividual characteristics are better predictors than technology-inherent characteristics. Another explanation could be that our model of technology-inherent determinants is not yet comprehensive enough. Thus, future research should investigate further technology-inherent predictors of technology acceptance. As an example, future studies could investigate whether technologies that are used in the workplace and have the ability to make decisions (e.g., which task is allocated to whom) are better accepted if they are designed in adherence with organizational justice criteria. For example, it is conceivable that employees prefer a technology that treats every employee equally, allows them to express their views, and explains decisions transparently (Ötting and Maier 2018). Consequently, technologies high in organizational justice would probably be used more often.
With the increasing number of advanced technologies in the workplace, organizations need to make sure that employees accept these technologies. Yet, the well-established TAM (Venkatesh and Bala 2008) lacks information on specific technology-inherent characteristics that affect technology acceptance. Our results showed that a technology that fulfills the UX criteria output quality, perspicuity, dependability, and novelty is more likely to be accepted and consequently used.
Abbasi, M. S., Chandio, F. H., Soomro, A. F., & Shah, F. (2011). Social influence, voluntariness, experience and the internet acceptance: An extension of technology acceptance model within a south-Asian country context. Journal of Enterprise Information Management, 24, 30–52.
Adams, D. A., Nelson, R. R., & Todd, P. A. (1992). Perceived usefulness, ease of use, and usage of information technology: A replication. MIS Quarterly, 16, 227–247.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211.
Ajzen, I., & Fishbein, M. (1977). Attitude-behavior relations: A theoretical analysis and review of empirical research. Psychological Bulletin, 84, 888–918.
Brandon-Jones, A., & Kauppi, K. (2018). Examining the antecedents of the technology acceptance model within e‑procurement. International Journal of Operations & Production Management, 38, 22–42.
Carayon-Sainfort, P. (1992). The use of computers in offices: Impact on task characteristics and worker stress. International Journal of Human-Computer Interaction, 4, 245–261.
Davies, R., Coole, T., & Smith, A. (2017). Review of socio-technical considerations to ensure successful implementation of industry 4.0. Procedia Manufacturing, 11, 1288–1295.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13, 319–340.
Davis, F. D. (1993). User acceptance of information technology: System characteristics, user perceptions and behavioral impacts. International Journal of Man-Machine Studies, 38, 475–487.
Davis, S. A., & Bostrom, R. P. (1993). Training end users: An experimental investigation of the roles of the computer interface and training methods. MIS Quarterly, 17, 61–85.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35, 982–1003.
Devaraj, S., & Kohli, R. (2003). Performance impacts of information technology: Is actual usage the missing link? Management Science, 49, 273–289.
Douglas, S. P., & Craig, C. S. (2007). Collaborative and iterative translation: An alternative approach to backtranslation. Journal of International Marketing, 15, 30–43.
Elwood, S., Changchit, C., & Cutshall, R. (2006). Investigating students’ perceptions on laptop initiative in higher education. Campus-Wide Information Systems, 23, 336–349.
Emam, K. E., & Koru, A. G. (2008). A replicated survey of IT software project failures. IEEE Software, 25, 84–90.
Friedli, T., & Schuh, G. (2012). Wettbewerbsfähigkeit der Produktion an Hochlohnstandorten [Competitiveness of production at high-wage locations]. Berlin, Germany: Springer.
Ha, S., & Stoel, L. (2009). Consumer e‑shopping acceptance: Antecedents in a technology acceptance model. Journal of Business Research, 62, 565–571.
Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2017). A primer on partial least squares structural equation modeling (PLS-SEM) (2nd ed.). Los Angeles, CA: SAGE.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet. Journal of Marketing Theory and Practice, 19, 139–152.
Han, T.-S., Chiang, H.-H., & Chang, A. (2010). Employee participation in decision making, psychological ownership and knowledge sharing: Mediating role of organizational commitment in Taiwanese high-tech organizations. The International Journal of Human Resource Management, 21, 2218–2233.
Hassenzahl, M. (2003). The thing and I: Understanding the relationship between user and product. In M. A. Blythe, K. Overbeeke, A. F. Monk & P. C. Wright (Eds.), Funology: From usability to enjoyment (pp. 31–42). Dordrecht, Netherlands: Springer Netherlands.
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43, 115–135.
Hornbæk, K., & Hertzum, M. (2017). Technology acceptance and user experience: A review of the experiential component in HCI. ACM Transactions on Computer-Human Interaction, 24(5), 33.
Hughes, M. (2011). Do 70 per cent of all organizational change initiatives really fail? Journal of Change Management, 11, 451–464.
Karahanna, E., & Straub, D. W. (1999). The psychological origins of perceived usefulness and ease-of-use. Information & Management, 35, 237–250.
Khalid, H. M. (2006). Embracing diversity in user needs for affective design. Applied Ergonomics, 37, 409–418.
Kim, K. J., & Shin, D.-H. (2015). An acceptance model for smart watches. Internet Research, 25, 527–541.
Kim, H.-J., Mannino, M., & Nieschwietz, R. J. (2009). Information technology acceptance in the internal audit profession: Impact of technology features and complexity. International Journal of Accounting Information Systems, 10, 214–228.
Kuhn, M. (2008). Building predictive models in R using the caret package. Journal of Statistical Software, 28(5), 1–26.
Laugwitz, B., Held, T., & Schrepp, M. (2008). Construction and evaluation of a user experience questionnaire. In A. Holzinger (Ed.), 4th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society (pp. 63–76). Graz, Austria.
Lee, D. Y., & Lehto, M. R. (2013). User acceptance of YouTube for procedural learning: An extension of the technology acceptance model. Computers & Education, 61, 193–208.
Lines, R. (2004). Influence of participation in strategic change: Resistance, organizational commitment and change goal achievement. Journal of Change Management, 4, 193–215.
Maier, G. W., & Kauffeld, S. (2020). Digitalisierte Arbeitswelt [Digitalized work environment]. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO), 51, 1–4.
Maier, G. W., Steffen, E., & Engels, G. (2020). Einleitung [Introduction]. In G. W. Maier, G. Engels & E. Steffen (Eds.), Handbuch Gestaltung digitaler und vernetzter Arbeitswelten [Handbook for the design of digital and interconnected work environments] (pp. 3–17). Berlin: Springer.
Mariani, M. G., Curcuruto, M., & Gaetani, I. (2013). Training opportunities, technology acceptance and job satisfaction: A study of Italian organizations. Journal of Workplace Learning, 25, 455–475.
Maruping, L. M., Bala, H., Venkatesh, V., & Brown, S. A. (2017). Going beyond intention: Integrating behavioral expectation into the unified theory of acceptance and use of technology. Journal of the Association for Information Science and Technology, 68, 623–637.
Mathieson, K. (1991). Predicting user intentions: Comparing the technology acceptance model with the theory of planned behavior. Information Systems Research, 2, 173–191.
Ötting, S. K., & Maier, G. W. (2018). The importance of procedural justice in human-machine interactions: Intelligent systems as new decision agents in organizations. Computers in Human Behavior, 89, 27–39.
Parker, S. K., & Grote, G. (2020). Automation, algorithms, and beyond: Why work design matters more than ever in a digital world. Applied Psychology. Advance online publication. https://doi.org/10.1111/apps.12241
Paruzel, A., Bentler, D., Schlicher, K., Nettelstroth, W., & Maier, G. W. (2020). Employees first, technology second: Implementation of smart glasses in a manufacturing company. Zeitschrift für Arbeits- und Organisationspsychologie, 64, 46–57.
Paulsen, H., Zorn, V., Inkermann, D., Reining, N., Baschin, J., Vietor, T., & Kauffeld, S. (2020). Soziotechnische Analyse und Gestaltung von Virtualisierungsprozessen. Ein Fallbeispiel zur virtuellen Inbetriebnahme [Socio-technical system analysis and design of virtualization processes. A report on practice regarding virtual initial start-up]. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO), 51, 81–93.
Pikkarainen, T., Pikkarainen, K., Karjaluoto, H., & Pahnila, S. (2004). Consumer acceptance of online banking: An extension of the technology acceptance model. Internet Research, 14, 224–235.
Reich, C., & Samet, A. (2019). HALFBACK Project: The use of machine learning to achieve high-availability in production. In A. Christ & F. Quint (Eds.), Artificial intelligence: From research to application (pp. 7–13), Offenburg, Germany.
Reiss, S., Prentice, L., Schulte-Cloos, C., & Jonas, E. (2019). Organisationaler Wandel als Bedrohung – von impliziter Angst zur Annäherung durch prozedurale Gerechtigkeit [Organizational change as threat—from implicit anxiety to approach through procedural justice]. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO), 50, 145–161.
Ringle, C. M., Wende, S., & Becker, J.-M. (2015). smartPLS 3. Boenningstedt: smartPLS GmbH. Retrieved July 30, 2020, from http://www.smartpls.com
Schrepp, M. (2015). User experience questionnaire handbook. All you need to know to apply the UEQ successfully in your project. Retrieved July 30, 2020, from https://www.ueq-online.org/Material/Handbook.pdf
Silge, J., & Robinson, D. (2016). tidytext: Text mining and analysis using tidy data principles in R. The Journal of Open Source Software, 1(3), 37.
Steil, J. J., & Maier, G. W. (2017). Robots in the digitalized workplace. In G. Hertel, D. Stone, R. Johnson & J. Passmore (Eds.), The Wiley Blackwell Handbook of the Psychology of the Internet at Work (pp. 403–422). Chichester: Wiley-Blackwell.
Steil, J. J., & Maier, G. W. (2020). Kollaborative Roboter: Universale Werkzeuge in der digitalisierten und vernetzten Arbeitswelt [Collaborative robots: Universal tools in a digitized and interconnected work environment]. In G. W. Maier, G. Engels & E. Steffen (Eds.), Springer Reference Psychologie. Handbuch Gestaltung digitaler und vernetzter Arbeitswelten [Handbook for the design of digital and interconnected work environments] (pp. 323–346). Berlin, Germany: Springer.
Tims, M., Bakker, A. B., & Derks, D. (2013). The impact of job crafting on job demands, job resources, and well-being. Journal of Occupational Health Psychology, 18, 230–240.
Trist, E. L., & Bamforth, K. W. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4, 3–38.
Turner, M., Kitchenham, B., Brereton, P., Charters, S., & Budgen, D. (2010). Does the technology acceptance model predict actual use? A systematic literature review. Information and Software Technology, 52, 463–479.
Venkatesh, V. (1999). Creation of favorable user perceptions: Exploring the role of intrinsic motivation. MIS Quarterly, 23, 239–260.
Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research, 11, 342–365.
Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39, 273–315.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46, 186–204.
Venkatesh, V., Brown, S. A., Maruping, L. M., & Bala, H. (2008). Predicting different conceptualizations of system use: The competing roles of behavioral intention, facilitating conditions, and behavioral expectation. MIS Quarterly, 32, 483–502.
Zacher, H., & Frese, M. (2018). Action regulation theory. Foundations, current knowledge and future directions. In D. Ones, N. Anderson, C. Viswesvaran & H. Sinangil (Eds.), The SAGE Handbook of Industrial, Work & Organizational Psychology (pp. 122–143). London: SAGE.
This research was funded by the German Federal Ministry of Education and Research and the European Social Fund (grant number 01FI16133).
Open Access funding provided by Projekt DEAL.
Mlekus, L., Bentler, D., Paruzel, A. et al. How to raise technology acceptance: user experience characteristics as technology-inherent determinants. Gr Interakt Org 51, 273–283 (2020). https://doi.org/10.1007/s11612-020-00529-7
- Technology acceptance
- User experience
- Socio-technical systems