1 Introduction

The research laboratory OpenAI released ChatGPT in November 2022 (OpenAI, 2023). Yet, only a year after its launch, it has caused an enormous revolution in many areas of our lives. This tool, available to users free of charge, has gained immense popularity, attracting one million users in its first few days of operation (De Angelis et al., 2023; Dowling & Lucey, 2023), a feat no other online platform had achieved before. ChatGPT belongs to the family of conversational agents (Car et al., 2020), commonly referred to as chatbots; its capabilities, however, significantly exceed those of previously known chatbots. ChatGPT is a large language model (LLM) built on the GPT framework (Cascella et al., 2023). The freely accessible model runs on the GPT-3.5 architecture (Burger et al., 2023). An upgraded GPT-4 model, available under a subscription plan since March 2023 (De Angelis et al., 2023), also accepts images as input (Zhang et al., 2023), whereas the free version handles only text and numeric data. Both the free and the paid versions of ChatGPT support operations such as text summarization, question answering, translation, sentiment analysis, text completion, and data classification (Zhang et al., 2023).

The introduction of ChatGPT had numerous implications for artificial intelligence (AI) in science and business. Although AI tools, including natural language processing, have been part of everyday life for years, it was only the launch of ChatGPT that drew the attention of the public and scientists to the opportunities and threats of using AI in work, business, academia, and science. AI has long been implemented in email systems to filter spam messages (Ahmed et al., 2022), and in 2019 Google deployed the BERT algorithm in its search engine to support natural language understanding in user queries (Buche, 2020). However, never before had such a technologically advanced AI tool been made available to so broad an audience, nor had public discourse emphasized that artificial intelligence was responsible for its functionalities. Some researchers have even announced that these tools may cause a revolution in science and the academic community (Haque et al., 2022; Lund et al., 2023).

ChatGPT's public release has also triggered a very dynamic development of other AI-based models and software. A race among technology companies working on AI solutions has begun (Zhang et al., 2023). Right after the launch of ChatGPT, Microsoft invested in its development and integrated the technology into the Bing search engine. In March 2023, Google released its Bard chatbot as a direct response to ChatGPT. Since the beginning of 2023, numerous work-support solutions have been created at remarkable speed, with ChatGPT and other language models serving as copilot tools within them. Attention has also turned to other developments, such as the DALL-E algorithm (Lund et al., 2023; Reddy et al., 2021), based on GPT-3, or the Midjourney tool, which generates images from text descriptions.

The emergence of ChatGPT has sparked numerous debates in the academic community about its capabilities and the opportunities and challenges it creates, with more publications focusing on the perspective of students than on that of university employees (Emenike & Emenike, 2023). In the course of our literature review, we found no existing studies on the acceptance of this technology in the academic environment. This study aims to fill that gap by examining the acceptance of ChatGPT among the academic staff of Polish universities. Our cross-sectional approach records the attitudes of Polish higher education faculty members toward the use of ChatGPT at a specific point in time, between April 25 and May 25, 2023, providing a snapshot of the study population. Data was collected during the first half-year of the application’s operation, capturing the early attitudinal phase and identifying the key variables for tool acceptance.

It should be noted that we did not investigate the types of activities that scientists perform using ChatGPT. This decision was motivated by the fact that, at such an early stage of ChatGPT’s availability, there were numerous heated discussions about the legality and ethics of using it in research and teaching, owing to its tendency to hallucinate. As university employees, we know that the role of an academic combines the responsibilities of a researcher, conducting experiments and writing research papers, and of a teacher, working on the curriculum and tasks for students. Therefore, in the rest of the paper, when referring to the tasks of an academic teacher, we mean both teaching and research work. We do not focus on scientists’ experience of working with this tool in specific fields of activity (research work, preparation of teaching materials). The aim of the article is to examine the attitude of scientists in Poland towards the use of ChatGPT for professional purposes. To achieve this objective, the authors have set one research question (RQ): What factors affect academics’ intention to use an artificial intelligence tool such as ChatGPT?

By answering this research question, this paper contributes to both the theory and practice of AI usage in higher education. The literature review results in structured knowledge about good and bad practices of ChatGPT application by academics in their research and teaching work. It provides a realistic view of the tool, showing that, with proper control over the content it generates, ChatGPT can be a helpful assistant. From the perspective of practice, we provide recommendations for academics on how to introduce ChatGPT into their work routine, gradually becoming familiar with it and exploring all the possibilities (both research and didactic) it can offer.

2 Literature review

Chatbots such as ChatGPT or Bard generate texts or continue statements based on queries such as a prompt or a seed text (Burger et al., 2023). These language models can also change the tone of generated answers or input texts, e.g., to a more formal, scientific, or business one, thereby adapting it to the questioner’s needs. They can also perform an organizational function, creating summaries of email threads and, thanks to integration with other tools, even of audio or video meetings. Programmers, in turn, readily use them as support in writing, verifying, and processing code.

In education and science, however, the most valuable functions are the ability to summarize scientific papers, write fragments or entire papers, and prepare teaching materials. Below, we consider the possible applications of ChatGPT in supporting academics in their teaching and research tasks, focusing on three major fields: the preparation of educational materials, the preparation of scientific research, and text writing and correction.

In the field of educational materials preparation, researchers point out the opportunities that ChatGPT brings to academics. The authors often mention the creation of auxiliary and supplementary materials (Emenike & Emenike, 2023; Farrokhnia et al., 2023; Lim et al., 2023), such as quizzes (Cooper, 2023; Farrokhnia et al., 2023) or flashcards (Khan et al., 2023), the development of test and exam sheets (Cotton et al., 2024; Ivanov & Soliman, 2023), and the generation of incorrect answer options for tests (Emenike & Emenike, 2023). The tool can also be used to translate materials or create summaries (Emenike & Emenike, 2023; Khan et al., 2023), or to find information on a particular topic (Farrokhnia et al., 2023). The potential of ChatGPT to generate ideas for lesson plans (Farrokhnia et al., 2023; Khan et al., 2023), as well as to develop course descriptions (Emenike & Emenike, 2023; Ivanov & Soliman, 2023), including the use of specific teaching methods or models (Cooper, 2023), is also emphasized in the research works. Finally, ChatGPT is considered helpful in evaluating students’ work (Cotton et al., 2024; Farrokhnia et al., 2023; Ivanov & Soliman, 2023; Khan et al., 2023), since the LLM can analyze both the linguistic correctness and the clarity of a text (Ivanov & Soliman, 2023; Khan et al., 2023). It can also provide feedback on students’ work (Farrokhnia et al., 2023) and draft responses to emails and announcements addressed to students (Emenike & Emenike, 2023). As potential disadvantages of AI usage in educational materials preparation, the threats of plagiarism, content copying, and lack of creativity are mentioned most frequently (Choi et al., 2023; Cotton et al., 2024).

In preparing scientific research, the discussion on ChatGPT usage is rather heated. On the one hand, researchers emphasize that some academics may lack the skills to use AI tools and thus feel reluctant or even afraid to use them (Burger et al., 2023). It has also been observed that ChatGPT can create untrue content with a high degree of apparent credibility (Cascella et al., 2023), provide non-existent evidence (Ariyaratne et al., 2023; De Angelis et al., 2023), add non-existent bibliographic information to support the veracity of a cited paper (Day, 2023; De Angelis et al., 2023; Macdonald et al., 2023), or build false references using real journal names and credible-sounding titles, which can be hard to detect (De Angelis et al., 2023). Furthermore, ChatGPT does not understand the context of statements (Farrokhnia et al., 2023) and cannot answer more abstract questions, as confirmed by its creators (OpenAI, 2023). It may also introduce certain simplifications in data analysis (Burger et al., 2023). ChatGPT is likewise incapable of deduction, has limited mathematical skills (Frieder et al., 2023), and does not assess data reliability well (Farrokhnia et al., 2023). Since its knowledge is limited to data collected up to a specific point in the past (Zielinski et al., 2023), it may contain errors (Burger et al., 2023; Carvalho & Ivanov, 2024; Cascella et al., 2023).

On the other hand, researchers notice the opportunities offered by using AI in scientific work, such as improving its effectiveness, relevance, and timeliness, owing to the possibility of keeping up with trends and recently published works (De Angelis et al., 2023; Zhang et al., 2023). AI can improve research methods (Burger et al., 2023) by selecting appropriate statistical tests and generating code for data analysis (Macdonald et al., 2023), identifying knowledge gaps, supporting data organization by generating tables or graphs, explaining results, identifying patterns and trends, suggesting interpretations, and checking the consistency of results (Cheng et al., 2023). Its research suggestions can also go beyond the perspectives of individual researchers (Ivanov & Soliman, 2023), which can help eliminate the problem of bias in the interpretation of results (Burger et al., 2023). It has also been pointed out that AI could act as a research assistant (Dowling & Lucey, 2023; Ivanov & Soliman, 2023), helping to identify research trends in a declared field of science (Heaven, 2018), showing trends in grants to find funding opportunities for scientific projects, or extracting information directly from scientific works (Gusenbauer, 2023).

In text writing and correction, it has been shown that tools such as ChatGPT can quickly write text that sounds academic (Lim et al., 2023), support writing by generating titles, abstracts, and paraphrases (Isaeva, 2022), and improve the organization of scientific literature and the structure of texts, which could make it easier to search and analyze existing literature (Arif et al., 2023) and to summarize it (Arif et al., 2023; Gao et al., 2023). ChatGPT can also save researchers’ and editors’ time by supporting them in creating metadata and indexing (Lund et al., 2023), and by making texts understandable to the public through simplified language (Cascella et al., 2023). In addition, it can support authors in meeting various journal guidelines by adapting a manuscript’s formatting (Lund et al., 2023). Tasks such as describing a study’s method and results (Macdonald et al., 2023), editing text and verifying its clarity (Cheng et al., 2023; Macdonald et al., 2023), suggesting alternative wording or translations (Lund et al., 2023), pointing out inconsistencies in the text, and giving examples of well-written chapters (Cheng et al., 2023) are well suited to AI used in academic work.

However, it is the role of the researcher to accept the ideas, abstracts, or text formatting suggested by ChatGPT. The researcher is also responsible for correcting and verifying the generated text. Models do not have the knowledge or experience needed to communicate scientific concepts properly or verify the credibility of information (Wittmann, 2023). Those issues, commented on broadly in the literature, may be considered a starting point for the research conducted and presented in this paper on the acceptance of AI usage by Polish university faculty members.

3 Methodology

The widespread adoption of technology in our daily lives has generated rapidly growing interest in understanding the dynamics of its acceptance and use. “The Unified Theory of Acceptance and Use of Technology” (UTAUT), established in 2003 by Venkatesh et al., offers a well-regarded framework for interpreting such behavior. The model aggregates user experiences and integrates concepts that form the theoretical basis of the acceptance of information systems by users (Yu et al., 2021). It comprises four key elements, “Performance Expectancy”, “Effort Expectancy”, “Social Influence”, and “Facilitating Conditions”, all of which significantly shape an individual’s intention to use a certain technology. The model also identifies gender, age, and experience as crucial moderators.

An enhanced version, UTAUT2, was subsequently proposed by Venkatesh et al. in 2012, incorporating three additional elements: “Hedonic Motivation”, “Price Value”, and “Habit”. This refined model, developed through robust empirical research, is vital for comprehending what drives the adoption and usage of emergent technologies in various contexts (Tamilmani et al., 2021). UTAUT2 has been employed effectively in university and academic settings to identify the factors influencing higher education workers’ intentions to use various technological instruments, such as online learning platforms (Samsudeen & Mohamed, 2019), mobile applications (S. Hu et al., 2020), and learning management systems (LMS) (Raman & Don, 2013; Raza et al., 2022; Zwain, 2019). Recent research has further explored how these factors are shaped by specific contexts, such as the COVID-19 period (Edumadze et al., 2023; Osei et al., 2022), enriching the design and implementation of research and education technology tools.

Our research postulates that the seven constructs of UTAUT2 – “Performance Expectancy”, “Effort Expectancy”, “Social Influence”, “Facilitating Conditions”, “Hedonic Motivation”, “Price Value”, and “Habit” – significantly influence the “Behavioral Intention” of researchers to utilize ChatGPT in academic work. We broaden the scope of the well-grounded UTAUT2 framework by incorporating “Personal Innovativeness” (PI) as a factor influencing behavioral intention toward ChatGPT usage. Personal Innovativeness, an individual’s predisposition toward independently exploring and adopting innovative IT developments, is acknowledged as an impactful element in technology adoption (Agarwal & Prasad, 1998). Many researchers affirm the substantial role of personality traits like PI in technology adoption, especially within the IT sector (Sitar-Taut & Mican, 2021). This characteristic is viewed as stable, context-specific, and a potent influence on the acceptance and adoption of IT (Strzelecki, 2024; Twum et al., 2022).

3.1 Hypotheses development

This study investigates the effects of the UTAUT2 variables, along with PI, on the behavioral intention of researchers to utilize generative AI in the form of ChatGPT, and assesses its utility in facilitating scholarly pursuits. The objective includes not only identifying these factors but also probing how researchers’ views of ChatGPT usage influence its sustained application.

Performance Expectancy (PE) stands as a significant determinant of individuals’ behavioral intention toward new technology adoption. The term refers to the perceived usefulness, or the degree to which individuals believe that utilizing a system will enhance their job or learning performance (Venkatesh et al., 2003). PE has emerged as a crucial component in studying the application of novel technologies. In the educational context, El-Masri and Tarhini (2017) emphasized the substantial, direct and positive role PE plays in the adoption of educational systems. Empirical evidence of PE’s significant influence on the “Behavioral Intention” of academics to embrace innovative educational tools such as Google Classroom (Kumar & Bervell, 2019), virtual learning environments (Gunasinghe & Nanayakkara, 2021) and LMS (Raman & Don, 2013) has been well documented.

H1: “Performance Expectancy has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”

Effort Expectancy (EE), as characterized by Venkatesh et al. (2003), signifies the perceived ease or effort involved in employing technology. It encompasses constructs such as Perceived Ease of Use and Complexity. EE has been identified as an essential determinant in technology acceptance, exerting a direct effect on the “Behavioral intention” towards technology usage. Recent research confirms the positive, direct and significant role of EE in shaping “Behavioral Intention” in various university contexts, including the adoption of mobile technology (S. Hu et al., 2020), software engineering tools (Wrycza et al., 2017) and software for LMS (Raza et al., 2022).

H2: “Effort Expectancy has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”

Social Influence (SI) is the degree to which important people, including relatives, peers, and others, think a person should utilize a specific technology (Venkatesh et al., 2003). The impact of such social networks is observed to enhance users’ intention to employ technology. Often referred to as a social or subjective norm in prior studies, SI stands as a statistically significant, direct and positive determinant of users’ “Behavioral Intention” toward specific technology usage. This is illustrated in various academic contexts, such as the adoption of MOOCs (Tseng et al., 2022), ICT acceptance (Oye et al., 2014) and LMS (Raman & Don, 2013).

H3: “Social Influence has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”

Facilitating Conditions (FC), according to Venkatesh et al. (2003), denote the degree of accessibility of the resources and support necessary to accomplish a task. This construct has been extensively researched in technology adoption, highlighting its pivotal role across various IT fields. Within university contexts, FC underscores the importance of reliable technical infrastructure, knowledge resources, library access, and IT tools, factors that can influence academics’ inclination to use them to enhance their work. Numerous studies have acknowledged FC as a significant, direct and positive predictor of “Behavioral Intention” and “Use Behavior” among academics, and as one of the strongest determinants of the extent of technology use. FC’s crucial role has been observed in engagement with academic virtual communities (Nistor et al., 2014) and in the use of communication and collaboration tools among academic staff (Maican et al., 2019).

H4: “Facilitating Conditions have a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”

H5: “Facilitating Conditions have a positive direct influence on the ChatGPT Use Behavior in academic work.”

The enjoyment or pleasure derived from technology use is known as Hedonic Motivation (HM), and it strongly influences users’ intentions (Venkatesh et al., 2012). Existing research suggests an increased likelihood of continued technology use if users derive enjoyment from it. In the realm of information systems, Hedonic Motivation has been observed to directly and positively impact technology usage (Tamilmani, Rana, Prakasam, & Dwivedi, 2019b). Thus, perceiving a system as enjoyable and entertaining typically encourages its adoption and use. In the university setting, HM emerges as a key predictor of behavioral intention, particularly in relation to technology implementation. Its significance is well documented in university contexts such as MOOC adoption (Meet et al., 2022) and LMS use (Raman & Don, 2013). Thus, the following is suggested:

H6: “Hedonic Motivation has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”

Venkatesh et al. (2012) define Price Value (PV) as a person’s perception of the trade-off between the financial cost of using a system and its benefits. As a crucial determinant of “Behavioral Intention” toward technology usage, PV substantially influences the decision to adopt new technology (Tamilmani et al., 2018). Numerous studies affirm the significant positive and direct effect PV has on the “Behavioral Intention” to adopt technologies such as e-learning (Mehta et al., 2019) and mobile learning (Azizi et al., 2020). Some research has reframed PV as “Learning Value”, representing the perceived worth of the time and effort invested in learning (Ain et al., 2016; Dajani & Abu Hegleh, 2019; Farooq et al., 2017). This construct impacts academics’ intention to leverage new technology for scholarly endeavors (Zwain, 2019). Hence, a hypothesis is put forward:

H7: “Price Value has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”

Habit (HT) is the degree to which a person is predisposed to perform behaviors automatically, given their previous knowledge of and familiarity with the technology. Limayem et al. (2007) and Venkatesh et al. (2012) conceptualize HT as a perceptual variable that has been identified as an important, direct and positive predictor of “Behavioral Intention” and “Use Behavior” (Tamilmani, Rana, & Dwivedi, 2019a). Further, HT has been observed to positively impact academics’ “Behavioral Intention” toward technology use, specifically within teaching and learning processes (Al-Mamary, 2022), e-learning platform use (Zacharis & Nikolopoulou, 2022), and the use of platforms like Google Classroom (Alotumi, 2022).

H8: “Habit has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”

H9: “Habit has a positive direct influence on the ChatGPT Use Behavior in academic work.”

The literature abounds with thorough examinations of the link between Personal Innovativeness (PI) and the adoption and use of technology in the IT sector (Slade et al., 2015). PI is defined as a predisposition to embrace cutting-edge technological innovations, exhibiting an inclination for the risk-taking associated with trialing new IT advancements (Farooq et al., 2017). Studies have incorporated PI into the UTAUT2 model within the context of university work, investigating the adoption of e-learning platforms (Twum et al., 2022), animation usage among university students (Dajani & Abu Hegleh, 2019), and distance learning during the pandemic (Sitar-Taut & Mican, 2021).

H10: “Personal Innovativeness has a positive direct influence on the Behavioral Intention to use ChatGPT in academic work.”

Behavioral Intention (BI) has become a cornerstone in investigating technology adoption and utilization behaviors (Park et al., 2012). This concept captures an individual’s readiness and intent to employ a specific technology for a given task (Venkatesh et al., 2003, 2012). Previous studies affirm BI’s positive and direct role in shaping actual technology usage (Aldossari & Sidorova, 2020; Gansser & Reich, 2021). The UTAUT2 model posits that seven factors can affect BI. In our research context, we probe BI to understand academics’ inclination to employ ChatGPT in their professional duties. A thorough exploration of the link between BI and actual “Use Behavior” offers valuable insights into the elements fostering or impeding technology adoption within the university landscape.

H11: “Behavioral Intention has a positive direct influence on the ChatGPT Use Behavior in academic work.”

Use Behavior (UB) captures users’ actual acceptance and usage patterns of technology. According to the model, actual Use Behavior denotes the use of technology for a specific purpose. The effectiveness of the UTAUT2 model in forecasting actual UB across various contexts is supported by empirical research, including applications in e-learning platforms (Zacharis & Nikolopoulou, 2022) and mobile education (Arain et al., 2019). However, Venkatesh et al. (2012) stopped short of specifying how actual use should be assessed. In this research, ChatGPT usage is quantified on a seven-point scale ranging from “never” to “several times a day”.

Demographics play a significant role in shaping users’ acceptance of new products or technologies (Mustafa & Zhang, 2022). This study investigates the moderating influence of the demographic factors Gender and Age on hypotheses 1 to 9. Research on the impact of demographics has centered mainly on the adoption of novel technologies (Strzelecki & ElArabawy, 2024). Prior research (Hu et al., 2020; Mustafa et al., 2022; Teo et al., 2012) has examined how demographic factors, such as gender, age, occupation, and experience, influence the adoption of new technologies. Consequently, it is imperative to examine these demographic factors as moderators of all the relationships established in the core UTAUT2 model. Thus, considering moderation, we offer the following hypotheses:

H12: “There is a moderating effect of Gender on Behavioral Intention to use ChatGPT in academic work.”

H13: “There is a moderating effect of Age on Behavioral Intention and ChatGPT Use Behavior in academic work.”

3.2 Model

According to Venkatesh et al. (2016), UTAUT2 should serve as the baseline model for developing hypotheses about the relationships between designated factors and technology adoption. Dwivedi et al. (2019) further noted that past studies often underused UTAUT2, frequently overlooking its moderators. This research responds to that gap by using a tailored version of UTAUT2, retaining only age and gender as moderators.

This research used a model that included all variables from the widely used UTAUT2 scale to measure technology acceptance (see Fig. 1). We extended the model by including PI from the research of Agarwal and Prasad (1998) and used two moderating variables, age and gender. Data collection employed a seven-point Likert scale, ranging from “strongly disagree” to “strongly agree”, and usage behavior was gauged on a seven-point scale from “never” to “several times a day”. Table 1 contains the descriptive statistics and the measurement scale.

Fig. 1

A proposition for an extended UTAUT2 model for ChatGPT

Table 1 Measurement scale and factor loadings, means, and standard deviation (SD)

3.3 Sample characteristics

Each construct in the study satisfied the reliability and validity standards, and discriminant validity was verified, with the scale recently tested in a study of ChatGPT use among students (Strzelecki, 2024). Hair et al. (2022) stated that research employing the PLS-SEM method requires at least 189 samples in order to detect R2 values of a minimum of 0.1 at a 5% level of significance. Furthermore, a statistical power of no less than 95% is typically desired in social science research, as proposed by Arnold (1990).

At the end of 2021, the number of academics employed by universities in Poland stood at 99,950 (RAD-on, 2023). To establish the sample size for a population of 99,950, with a 95% confidence level and a 5% margin of error, the formula “n = (z^2 * p * (1-p)) / e^2” (Yamane, 1967) was used. In the formula, “n” is the sample size, “z” is the z-score corresponding to the confidence level (1.96 for a 95% confidence level), “p” is the estimated proportion of the population with the desired characteristic (set to 0.5 to yield the maximum sample size), and “e” is the margin of error (0.05). Consequently, the minimum sample size calculated was 385.
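The calculation above can be reproduced with a short script; the constants (z = 1.96, p = 0.5, e = 0.05) follow directly from the text:

```python
import math

def min_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Minimum sample size n = z^2 * p * (1 - p) / e^2, rounded up."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

# 1.96^2 * 0.25 / 0.0025 = 384.16, rounded up to the next whole respondent
print(min_sample_size())  # 385
```

With p = 0.5 the product p(1 − p) is maximized, which is why this choice yields the most conservative (largest) required sample.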

The study utilized a web survey built in Google Docs, which was disseminated to ten major Polish universities: Adam Mickiewicz University in Poznan, the University of Lodz, the University of Gdansk, Jan Kochanowski University in Kielce, the University of Warsaw, the University of Rzeszow, Jagiellonian University, Nicolaus Copernicus University in Torun, the University of Zielona Góra, and the University of Szczecin. Between April 25 and May 25, 2023, academics at these institutions were invited by email to take part in the survey.

Participants were guaranteed the confidentiality of their answers and the voluntary nature of their involvement. After eliminating eight responses with zero variance, a total of 629 valid responses remained, satisfying the minimum sample size requirement. The demographic distribution of the sample was 308 males (49.0%), 290 females (46.1%), and 31 respondents (4.9%) who preferred not to disclose their gender. The participants’ average age was 45.3 years (SD = 11.36), with a median of 45 years.

4 Results

Using path weighting with default initial weights and a limit of 3000 iterations, we estimated the model with the PLS-SEM algorithm in SmartPLS 4 (Ringle et al., 2022). The statistical significance of the results was computed using nonparametric bootstrapping with 5000 samples. Indicators with loadings over 0.7, indicating that the construct explains more than 50% of the indicator’s variance, were deemed to have acceptable item reliability; the exception was FC4, which was eliminated (Table 1).

We evaluated reliability through composite reliability, with scores between 0.70 and 0.95 demonstrating acceptable to good reliability (Hair et al., 2022). We also evaluated internal consistency using Cronbach’s alpha, applying similar thresholds. Additionally, we calculated Dijkstra and Henseler’s reliability coefficient (ρA) as a more accurate alternative (Dijkstra, 2014; Dijkstra & Henseler, 2015). The convergent validity of the measurement models was evaluated by computing the average variance extracted (AVE) over all associated items for each reflective variable, with an AVE threshold of 0.50 or greater accepted (Sarstedt et al., 2022). All measurements met the quality criteria (see Table 2).
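As an illustration of the criteria above, the following minimal sketch computes Cronbach’s alpha from an item-response matrix and the AVE from standardized loadings. The response matrix and the loadings are illustrative values, not the study’s data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale score
    return k / (k - 1) * (1 - item_vars / total_var)

def ave(loadings: list[float]) -> float:
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l**2 for l in loadings) / len(loadings)

# Illustrative data: four items answered by five respondents on a 7-point scale
responses = np.array([
    [5, 6, 5, 6],
    [3, 3, 4, 3],
    [7, 6, 7, 7],
    [4, 4, 5, 4],
    [6, 5, 6, 6],
])
print(round(cronbach_alpha(responses), 3))        # 0.966
print(ave([0.82, 0.79, 0.88]) >= 0.50)            # meets the 0.50 threshold: True
```

Note that SmartPLS reports these statistics automatically; the sketch only makes the underlying arithmetic explicit.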

Table 2 Construct reliability and validity

For the discriminant validity analysis of PLS-SEM, we employed the heterotrait-monotrait ratio of correlations (HTMT) method, suggested by Henseler et al. (2015). Typically, an HTMT threshold of 0.90 is preferred for conceptually similar constructs, whereas a lower limit of 0.85 is used when constructs are more distinct. In this study, all HTMT values, as shown in Table 3, are comfortably below the 0.85 threshold, thereby indicating strong discriminant validity.
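The HTMT criterion can be made concrete with a small sketch: for two constructs, HTMT is the mean correlation between items of different constructs divided by the geometric mean of the average within-construct item correlations (Henseler et al., 2015). The correlation matrix below is illustrative, not taken from Table 3:

```python
import numpy as np
from itertools import combinations

def htmt(corr: np.ndarray, idx_a: list[int], idx_b: list[int]) -> float:
    """Heterotrait-monotrait ratio for two constructs.

    corr: item-level correlation matrix; idx_a / idx_b: indices of the
    items measuring construct A and construct B, respectively.
    """
    hetero = np.mean([corr[i, j] for i in idx_a for j in idx_b])
    mono_a = np.mean([corr[i, j] for i, j in combinations(idx_a, 2)])
    mono_b = np.mean([corr[i, j] for i, j in combinations(idx_b, 2)])
    return hetero / np.sqrt(mono_a * mono_b)

# Illustrative 4-item correlation matrix: items 0-1 load on A, items 2-3 on B
R = np.array([
    [1.00, 0.80, 0.40, 0.35],
    [0.80, 1.00, 0.38, 0.42],
    [0.40, 0.38, 1.00, 0.75],
    [0.35, 0.42, 0.75, 1.00],
])
value = htmt(R, [0, 1], [2, 3])
print(value < 0.85)  # below the conservative 0.85 threshold: True
```

Values well below the threshold, as in this example, indicate that the two constructs are empirically distinct.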

Table 3 Values of HTMT

In the subsequent phase, we examined R2, which evaluates the percentage of variance in each endogenous construct explained by the model. R2 ranges from 0 to 1, with values nearer to 1 indicating stronger explanatory power. Hair et al. (2011) provide a general guide according to which R2 values of 0.25, 0.50, and 0.75 signify weak, moderate, and strong explanatory power, respectively.
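The R2 computation and the Hair et al. (2011) rule of thumb can be made concrete in a few lines; the helper names are illustrative.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: the share of variance in the
    endogenous construct explained by the model's predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = ((y_true - y_pred) ** 2).sum()          # unexplained variance
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()   # total variance
    return 1 - ss_res / ss_tot

def interpret_r2(r2):
    """Hair et al. (2011) rule of thumb: 0.25 weak, 0.50 moderate,
    0.75 strong explanatory power."""
    if r2 >= 0.75:
        return "strong"
    if r2 >= 0.50:
        return "moderate"
    if r2 >= 0.25:
        return "weak"
    return "negligible"
```

By this guide, the 74.4% of BI variance reported below sits just under the "strong" cutoff, and the 50.2% for UB is squarely "moderate".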

The findings from the PLS-SEM, depicted in Fig. 2, illustrate standardized regression coefficients for the path relations and R2 values within the squares of the variables. Nine of the eleven hypotheses were supported. HT (Habit) has the highest impact on Behavioral Intention (H8: β = 0.361, p < 0.01), followed by Performance Expectancy (H1: β = 0.351, p < 0.01) and Hedonic Motivation (H6: β = 0.199, p < 0.01), together accounting for 74.4% of BI variance. Although Social Influence (H3: β = 0.083, p < 0.01), Personal Innovativeness (H10: β = 0.087, p < 0.01), and Price Value (H7: β = 0.048, p < 0.05) also positively influence BI, their effects are minimal. In turn, BI significantly influences Use Behavior (H11: β = 0.436, p < 0.01), with Facilitating Conditions (H5: β = 0.282, p < 0.01) and HT (H9: β = 0.154, p < 0.01) also contributing, collectively explaining 50.2% of UB variance. The path coefficients, significance tests, and hypothesis confirmations for the structural model are detailed in Table 4.

Fig. 2 The results for the extended UTAUT2 model for ChatGPT

Table 4 The structural model’s path coefficients and the significance test results

For the moderating variables “Age” and “Gender”, the results show that only one moderating effect in the model is statistically significant: Age significantly moderates the path between “Price Value” and “Behavioral Intention” (β = −0.059, p < 0.05). The other moderating effects are not significant. The moderating effects of “Age” and “Gender” are presented in Table 5.
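A moderating effect of this kind is estimated as the coefficient on a product term. As a hedged sketch (SmartPLS uses its own interaction-term approaches; here a plain OLS regression with standardized variables illustrates the idea on hypothetical data):

```python
import numpy as np

def moderation_effect(pv, age, bi):
    """Illustrative test of Age moderating the PV -> BI path: regress BI on
    standardized PV, Age, and their product. The coefficient on the product
    term is the moderating effect; a negative value means the PV -> BI
    relationship weakens as Age increases."""
    def z(v):
        v = np.asarray(v, dtype=float)
        return (v - v.mean()) / v.std()

    pv, age, bi = z(pv), z(age), z(bi)
    X = np.column_stack([np.ones_like(pv), pv, age, pv * age])
    coefs, *_ = np.linalg.lstsq(X, bi, rcond=None)
    return coefs[3]   # interaction (moderation) coefficient
```

With a negative interaction coefficient, as found for Age on the PV–BI path in this study, the simple slope of PV on BI is flatter for older respondents.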

Table 5 Moderating effect of Age and Gender in the model

Price Value (PV) is understood as a user’s perceived trade-off between the advantages of using an AI tool and its cost, where cost also covers the perceived time and effort invested in learning the tool. Since the influence of PV on the academics’ Behavioral Intention (BI) to use ChatGPT is also affected by their age, we can assume that with age, and the work experience it brings, academics may become more deliberate about spending their financial and personal resources on learning new tools and technologies for their work.

5 Discussion

The study presented in this paper was undertaken to examine the factors affecting the adoption and usage of an artificial intelligence tool, ChatGPT. The examination was conducted with an adapted UTAUT2 model comprising eight constructs. The results offer valuable insights into the impact of AI-powered tools on the educational and research processes performed by academics.

The authors put forward eleven hypotheses about the UTAUT2 constructs: eight of them referring to the effect on academics’ BI to use ChatGPT, and three to the influence on ChatGPT Use Behavior (UB). After the results were analyzed, nine hypotheses were accepted and two were rejected. It appears that Effort Expectancy (EE) does not have a direct influence on academics’ BI to use ChatGPT. Thus, it can be claimed that academics are not afraid of putting more effort into employing a technology if they consider it valuable for their work. This result is consistent with Zacharis and Nikolopoulou (2022) and with recent research on the adoption and use of ChatGPT by students (Strzelecki & ElArabawy, 2024), where EE was also found to be non-significant.

At the same time, Facilitating Conditions (FC) have no direct influence on BI. As discussed above, FC denotes the degree of accessibility of the resources and support needed to complete a certain task with the AI tool. This supports the previous statement: academics will not choose a technology to work with solely for its ease of use or accessibility; rather, they focus on the effects of applying the tool. This outcome is in line with previous research by Alowayr (2022) and with a study of students’ acceptance of ChatGPT in higher education (Strzelecki, 2024), where the FC hypothesis was likewise not confirmed.

The nine accepted hypotheses lead to the following conclusions. The most influential factor in determining BI is Habit. The positive and significant effect of HT on BI is consistent with previous research on technology usage in general (Tamilmani, Rana, & Dwivedi, 2019a) and in the teaching and learning process (Al-Mamary, 2022; Alotumi, 2022; Zacharis & Nikolopoulou, 2022). This means that academics will use ChatGPT for work more frequently as they grow familiar with it and as it becomes as commonplace for them as, for instance, using a search engine to find information or an online translator to work with foreign languages.

The second most influential factor for BI is Performance Expectancy. PE has previously been shown to significantly influence teachers’ adoption of new educational tools such as Google Classroom (Kumar & Bervell, 2019). In our case, Performance Expectancy drives academics to embrace a tool that can assist them both in research and in teaching. They will be far more eager to use ChatGPT when they are confident it will improve their work.

Hedonic Motivation also positively affects BI, although its influence is weaker than that of the previous two constructs (β = 0.199). A direct impact of HM on BI has been observed for platforms and learning management systems (Raman & Don, 2013; Tseng et al., 2022). HM is connected not with the actual effects of using ChatGPT, but with the pleasure and enjoyment one may feel when working with this tool and obtaining valuable results.

A positive effect on BI of almost the same magnitude is observed for Personal Innovativeness (PI) and Social Influence (SI), with path coefficients of 0.087 and 0.083, respectively. A positive influence of SI on technology acceptance (Oye et al., 2014) and on the acceptance of learning tools (Raman & Don, 2013; Tseng et al., 2022) has been demonstrated in previous studies, as have the effects of PI (Sitar-Taut & Mican, 2021; Twum et al., 2022). Personal Innovativeness is a construct shaped by the academics themselves: how eager they are to engage with technological innovations and take the associated risks. Social Influence, on the contrary, reflects the attitude of the academics’ environment (family, friends, colleagues, etc.) toward the analyzed technology. The more actively this environment encourages an academic to use ChatGPT, the more likely they are to use it. However, neither SI nor the readiness to embrace new technologies (PI) is a decisive factor for an academic’s BI to use this AI tool.

The last construct to positively influence BI, albeit weakly (β = 0.048), is PV. PV refers to a person’s perceived trade-off between the advantages of using a technology and its monetary cost. Previous research demonstrated PV’s positive influence on BI to adopt new technologies (Tamilmani et al., 2018) and, in particular, learning technologies (Azizi et al., 2020; Mehta et al., 2019). This research shows that PV is not the primary reason why academics would or would not intend to use ChatGPT. We may suggest that one reason is that ChatGPT offers a free version with quite broad functionality.

The authors put forward three hypotheses about UB. The most influential factor in determining UB for ChatGPT is the Behavioral Intention to use this tool (β = 0.436). Use Behavior denotes the degree to which a user engages with a certain technology to perform a task, while Behavioral Intention shows the user’s readiness and intent to employ the technology for the task. Therefore, the higher the intent to use ChatGPT, the more engaged the user will be in working with the technology. A straightforward implication follows: academics will actively use the AI tool only when they feel a sufficiently high level of readiness and willingness to use it, and this willingness (BI) develops under the influence of the other factors discussed above. The crucial role of BI in technology usage has been highlighted by many researchers (Aldossari & Sidorova, 2020; Gansser & Reich, 2021).

Finally, in addition to BI, FC (β = 0.282) and HT (β = 0.154) have positive and fairly substantial direct effects on UB. As mentioned above, HT also significantly influences BI, whereas FC affects only UB directly. Thus, the accessibility of resources and support for applying the AI tool (FC) will not affect academics’ readiness to use ChatGPT, yet it may affect their level of engagement in using it.

Additionally, when analyzing the moderating variables Age and Gender, we found that the only significant moderating effect is that of Age on the path between PV and BI. The effect is negative (β = −0.059), which means that with age the respondents are less willing to pay for using ChatGPT, even knowing that they would be investing in a tool that might significantly facilitate their didactic and research work. We can suggest a few explanations for this phenomenon. First, as discussed before, is the general consciousness and caution that develop over time and make people more careful with decisions such as spending money. Second, older academics may resist new technologies more than their younger colleagues, preferring conventional tools; in this case, even if they give ChatGPT a try, they would not be ready to invest in its paid version. Third, with age academics gather more knowledge, skills, and wisdom, both in research and in teaching, so they may prefer to rely on their experience rather than on ChatGPT or any other AI tool.

As a final word in the discussion, we would like to refer to the work of Emenike and Emenike (2023), who bring up the issue of paid artificial intelligence tools. If (or rather, when) AI-based systems become available only through paid subscriptions, some educational institutions will no doubt be eager to bear such costs to provide their workers with the best tools. And while some institutions will be able to afford such an investment, others will have no funds for it. The issue of equity and accessibility will then arise, and this might lead to a new wave of research dedicated to the opportunities for applying AI-based tools.

5.1 Contributions

The findings of this study offer contributions to both theory and practice. From a practical perspective, this research provides valuable insights into the acceptance of an artificial intelligence conversational agent for teaching and research purposes. The findings deepen our understanding of the crucial factors that influence the adoption and integration of ChatGPT at higher education institutions.

The results indicate that HT and PE play the most crucial role in shaping academics’ (teachers’ and researchers’) intentions to accept and utilize ChatGPT. To start implementing this tool in their work more frequently, academics need to become accustomed to using ChatGPT regularly and to accept it as something easy and familiar, but also as something useful and helpful that will facilitate their work and improve its results. Once academics not only feel more confident with ChatGPT but also see the effects of working with this tool, they will grow to enjoy it. This is where HM will also positively affect their intention to use ChatGPT in the future.

We would also like to draw attention to the effect that Personal Innovativeness may have on academics’ behavioral intention. Taking a risk and experimenting with new technologies such as AI might simply be interesting and exciting. In addition, when doing so brings valuable results and proves helpful in facilitating work, using ChatGPT as an assistant may become a good habit.

Moreover, the SI factor deserves highlighting. People usually tend to follow the example of the friends, colleagues, or relatives they love and respect. Once an academic has discovered the advantages and disadvantages of using ChatGPT, they should share their knowledge and experience with their colleagues. It is very important that, when forming their opinion about ChatGPT (as about any other tool), academics consider both good and bad practices, rather than focusing only on someone’s bad experience.

The theoretical contribution of this study lies, first of all, in the literature review conducted. It helps structure the knowledge about the usage of ChatGPT in preparing educational materials, in scientific research, and in text writing, all for the purposes of academics at higher education institutions. We believe that our paper may itself serve as an example of SI and might help some academics become more familiar with what ChatGPT offers for education. Alongside presenting good examples of ChatGPT assisting research and didactics, the paper emphasizes the importance of treating all content generated by this AI tool with caution and verifying all texts and other materials before using them further. Finally, the paper is an example of applying the UTAUT2 model to analyze the acceptance of artificial intelligence technology.

5.2 Limitations

The authors see two limitations of this research. The first relates to the issue of awareness. In the questionnaire we asked academics about their attitude toward ChatGPT, assuming that they were familiar with the tool and had tried it at least once. Yet it turned out that not all academics had tried ChatGPT, for various reasons: they had not needed it for their work (the issue of academics’ work areas is discussed as the second limitation), they had not been recommended to use it, they had not yet had time to try it, or they had been skeptical about it. As a result, some respondents were not confident about their attitude toward this AI tool and either preferred not to fill in the questionnaire at all or provided opinions about ChatGPT that may not be completely valid.

Secondly, there is the limitation of not distinguishing academics by their areas of work. The authors did not divide the respondents into groups by the subjects they teach or the research areas they work in, nor did we add a question about the research/teaching area to the questionnaire. Therefore, we could not explore, for instance, how the acceptance of ChatGPT differs between academics who teach languages and those who teach physics, or between those who study artificial intelligence and those who conduct demographic research. Such a comparative analysis would have provided more interesting conclusions about the utility of ChatGPT.

5.3 Future research

The first possible direction of research would be to explore academics’ opinions about AI not only through the UTAUT2 model, which presupposes specific questions, but also through respondents’ open answers. Although analyzing such responses may be time-consuming, they may contain valuable insights into academics’ attitudes toward ChatGPT and the reasons why they prefer to use or not to use this tool in their work.

The second possible avenue for studying the application of ChatGPT by academics would be to analyze the acceptance of this tool for teaching and, separately, for research, and then compare the results and draw conclusions about the utility of this AI tool for these two major roles of academics.

Finally, the third direction for future research derives from a limitation of this study. As mentioned before, the authors did not analyze the areas in which academics teach or conduct scientific research. A comparative analysis of the role of ChatGPT in assisting research and teaching across various scientific fields would therefore be a promising avenue for future research.

6 Conclusions

The objective of the paper was to explore the disposition of academics in Poland toward using ChatGPT for research and teaching purposes. The objective was achieved in the process of answering the research question. The factors that may influence academics’ intention to use ChatGPT were analyzed based on an adapted UTAUT2 model. The model includes the seven original UTAUT2 constructs plus Personal Innovativeness as an eighth, for which the authors put forward eleven hypotheses, nine of which were accepted. It was revealed that academics’ BI to use ChatGPT is influenced neither by the amount of effort needed to employ this technology nor by the accessibility of resources and support for working with ChatGPT. The remaining constructs (HT, HM, PE, PI, PV, and SI) have a stronger or weaker effect on academics’ intention to use ChatGPT and their engagement in applying this tool.