1 Introduction

Artificial intelligence (AI) has developed rapidly in recent years, leading to applications across disciplines, including education [1]. Digital transformation is becoming essential in higher education, and transformations such as the use of AI technologies, mobile technologies, and online meeting platforms became inevitable in the post-COVID-19 era [2]. A more recent development is Chat Generative Pre-trained Transformer (ChatGPT), an AI-powered chatbot released by OpenAI and built on a large language model that enables it to generate original text in response to user prompts [3]. Applications of ChatGPT in higher education span several areas, including developing assignments [4], supporting essay writing [5], and encouraging critical reflection on AI’s use in society [6]. Despite these advantages, concerns exist about AI-assisted cheating among students [7, 8].

Interest in and use of AI technologies continue to surge worldwide [9, 10]; it is therefore important to consider the societal and policy implications of AI, as well as how the deployment of these new technologies may affect diverse populations differently. This is particularly crucial for those residing in developing countries, where the debate about the benefits and societal impact of AI needs to be understood in the context of social and economic realities. Synchronizing AI issues with the digital divide debate may be timely for Africa, as both emphasize the use of technologies. The digital divide is a way to understand the existing inequalities not only in African societies but also in communities globally [11]. While recognizing the potential of new AI technologies to transform education, health, and businesses in Africa and potentially drive economic growth, it is vital to highlight social and economic challenges, such as access to technologies, ICT skills, broadband costs, and the protection of indigenous data, that remain unresolved for most African citizens [12].

Considering that AI relies on extensive data, many have concerns about access, data processing costs, and data management to protect privacy and rights. Although much research has been conducted on AI and ChatGPT in particular, the discussions have not considered the digital divide in the global south. Moreover, empirical discussions have primarily involved academics and professionals [10, 13], with minimal exploration among students. To understand the impact of ChatGPT as a learning tool within the context of the digital divide in the global south, investigating students’ experiences and perspectives is crucial. Students’ perceptions can influence motivation, engagement, and academic achievement, in addition to other factors [14, 15]. When students have positive perceptions of their learning experience, they are more likely to be engaged and motivated, leading to better academic outcomes. Conversely, negative perceptions can result in disengagement, lack of motivation, and lower academic success. This study aims to explore higher education students' perceptions of ChatGPT. The following research questions guided the study:

  1. What is the level of awareness of ChatGPT among students?

  2. How do students perceive ChatGPT as a learning tool?

  3. Are there significant differences in students’ perceptions of ChatGPT based on demographics such as gender, age, and level of education?

2 Literature review

This literature review explores the multifaceted dynamics of AI adoption in the global south, with a focus on its implications for higher education, students’ perceptions of ChatGPT as a learning tool, and the influence of demographic variables on AI perceptions. It examines the AI divide in the global south, highlighting disparities in access and adoption, while also delving into the opportunities and challenges of integrating AI in higher education. Additionally, the review investigates students’ awareness, perceptions, and concerns regarding ChatGPT, emphasizing the factors that shape their attitudes towards its utility and ethical implications. Moreover, it analyzes the impact of demographic variables such as gender, age, and educational level on perceptions of AI, underscoring the need for inclusive policies and initiatives to address disparities and promote responsible AI adoption in educational contexts within the global south.

2.1 AI divide in the global south

In 2019, UNESCO identified inequality as one of the world’s most pressing problems, with serious concerns that the spread of digital technology could worsen the issue. The most critical technological challenge is the concept of the digital divide: the extent to which a country, region, group, or individual is either completely or partially excluded from the benefits of digital technology [16, 17]. This has long been the primary perspective through which the relationship between digital technology and inequality is analyzed [18, 19]. There is a growing recognition of the importance of considering the impact of AI on developing countries. The main concern is that most AI systems are Western-oriented, and without the development of suitable alternatives in the global south, their spread could further exacerbate existing systems of oppression [20].

The global south’s concerns are well founded. Despite promising prospects, several technological interventions in many nations of the global south have failed to deliver the expected results [21,22,23]. Any technological advancement that neglects contextual factors may fail, regardless of its novelty, a phenomenon evident in many nations of the global south. Langthaler and Bazafkan [24] argued that this region is significantly impacted by the Fourth Industrial Revolution (4IR), characterized by rapid technological advancements such as automation, the internet of things, and AI. Körber [25] contended that while the 4IR narrative promises wealth and wellbeing, the global south lacks empirical evidence to substantiate these claims.

It is widely acknowledged that the readiness of the global south to embrace digitalization will greatly influence the impact of digitalization and automation on labor markets [26]. However, hurdles such as inadequate infrastructure, low skill levels, and high capital costs hinder automation in the global south, resulting in a significant digital gap [24, 27]. Without bridging the digital divide, the rapid digitalization of the global economy could lead to profound negative effects on countries within the global south, including substantial job losses. Naudé [28] emphasized that raising educational and skill standards is crucial to ensure that the global south can benefit from the 4IR rather than lagging behind.

To address these challenges, the World Bank [29] proposes an alignment system that focuses on developing robust adaptability, education, and training systems to prioritize lifelong learning policies. Developing nations should prioritize modern intellectual and socio-emotional skills like critical thinking and problem-solving, while developed nations should focus on fundamental academic and socio-emotional skills alongside basic digital literacy [24, 29]. AI holds promise for the global south in various areas such as politics, poverty alleviation, environmental sustainability, transportation, agriculture, healthcare, education, financial transactions, and religious and traditional belief systems. While many of these AI systems are no longer hypothetical but are becoming a reality in Africa, they are primarily driven by companies from the global north [30].

Nevertheless, significant economic and legal challenges hinder the adoption and implementation of AI across the global south. The benefits and risks of AI are substantial, and its development may infringe on fundamental rights and freedoms [30,31,32]. Collaborative efforts are necessary to promote the acceptance and utilization of AI in the global south, including ethical use, regulatory policies, and effective application in education.

2.2 AI in higher education

AI is essential in the academic landscape, enhancing educational efficiency, effectiveness, and productivity. Recent literature [33, 34] has reported that AI could improve education by supplementing the role of human instructors rather than replacing them. New technologies force industries to innovate constantly to keep up with an ever-changing market [35,36,37]. Students and instructors must take the lead in adopting new technologies such as AI to ensure their safe and proper use in support of teaching and learning.

Several AI tools have been rolled out, each with its own potential and drawbacks. However, one that has attracted widespread attention is ChatGPT [38,39,40]. Due to its flexibility and content-rich generative ability, ChatGPT has quickly become a ‘student companion’ in developing and developed nations alike [41, 42]. However, evidence suggests that ChatGPT has not been well accepted and implemented, especially in higher education. The challenge lies in overcoming the ideological barriers among educators and administrators in order to facilitate successful implementation. Rudolph et al. [43] also highlighted that there is insufficient scholarly research on the implementation of ChatGPT at higher education institutions, partly due to the novelty of the topic. Woithe and Filipec [34] corroborated that even though its features, strengths, limitations, consequences, applications, possibilities, and threats have been examined, current studies cannot yet be relied upon or replicated.

A few studies have investigated the use of ChatGPT in higher education [44,45,46,47]. Higher education students could use ChatGPT as a tool for independent study. Beyond passing higher education courses, Zhai [48] noted that ChatGPT could assist students in writing cohesively and insightfully. Studies in the field of finance have found that ChatGPT is useful for brainstorming, literature synthesis, and data identification [49, 50]. Faculty should incorporate AI into teaching and learning activities to challenge students to think critically and creatively as they work to solve authentic challenges.

However, a body of evidence also suggests that ChatGPT can undermine effective teaching and learning. For example, Rudolph et al. [43] argued that it leaves lower-level cognitive skills less well developed. Higher education students have also been reported to use ChatGPT to commit academic misconduct [51, 52]. The inconsistencies in research findings on ChatGPT and effective pedagogy call for continued investigation, especially in academic settings [48, 51, 53, 54]. Nevertheless, for students to appreciate and use ChatGPT successfully in learning situations, they need to be aware of its context of applicability. Their contextual awareness of how they apply ChatGPT could inform their perceptions and any potential concerns about its usage.

2.3 Awareness, perceptions, and concerns of students regarding ChatGPT as a learning tool

Awareness refers to an individual’s capacity to perceive and understand their surroundings [55, 56]. Research conducted in the global north has shed light on students’ awareness and perceptions of ChatGPT. For instance, McGee [57] discovered that university students were aware of ChatGPT and frequently used it for their academic tasks. He found that approximately 89% of students used ChatGPT for homework, while 53% used it for academic assignments. About 48% and 22% of students utilized ChatGPT during tests and for creating paper outlines, respectively.

In addition, Jowarder [58] examined the familiarity, use, perceived utility, and impact of ChatGPT on academic success among undergraduates in the United States. The study revealed that over 90% of participants in semi-structured interviews were familiar with this innovative tool. However, there was a varying level of awareness among students, with some being familiar with and using ChatGPT, while others were not acquainted with it. Cui and Wu [59] also found that respondents in China perceived AI as more beneficial than risky.

In contrast, Demaidi [60] found a significant lack of awareness of AI in developing countries, particularly in the global south. This may be attributed to the limited utilization of AI across industries and to legal frameworks struggling to keep pace with technological advancements. Makeleni et al. [61] conducted a systematic review on the challenges faced by academics in the global south regarding AI in language teaching. Their findings highlighted four key challenges: limited language options, cases of academic dishonesty, bias and lack of accountability, and a general lack of interest among both students and teachers. Perceptions can vary between positive and negative dimensions. A handful of studies have explored students’ perceptions of ChatGPT as a learning aid. For example, Fiialka et al. [62] noted that educators in higher education who interact with students tend to have a favorable attitude towards AI and its potential. Woithe and Filipec [34] investigated students’ adoption, perceptions, and learning impact of ChatGPT in higher education and found that students had a positive view of its utility, user-friendly interface, and practical benefits.

Research by Popenici and Kerr [63] examined the impact of AI systems on education and highlighted negative perceptions among students and teachers due to concerns about privacy, power dynamics, and control. However, Perin and Lauterbach [64] found that students appreciated the quick feedback provided by an AI grading system. Positive perceptions towards AI systems could facilitate instructors in offering continuous feedback to enhance student learning. Seo et al. [65] reported mixed findings on the impact of AI in teaching and learning, emphasizing concerns about responsibility, agency, and surveillance issues. While AI systems are praised for improving communication quality and providing personalized support, worries about misunderstandings and privacy are also prevalent. These mixed beliefs about AI have been discussed in other studies as well.

Despite the existing literature on AI in education, there is a lack of comprehensive research on students’ awareness, perceptions, and concerns regarding ChatGPT, particularly in the global south. This study aimed to address this gap by investigating the state of awareness, perceptions, and concerns about ChatGPT among higher education students in the global south, recognizing their importance as major stakeholders in the education system.

2.4 The influence of demographic variables on perceptions of AI

There is limited literature on the influence of demographics and other socioeconomic variables on AI perception [66,67,68]. For instance, Brauner et al. [66] conducted a study on the public’s perception of AI and found that individuals with lower levels of trust still perceived AI more positively, believing that the outcomes of AI were more favorable, although the magnitude of this effect may be small. The authors concluded that AI is still considered a ‘black box’, making it challenging to accurately evaluate its risks and opportunities. This lack of understanding may result in biased and irrational beliefs shaping the public’s perceptions of AI.

A large-scale study conducted by Gerlich [69], involving 1389 scholars from the US, UK, Germany, and Switzerland, revealed varied public views on AI. The author concluded that factors such as perceived risks, trust, and the scope of applications influenced public awareness and perceptions of AI. In another study focusing on the influence of demographic variables on AI perceptions, Yeh et al. [70] examined public attitudes towards AI and its connection to Sustainable Development Goals (SDGs). The results showed that the public generally held a positive view of AI, despite perceiving it as risky. Additionally, males exhibited greater confidence in AI knowledge compared to females, while more females believed that AI should have a greater impact. Respondents aged 50 to 59 years and college students, as opposed to master’s graduates, held stronger opinions on how AI would impact human lives and alter decision-making processes.

Furthermore, research by Yigitcanlar et al. [71] investigated the factors influencing public perceptions of AI, highlighting that gender, age, AI knowledge, and AI experience play crucial roles. Similar findings were reported by Miro Catalina et al. [72], indicating that females, individuals over 65 years old, and those with university education harbored greater distrust towards AI usage. These results suggest that individuals with diverse demographic backgrounds exhibit varying perceptions of AI, with limited focus on discussions in the global south. Therefore, the current study, which also explores the impact of gender, age, and educational level, holds significant importance.

3 Methods

3.1 Data collection procedures and sampling methods

The research focused on exploring higher education students’ perspectives on ChatGPT. Data was gathered using a quantitative approach through an electronic survey conducted on Qualtrics. The choice of Qualtrics was based on its familiarity to most higher education students [73] and its ability to facilitate remote data collection, which was particularly suitable for higher education students in Ghana. Participant recruitment utilized a combination of snowball and convenience sampling methods. To reach a wider audience, the survey link was shared through various online platforms such as Facebook, Twitter, and WhatsApp groups.

Identified respondents were encouraged to further distribute the survey link among their peers. All participants received consent forms and detailed information about the study prior to their participation. They were given the opportunity to review and understand the information before providing their consent. Participants were also assured that they could withdraw from the study at any point without facing any consequences. Throughout the study, confidentiality and anonymity of the participants were prioritized. Their identities were kept undisclosed, and their responses were treated with the utmost privacy and confidentiality.

3.2 Survey instrument

A researcher-developed questionnaire, the Students’ ChatGPT Experiences Scale (SCES), was used for this study. The questionnaire consisted of three parts: demographic information, students’ awareness and usage of ChatGPT, and students’ perspectives on ChatGPT. The section on students’ awareness and usage of ChatGPT included four items measured dichotomously, with responses of ‘Yes’ or ‘No’. Students’ perspectives on ChatGPT were assessed using the 33-item SCES. The survey utilized a 4-point Likert-type scale, with responses ranging from ‘Strongly Agree’ (SA) and ‘Agree’ (A) to ‘Disagree’ (D) and ‘Strongly Disagree’ (SD).

The validity and reliability of the scale were considered. Initially, three researchers developed the questionnaire items based on existing literature. Subsequently, an expert panel of two reviewers qualitatively examined the wording of the items. A reliability analysis using Cronbach’s alpha indicated a high level of internal consistency (0.906) for all 33 items. Furthermore, an Exploratory Factor Analysis (EFA) and a Confirmatory Factor Analysis (CFA) were conducted to assess the psychometric properties of the 33 items. The outcomes of the validity and reliability analyses are detailed in the results section.

3.3 Participants

Although 441 students from Ghanaian higher education institutions (i.e., universities and colleges of education) started providing responses to the survey, 277 students completed all sections of the survey. As shown in Table 1, out of the sample of 277 students, there were more males (55.6%) than females (43%). The majority (41.9%) of the students were in their second year. Most (49.1%) of them were within the age range of 20–25 years.

Table 1 Demographic variables of the participants

3.4 Data analysis

Prior to conducting the main analysis, EFA and CFA were carried out to evaluate the psychometric properties of the measurement instrument. EFA was conducted in SPSS using Principal Component Analysis (PCA) as the extraction method, taking the sample size into consideration [74]. The Kaiser–Meyer–Olkin measure of sampling adequacy (KMO) and Bartlett’s test of sphericity were employed to assess sampling adequacy [75], with a cut-off point of 0.40 set to filter the factor loadings [76]. Eigenvalues greater than 1 and scree plots were used to determine factor solutions, and Monte Carlo PCA for Parallel Analysis was conducted to confirm the factor solutions identified from the eigenvalues and scree plots [77]. Items that did not meet the cut-off or did not load significantly onto a factor were eliminated. Cronbach’s alpha reliabilities (α) were used to evaluate item consistency, with alpha values of 0.70 and above indicating acceptable internal consistency [78], serving as the criteria for evaluating the emerging factors.
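
For readers who wish to replicate these preliminary checks outside SPSS, the sketch below shows one way to approximate them in Python with NumPy, pandas, and SciPy. It is an illustrative sketch only, not the authors’ workflow; the item matrix `items` (respondents × Likert-scored items), the column names, and the number of parallel-analysis simulations are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

rng = np.random.default_rng(42)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def bartlett_sphericity(items: pd.DataFrame):
    """Bartlett's test that the correlation matrix is an identity matrix."""
    n, p = items.shape
    R = np.corrcoef(items, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)

def kmo(items: pd.DataFrame) -> float:
    """Kaiser-Meyer-Olkin measure: squared correlations vs. squared partial correlations."""
    R = np.corrcoef(items, rowvar=False)
    inv = np.linalg.inv(R)
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale                 # anti-image (partial) correlations
    np.fill_diagonal(R, 0)
    np.fill_diagonal(partial, 0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())

def parallel_analysis(items: pd.DataFrame, n_sims: int = 100) -> int:
    """Retain components whose observed eigenvalue exceeds the mean eigenvalue of random data."""
    n, p = items.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(items, rowvar=False)))[::-1]
    random_eigs = np.array([
        np.sort(np.linalg.eigvalsh(
            np.corrcoef(rng.standard_normal((n, p)), rowvar=False)))[::-1]
        for _ in range(n_sims)
    ]).mean(axis=0)
    return int((observed > random_eigs).sum())

# Hypothetical 4-point Likert responses: 277 respondents x 33 items.
items = pd.DataFrame(rng.integers(1, 5, size=(277, 33)),
                     columns=[f"item{i + 1}" for i in range(33)])

print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
print("Bartlett chi-square, p-value:", bartlett_sphericity(items))
print("KMO:", round(kmo(items), 3))
print("Components suggested by parallel analysis:", parallel_analysis(items))
```

With real survey data, the eigenvalue-greater-than-1 rule and the parallel-analysis count can then be compared against the scree plot, mirroring the retention decision described above.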

The CFA was conducted in AMOS to further validate the factors identified in the EFA. Evaluation criteria such as the Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), Root Mean Square Error of Approximation (RMSEA), and Standardized Root Mean Square Residual (SRMR) were utilized. RMSEA and SRMR were used to assess the absolute fit of the model, while CFI and TLI indicated incremental fit [79]. Lower values of the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) indicated a parsimonious model fit [80]. To determine model fit, it is recommended that CFI be above 0.95 or 0.90, or possibly satisfactory if above 0.80; SRMR should be below 0.09, and RMSEA should be less than 0.05 for good fit, or between 0.05 and 0.10 for moderate fit [81]. Composite reliability (CR), discriminant validity, and convergent validity were also used to assess the internal consistency and validity of the confirmed items. According to Hair et al. [82], a CR value greater than 0.70, an average variance extracted (AVE) greater than 0.50, and a maximum shared variance (MSV) less than the AVE indicate acceptable psychometric properties of the developed scales.
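
As a complement to the AMOS output, the construct-level indices cited above (CR, AVE, MSV) can be reproduced directly from standardized loadings and inter-factor correlations using their standard formulas. The short Python sketch below illustrates this; the loadings and correlation matrix are hypothetical placeholders, not the study’s estimates.

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of indicator error variances)."""
    lam = np.asarray(loadings)
    errors = 1 - lam ** 2              # error variance of each standardized indicator
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings) -> float:
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

# Hypothetical standardized loadings for three factors (illustrative only).
factors = {
    "benefits":      [0.72, 0.68, 0.75, 0.70, 0.66],
    "accessibility": [0.71, 0.69, 0.74],
    "concerns":      [0.70, 0.73, 0.67, 0.72],
}
# Hypothetical inter-factor correlations, ordered as above.
phi = np.array([[1.00, 0.64, -0.08],
                [0.64, 1.00,  0.10],
                [-0.08, 0.10, 1.00]])

for i, (name, lam) in enumerate(factors.items()):
    cr = composite_reliability(lam)
    ave = average_variance_extracted(lam)
    msv = max(phi[i, j] ** 2 for j in range(len(factors)) if j != i)
    verdict = "acceptable" if cr > 0.70 and ave > 0.50 and msv < ave else "review"
    print(f"{name}: CR={cr:.3f}, AVE={ave:.3f}, MSV={msv:.3f} -> {verdict}")
```

The final line applies the Hair et al. [82] thresholds (CR > 0.70, AVE > 0.50, MSV < AVE) used in this study as the decision rule.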

Regarding the first research question addressing students’ awareness of ChatGPT, data analysis was conducted using frequencies and percentages in SPSS. For the second research question on students’ perceptions of ChatGPT as a learning tool, the factor structures identified from the EFA and the CFA in AMOS, along with means and standard deviations, were used to describe their perceptions. These statistical tools were chosen to summarize and organize data on student awareness and perceptions of ChatGPT [83]. Since the survey used a 4-point Likert-type scale, with responses ranging from ‘Strongly Agree’ (SA) and ‘Agree’ (A) to ‘Disagree’ (D) and ‘Strongly Disagree’ (SD), the highest possible score on any item was 4.0, representing unanimous strong agreement among all respondents, and the lowest possible score was 1.0, indicating unanimous strong disagreement. The cut-off point for determining overall agreement or disagreement was set at 2.50, the midpoint of the scale (4.0 − 1.50 = 1.0 + 1.50 = 2.50). A mean score of 2.50 or higher indicated agreement with the survey statements, while a mean score below 2.50 indicated disagreement.
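
A minimal sketch of this descriptive step is shown below, assuming a hypothetical `items` DataFrame of 4-point responses like the one in the earlier example: each item mean is compared against the 2.50 midpoint and labeled as overall agreement or disagreement.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical 4-point Likert responses: 277 respondents x 33 items.
items = pd.DataFrame(rng.integers(1, 5, size=(277, 33)),
                     columns=[f"item{i + 1}" for i in range(33)])

CUTOFF = 2.50  # midpoint of the 1-4 response scale

summary = pd.DataFrame({
    "mean": items.mean(),
    "sd": items.std(ddof=1),
})
summary["interpretation"] = np.where(summary["mean"] >= CUTOFF, "agree", "disagree")

print(summary.round(2).head())
```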

Both EFA and CFA were employed to explore and confirm the underlying factor structures and relationships that influence students’ perceptions of ChatGPT as a learning tool [84]. Addressing the final research question examining differences in students' perceptions of ChatGPT based on demographics, a Multivariate Analysis of Variance (MANOVA) was used to assess significant mean differences in perceptions with respect to gender, age, and education level [85]. Checks for normality (using Shapiro–Wilk, skewness and kurtosis), linearity, and homogeneity of variance were performed before analysis. Results indicated that the data approximated a normal distribution and assumptions of equal variances were met, allowing for valid interpretation and appropriate use of inferential statistics. Statistical significance was determined at a 5% alpha level.
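
The MANOVA and its assumption checks were run in SPSS; an approximate equivalent in Python, using statsmodels’ MANOVA and SciPy’s Shapiro–Wilk test, is sketched below. The factor-score column names (`benefits`, `concerns`, `accessibility`), the demographic coding, and the simulated values are hypothetical and serve only to show the analysis pattern.

```python
import numpy as np
import pandas as pd
from scipy.stats import shapiro
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 277

# Hypothetical dataset: three factor scores plus demographic groupings.
df = pd.DataFrame({
    "benefits":      rng.normal(3.0, 0.5, n),
    "concerns":      rng.normal(2.9, 0.5, n),
    "accessibility": rng.normal(3.1, 0.5, n),
    "gender":        rng.choice(["male", "female"], n),
    "age_group":     rng.choice(["<20", "20-25", ">25"], n),
    "level":         rng.choice(["year1", "year2", "year3", "year4"], n),
})

# Normality check per dependent variable (Shapiro-Wilk).
for col in ["benefits", "concerns", "accessibility"]:
    stat, p = shapiro(df[col])
    print(f"Shapiro-Wilk {col}: W={stat:.3f}, p={p:.3f}")

# One MANOVA per demographic variable, mirroring the reported analyses.
for factor in ["gender", "age_group", "level"]:
    model = MANOVA.from_formula(
        f"benefits + concerns + accessibility ~ {factor}", data=df)
    print(f"\n=== {factor} ===")
    print(model.mv_test())  # reports Wilks' lambda, Pillai's trace, etc.
```

Wilks’ lambda from `mv_test()` corresponds to the statistic reported for each demographic comparison in the results section.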

4 Results

4.1 EFA and CFA

Based on the initial EFA, the 33 items measuring students’ perceptions of ChatGPT as a learning tool loaded onto six factors with eigenvalues greater than 1. The scree plot (see Fig. 1) and Monte Carlo PCA for parallel analysis were then used to determine the number of factors to retain; these criteria supported a three-factor structure. The KMO value was 0.899, and Bartlett’s Test of Sphericity was significant (p < 0.001), indicating the appropriateness of interpreting the factor analysis results (see Table 2).

Fig. 1 Scree plot showing explored factors

Table 2 KMO and Bartlett’s Test

Furthermore, Table 3 provides the factor structure, loadings, reliability coefficients (i.e., Cronbach’s alpha), and variance explained of the identified three-factor structure that explains students' perceptions of ChatGPT as a learning tool. Factor 1 was identified as Perceived Academic Benefits of ChatGPT, consisting of 17 items focused on the positive impacts of using ChatGPT for academic purposes. Students highlighted various benefits, such as saving time, convenient access to information, accuracy, better understanding of difficult topics, meeting deadlines, recommending it to others, finding it more useful than alternatives, and improving academic performance. These responses emphasize ChatGPT’s valuable role in supporting student learning.

Table 3 Exploratory factor analysis of students’ perceptions of ChatGPT as a learning tool

Factor 2, referred to as Perceived Accessibility and Attitude towards ChatGPT, includes seven items that demonstrate students’ positive attitudes and enthusiasm for using ChatGPT in academic settings. Students appreciate the attractiveness and enjoyment of using ChatGPT, indicating a willingness to embrace AI tools in their educational journey. Factor 3, labeled as Perceived Academic Concerns with ChatGPT, comprises nine items reflecting students’ fears and reservations about using ChatGPT for academic purposes. Common concerns include plagiarism, hindering critical thinking development, security vulnerabilities, over-reliance on technology, lack of originality, academic policy violations, and privacy risks associated with ChatGPT use.

The reliability estimates for the three factors ranged from 0.812 to 0.917: (a) Perceived Academic Benefits: α = 0.917, (b) Perceived Academic Concerns: α = 0.833, (c) Perceived Accessibility and Attitude: α = 0.812. Collectively, these factors explain 46.5% of the variance in students' perceptions of ChatGPT as a learning tool.

The CFA confirmed the factors explored in the EFA. Significant standardized estimates ranged from 0.323 to 0.769, with p < 0.001. The three-factor model showed a satisfactory fit, with TLI = 0.813, CFI = 0.825, SRMR = 0.0695, RMSEA = 0.071, and a low AIC = 1378.054. Additionally, the model demonstrated acceptable composite reliability, convergent validity, and discriminant validity: composite reliability ranged from 0.836 to 0.919, AVE values ranged from 0.501 to 0.519, and MSV ranged from 0.006 to 0.406 [81, 82]. Significant correlations were found between the three identified factors. The first and second factors were significantly positively correlated (r = 0.784, p < 0.001). The third and first factors were significantly negatively correlated (r = − 0.012, p < 0.001), while the third and second factors were significantly positively correlated (r = 0.079, p < 0.001). Figure 2 illustrates the confirmed factor structure from the CFA.

Fig. 2 A three-factor CFA of students’ perceptions of ChatGPT as a learning tool. PAB Perceived ChatGPT Academic Benefits, PAC Perceived ChatGPT Academic Concerns, PAA Perceived Accessibility and Attitude towards ChatGPT

4.2 Research question 1: what is the state of awareness of ChatGPT among students?

The results indicate that students are aware of ChatGPT. As shown in Table 4, approximately 77% indicated that they have heard about ChatGPT. Additionally, more than 60% indicated that they have used ChatGPT. However, the majority of students (89%) indicated that they have not received any training or guidelines on using ChatGPT.

Table 4 State of awareness of ChatGPT among higher education students

Students were asked to indicate how they had utilized ChatGPT, taking into consideration their awareness of the tool. As shown in Table 5, the largest proportion of students (around 37%) used ChatGPT for personal learning, 22% for assignments, 5% for group work, and 3% for class work. Additionally, 34% of students mentioned using ChatGPT for other non-academic reasons.

Table 5 ChatGPT usage

4.3 Research question 2: what are students’ perceptions of ChatGPT as a learning tool?

Overall, students indicated positive perceptions of using ChatGPT as a learning tool, especially regarding its perceived academic benefits (overall mean = 3.01). As shown in Table 6, they indicated that they are more likely to recommend ChatGPT as it facilitated their academic duties (Item 6, mean = 3.17, SD = 0.65). According to them, ChatGPT helps understand difficult concepts better (Item 4, mean = 3.15, SD = 0.65), saves time and effort in doing assignments (Item 15, mean = 3.14, SD = 0.72), is convenient (Item 2, mean = 3.13, SD = 0.75), and saves time when searching for information (Item 1, mean = 3.12, SD = 0.85).

Table 6 Students’ perceptions of using ChatGPT as a learning tool (N = 277)

Despite the generally positive perceptions about ChatGPT, Table 6 shows that students had concerns regarding the use of ChatGPT (overall mean = 2.89). For example, they fear relying too much on ChatGPT and not developing their critical thinking skills (Item 23, mean = 3.01, SD = 0.81). They also shared concerns about security (Item 24, mean = 2.95, SD = 0.78), privacy risks (Item 28, mean = 2.88, SD = 0.76), as well as a lack of originality in their assignments (Item 22, mean = 2.86, SD = 0.85).

Regarding students' perceptions of accessibility, they found ChatGPT accessible (overall mean = 3.09). As shown in Table 6, students indicated it does not take a long time to learn how to use ChatGPT (Item 17, mean = 3.14, SD = 0.70) and does not require extensive technical knowledge (Item 19, mean = 3.01, SD = 0.75). Students also indicated that they find GenAI like ChatGPT fun to use (Item 31, mean = 3.13, SD = 0.71).

4.4 Research question 3: are there any significant differences in students’ perceptions of ChatGPT based on their demographics, such as gender, age, and level of education?

The multivariate results presented in Tables 7, 8, and 9 indicate that there are no significant differences in students’ perceptions of ChatGPT based on gender (F(6, 544) = 1.437, p = 0.198 > 0.05; Wilks’ Λ = 0.969), age (F(12, 175) = 0.455, p = 0.940, Wilks’ Λ = 0.980), or level of education (F(9, 660) = 1.551, p = 0.127, Wilks’ Λ = 0.950). These findings suggest that, regardless of students’ demographics, their perceptions of ChatGPT may be similar.

Table 7 Multivariate tests for significant differences in students’ perceptions of ChatGPT by gender
Table 8 Multivariate tests for significant differences in students’ perceptions of ChatGPT by age
Table 9 Multivariate tests for significant differences in students’ perceptions of ChatGPT by educational level

5 Discussion

The present study investigated the perspectives of Ghanaian higher education students on ChatGPT as a learning tool. It explored the students’ levels of awareness, perceptions, and variations in perceptions of ChatGPT based on gender, age, and educational level. The findings revealed that the students were indeed familiar with ChatGPT as a learning tool.

The study also found that many students use ChatGPT for their assignments; however, most students are reluctant to use it for classwork and group work. For example, the item “I often use ChatGPT as a source of information in my assignments and duties” had the highest coefficient (0.768). One possible explanation is that most higher education institutions in Ghana do not have clear guidelines on the use of technologies like smartphones (the device most commonly used among students) in the classroom [86, 87]. Consequently, many students hesitate to use these devices during instructional sessions, as some faculty members do not endorse their use within the classroom environment. Students may therefore feel more comfortable using ChatGPT for their assignments, which are mostly done outside the classroom. Another potential factor is that the majority of higher education students in Ghana may not own personal computers and therefore face challenges accessing devices like laptops that could be used to access ChatGPT and enhance their classwork. This finding further highlights the digital divide that remains a challenge for most developing countries in the global south [18, 19, 21, 24, 27, 30, 32, 38]. It also confirms previous studies on students’ awareness of ChatGPT and how it has been used in teaching and learning [57,58,59]. For example, McGee [57] reported that university students demonstrated awareness of ChatGPT and used it for homework and academic assignments.

Other interesting results that emerged from this study are the perceived academic benefits, positive attitudes, and accessibility of ChatGPT that most participants reported. This indicates students’ generally positive attitudes toward the use of ChatGPT as a learning tool. For example, students in this study indicated that using ChatGPT has helped them improve their overall academic performance, primarily because it helps them better understand difficult topics and concepts and saves them considerable time when searching for information. Students also highlighted that they do not face many difficulties when using ChatGPT, find it fun to use, and note that it does not require extensive technical knowledge. Students feel enthusiastic about using ChatGPT for learning and research. These results align with extant literature that supports higher education students’ and instructors’ positive perceptions of AI models such as ChatGPT [34, 62, 64].

ChatGPT offers personalized assistance, especially in challenging subjects, and assists students with their assignments while suggesting areas for improvement [34, 38, 41,42,43]. For example, Baidoo-Anu and Owusu Ansah [38] stated that ChatGPT promotes personalized and interactive learning and generates prompts for formative assessment activities that provide ongoing feedback to inform teaching and learning. It can also help create adaptive learning systems based on a student’s progress and performance; an adaptive learning system built on a generative model (e.g., ChatGPT) could provide more effective support for students learning programming, resulting in improved performance on programming assessments. This finding is further corroborated by studies examining teachers’ dispositions towards the integration of Gen AI in education. For example, Fiialka et al. [62] found that higher education instructors had a favorable disposition towards AI models and their potential to improve interactions with students. In the present study, participants reported that using ChatGPT is convenient for information searches, found it to be accurate, and said it improved their understanding of learning concepts, especially in difficult topics, making it a useful alternative learning tool that supports their learning. Participants also found ChatGPT attractive because it provides a teaching and learning environment that makes students receptive to using AI models to improve their learning. It has long been established that AI models such as ChatGPT are convenient, accessible, user-friendly, fast, and responsive [43, 64, 88]. According to Perin and Lauterbach [64], AI models were perceived positively because they assisted instructors in providing continuous feedback on student learning. Similarly, Ross et al. [88] argued that with AI models such as ChatGPT, students could adapt learning content to their needs, which increased their motivation and engagement. When students find ChatGPT convenient, user-friendly, accessible, and useful, they are more likely to perceive it as an effective tool that can support their learning.

The study also found that a significant portion of students reported using ChatGPT primarily for non-academic reasons. This finding could be explained by students’ concerns regarding the use of ChatGPT, as revealed in the factor analysis. Students in this study indicated concerns regarding the use of ChatGPT, such as the risk of hindering the development of critical thinking skills, potential security vulnerabilities, over-dependence on technology, lack of originality in assignments, violations of academic policies, and privacy-related risks. Extant literature on the use of Gen AI among higher education students has shown similar findings [10, 38, 89, 90]. For example, Hu [10] found that higher education students were concerned about accuracy, privacy, ethical issues, and the impact on personal development. A significant factor contributing to students’ reluctance to leverage Gen AI tools like ChatGPT is the absence of clear policy guidelines from many higher education institutions [10, 38, 39]. Students may therefore hesitate to use Gen AI for academic purposes or may choose not to disclose their use of such tools in academic settings due to concerns about academic integrity.

ChatGPT may also fail to provide accurate information to improve student learning. Its responses are based on what is available in its training data and may lack a true understanding of the real academic world; there is therefore a greater likelihood that ChatGPT may propagate misleading and inaccurate information without tangible academic sources and rigor, which can be detrimental to student learning. Bai et al. [91], in a study investigating the cognitive effects of ChatGPT on learning and memory, found that AI models, including ChatGPT, are promising for supporting personalized student learning and access to academic information, but that users may be exposed to risks such as reduced critical thinking capabilities and a decline in memory retention, as the results of the present study have also shown. While ChatGPT can be a powerful teaching and learning tool for students in the global south, it simultaneously raises serious concerns, such as overreliance, academic integrity, and the accuracy of information, especially because the students in this study have not received any training or institutional guidelines on how to use ChatGPT safely and constructively.

In analyzing students’ demographics and their views on ChatGPT, there were no significant variations in how students perceive ChatGPT among different demographic groups. This suggests that regardless of factors like gender, age, or education level, students have consistent views on ChatGPT. Essentially, students from diverse backgrounds share similar opinions on the advantages, concerns, and accessibility of ChatGPT as a learning tool. This consistency in viewpoints highlights the strength and universality of students’ perspectives on ChatGPT, indicating that its benefits and challenges go beyond demographic differences. These findings emphasize the potential for ChatGPT to be widely accepted among various student populations, demonstrating its versatility and inclusivity in educational settings. Furthermore, the absence of demographic distinctions implies that efforts to incorporate ChatGPT into educational practices do not need to be customized for specific demographic groups, making implementation strategies simpler and promoting equal access to AI-supported learning opportunities for all students. Based on the findings presented, we challenge previous studies that suggest variations in the perceptions of AI models, such as ChatGPT, based on demographic factors like gender and age [70,71,72].

6 Conclusion and implications for policy and practice

The study highlights the need for education stakeholders, particularly in the global south, to embrace innovative pedagogies leveraging AI tools like ChatGPT to enhance learning outcomes and bridge the digital divide. This study provides valuable insights into the awareness and potential role of AI, specifically ChatGPT, in the global south, particularly among tertiary education students. Educational policy makers can leverage students’ awareness of ChatGPT to promote individualized learning and growth. Students could be prepared to align themselves with innovative pedagogies aimed at developing 21st-century skills and core competencies.

Evidence from this study also shows that the majority of students are using ChatGPT for their academic work. However, they are yet to receive any training or institutional guidelines on how to use it safely and constructively to support their learning. Scholars have argued that students will not benefit from the advancement of AI, especially ChatGPT, if they are not trained on appropriate use within the academic space. Atlas [9] maintained that educating students on the appropriate utilization of ChatGPT is helpful in ensuring students use it safely to support their learning. This can be accomplished through workshops, training sessions, or integrating content related to academic integrity and plagiarism into the curriculum. DeLuca et al. [91] argued that encouraging students to collectively define learning objectives and establish criteria for the task, considering the role of AI software, would empower them to assess and discern suitable contexts in which AI such as ChatGPT can serve as a valuable learning tool.

Moreover, as highlighted in this study, students' concerns (especially around academic integrity) about the use of Gen AI tools like ChatGPT can limit their potential use of these tools for academic purposes. To support students in leveraging Gen AI in higher education settings, policymakers can develop well-informed policy guidelines and strategies for the responsible and effective implementation of Gen AI tools. For example, it is important to teach students to use AI-generated content for brainstorming and understanding concepts, rather than relying on it for original research or writing. Clear direction and creating a supportive environment are essential when incorporating Gen AI tools into education. While establishing a policy framework is crucial, providing specific guidance and support within those policies is equally important to ensure their successful implementation and impact. Educators can effectively enhance student learning experiences by using Gen AI models like ChatGPT in a thoughtful and strategic manner.

Additionally, establishing a secure and efficient environment for integrating ChatGPT into educational settings requires the implementation of a comprehensive strategy that focuses on privacy, security, and academic integrity. This strategy includes providing training on ethical use, incorporating privacy measures such as encryption and consent mechanisms, and implementing content moderation to maintain accuracy and appropriateness. Integration with existing platforms allows educators to oversee usage, while feedback mechanisms enable users to promptly report any issues. Continuous monitoring and collaboration with stakeholders ensure adherence to best practices, promoting responsible usage and maximizing ChatGPT’s potential for learning and knowledge acquisition while minimizing risks. By doing so, higher education institutions can enhance teaching and learning experiences.

Other scholars have also emphasized that capacity building or institutional guidelines on the use of Gen AI tools like ChatGPT are beneficial not only to students but to teachers as well. For example, according to Baidoo-Anu and Owusu Ansah [38], by enhancing their professional capabilities, teachers can acquire the necessary skills to effectively leverage ChatGPT and other Gen AI technologies to facilitate advanced pedagogical methods that enhance student learning. Moreover, as the prevalence of AI continues to grow across professional domains, incorporating Gen AI tools into educational settings and guiding students on their constructive and safe utilization can also equip them for success not only in education but also in a future workforce dominated by AI [38]. Consequently, educators could utilize Gen AI models such as ChatGPT to bolster and enhance the learning experiences of their students.

The study further highlights a paradigm shift in the educational landscape, signaling the need for education stakeholders, especially instructors and administrators in the global south, to embrace learner-centered, technology-based pedagogies. Education administrators are encouraged not only to create technologically rich learning environments but also to implement student engagement programs that orient students toward the productive use of ChatGPT. This approach promotes the appropriate use of ChatGPT, addressing individual learning needs, offering immediate feedback, and facilitating a better understanding of concepts.

Finally, the satisfactory psychometric properties and model fit demonstrated during the validation of the SCES highlight its credibility and effectiveness as a comprehensive tool for evaluating students’ perceptions of ChatGPT as an educational resource. These results not only advance research in AI-enhanced education but also provide useful insights for educators and policymakers looking to use Gen AI tools like ChatGPT efficiently in order to improve student learning experiences. With these findings, researchers can use the SCES as a reliable tool to study different aspects of students’ experiences with ChatGPT and analyze how perceptions evolve over time and factors influencing them. Educators and policymakers can use the validated SCES to assess the impact of ChatGPT implementation on student learning experiences and outcomes, identify areas for improvement, and tailor instructional strategies accordingly. These findings add to the limited research available on the development of a reliable scale to assess students’ interactions with ChatGPT in the context of AI-driven education [92, 93].

7 Limitations and future research

While this study’s cross-sectional design offers valuable insights, it is limited in its ability to establish causal relationships. The variables under examination were assessed solely through self-reported instruments distributed via an online survey, potentially introducing social desirability biases in student responses. Although the SCES has been validated among university students, it is important to recognize the potential for errors due to confounders, endogeneity biases, over-reporting and under-reporting biases, and other factors inherent in cross-sectional designs. Additionally, the study is constrained by its reliance on self-reporting, which may not always accurately reflect participants’ behaviors and experiences. Furthermore, the sample was drawn exclusively from Ghanaian higher education students, limiting the generalizability of the findings to other regions or demographic groups within the global south. Follow-up interviews to clarify the quantitative results and consideration of faculty perspectives on ChatGPT could enhance the richness of this investigation.

Despite these limitations, this study significantly enhances our knowledge of university students’ awareness and utilization of ChatGPT in the global south. Subsequent research could explore potential technological interventions to address the educational risks associated with ChatGPT use in Africa. Longitudinal mixed-method studies involving input from higher education students and faculty could provide a more comprehensive understanding of the dynamics and long-term impacts of ChatGPT adoption among different interest groups.