1 Introduction

The digital divide is an ongoing societal challenge in many countries [1]. Studies looking at socially stratified inequalities in both access to digital technologies and digital abilities have shown that being on the wrong side of this divide — often referred to as digital exclusion — leads to lower educational outcomes, reduced physical and mental health, and a lower quality of life [2, 8]. The divide is reinforced by structural barriers such as physical access and exposure to digital technology, but also by psychological inhibitors, whereby a lack of — or negative — exposure erodes confidence and trust in digital tools and technologies, resulting in further disengagement [9].

Artificial intelligence (AI) has advanced rapidly in recent years, opening up a vast swathe of new applications, from tailored educational content [10,11,12] and AI-assisted decision-making [13], to risk profiling in chronic care [14] and generative tools such as OpenAI’s ChatGPT [3]. This next wave of digitalisation, driven by AI, has the potential to perpetuate the existing digital divide if not implemented responsibly and inclusively. Conversely, if implemented in ways that explicitly tackle the factors that contribute to digital exclusion, it presents the opportunity to close these gaps and enhance human capabilities and quality of life for all.

Addressing both the causes and consequences of digital exclusion in the context of the evolving AI landscape is critical to ensure emerging technologies help rather than hinder the closing of digital divides [15]. To assess the potential impact of the digital divide on people’s perceptions of, experiences with, and attitudes towards AI, we need to better understand these relationships. For instance, do people’s current perceptions of digital technology shape their attitudes towards AI technology more specifically? This paper aims to address these questions by first examining the different factors that contribute to a sense of digital exclusion. We then assess the impact of these factors on people’s experiences with everyday AI tools and technologies, and their general attitudes towards AI. These general attitudes comprise (1) a basic level of acceptance of AI; (2) concerns about AI’s global impact; and (3) how much people expect AI to benefit their future. By understanding these relationships, we hope to contribute to an informed understanding of how to deliver more inclusive and responsible AI.

2 Theoretical framework

2.1 Digital technology and society

Information and communications technologies (ICT) have been increasingly integrated into the fabric of society and play a key role in the changing landscape of work, education, social interactions, and even our sense of individual identity [3, 4, 16]. On one hand, these digital technologies have been a significant driver of both economic growth and individual quality of life, particularly in terms of education and healthcare [11, 17, 18]. On the other hand, ICT developments have been associated with controversial questions of intellectual property rights, productivity, and privacy protection, to name a few [19, 20]. More insidiously, questions of affordability, and consequently of access to technology and associated information, are problematic, with evidence of inequities firmly aligned to wider societal inequalities, such as gender, salary, and education [8, 21, 22]. These discrepancies underscore the need to ask who exactly is included when technology is said to provide people with a means to “exercise control over their physical and social worlds in order to achieve practical outcomes” (Kipnis, 1990, p. 4) [23].

In the last 30 years, the notion of the digital divide has come to the attention of researchers, policymakers, and the public [24]. The digital divide describes the gap between people who have adequate access to affordable digital technology and the ability to use these technologies effectively, and people who have limited access or reduced digital skills [1, 12]. These divides have been shown to intensify societal inequities by constraining people’s social and economic capital [25]. During the recent COVID-19 pandemic, the UN Secretary-General described the digital divide as “a matter of life and death” [26, 27], for without access to online information, significant sections of society were excluded from the ability to work, study, and socialise from home, as well as from crucial health-related information and updates [1, 28, 29].

Originally conceptualised as a binary distinction between those who have access and those who do not [30], research into the factors contributing to the digital divide has more recently described its exclusionary nature in terms of three interconnecting levels [1]. The first represents structural access to information technology and is strongly associated with socio-economic factors, such as gender, salary, education, and age. A study by the Pew Research Center found that, of all these factors, annual income had the biggest effect [20, 31].

A lack of structural access then contributes to the second level: digital usage and skills. This level represents the intersection of access and behaviour, and encompasses the deployment of technology, both in terms of range and frequency of use [32, 33]. Here we see how cultural differences contribute to digital outcomes. At a country level, data show associations between industrialisation and digital usage [34, 35], but within countries, sub-cultures are also impacted differentially; for instance, parental education and private versus public school attendance have both been shown to be related to digital exclusion [36, 37]. Moreover, the intersection of group-based differences can be seen clearly in countries like Australia, where digital exclusion is related to geographic remoteness, and this is exacerbated by further mechanisms of cultural exclusion experienced by vulnerable populations, such as Indigenous Australian Peoples [22, 38].

Finally, scholars have identified a third level that captures the outcomes of levels one and two and acknowledges that it is not enough merely to provide access to those previously excluded [25]. The ability to functionally leverage technology in ways that allow us to ‘exercise control over our worlds’ is often determined by internalised psychological processes that can constrain ability for those on the wrong side of the digital divide [9, 33]. This third level captures these psychological factors, with a particular focus on the internalisation of a sense of digital exclusion via reduced feelings of digital well-being [39]. For instance, research has shown that people who report inadequate levels of competence with, or control over, their digital technology demonstrate more negative technology-related outcomes, whether in learning contexts, at work, or more generally [40,41,42,43], thereby reinforcing feelings of digital exclusion.

Apart from the stark inequalities evident during the recent COVID-19 pandemic in which those with no, or limited, digital access were explicitly disadvantaged [17], digital exclusion more generally is related to poorer quality of life [2, 44], worse physical and mental health [5, 18], lower social connectivity [3, 45], and significantly worse educational outcomes [4, 6]. On the other hand, digital inclusion has been shown to have a positive impact on health outcomes through the increasing use of health apps and patient portals, and for this reason is now classified as a social determinant of health [46, 47].

Digital technologies have also been shown to reinforce existing socioeconomic inequalities evident offline [44,45,46]. For instance, Van Deursen and colleagues found that people with lower levels of education, or people living with a disability, even when they spend more time online, spend that time on less economically advantageous activities, such as gaming, social media, and leisure activities. People reporting higher levels of social status, on the other hand, have been shown to leverage their digital technology in ways that measurably increase their educational, commercial, economic, and health-related resources [48, 49]. The intersection of digital overuse with less advantageous online activities, such as social media use, has further been demonstrated to lead to reduced learning, less pro-social behaviour, and lower levels of well-being [50]. Little is currently known, however, about how existing disparities in digital abilities translate into the use of, and attitudes towards, AI technologies.

2.2 Artificial intelligence and the “AI divide”

The latest surge in AI developments and applications has been fuelled by significant levels of investment, which are predicted to reach close to USD $160 billion globally by 2025 [51]. The speed of this roll-out has been particularly marked since the launch of generative AI applications such as OpenAI’s ChatGPT in late 2022 [52,53,54]. Generative AI, a form of AI that uses machine learning methods such as natural language processing or deep neural networks, generates novel content, whether text-, image-, or audio-based. In text form, generative AI applications present as an anthropomorphic entity with which humans can converse. It has been suggested that this particular form of AI will dramatically improve human productivity [55, 56], and alongside AI-augmented automation processes in factories and other settings, the integration of AI into the workplace promises significant competitive advantages [57].

The speed of recent AI developments has, however, outpaced adequate governance and regulation [15, 58], and has prompted concerns regarding the ethical nature of these technologies and their data sources [53,54,55]. Questions have been raised over a potential ‘AI divide’ in which some groups will have greater access to the advantages of this technology than others. This differential in access and adoption can manifest at multiple levels, from country to institution, or between individuals from different socio-economic groups. At a group-based level, this disadvantage may take the form of changes in labour demand: for instance, a decrease in jobs requiring low-level digital skills and an increase in jobs requiring high-level skills will disproportionately impact groups of lower socio-economic status [55, 59,60,61]. For those starting their AI journey from a digitally disadvantaged position, the ability to leverage this technology is likely to be significantly reduced [15]. To date, there is little research on the group-based differences that might influence experiences, perceptions, and attitudes towards AI technologies [63], particularly those related to the digital divide. Understanding these differences is essential to designing, developing, and deploying AI systems in a manner that overcomes persisting digital disparities and minimises the risk of new inclusivity challenges.

Early indicative data from the US demonstrated overall public support for AI, though support was significantly greater amongst wealthy, male, educated people with experience in technology [63]. In Australia, surveys have shown that despite there being support for AI, the majority of people report having little knowledge of what AI actually is [22, 64]. Further studies have revealed cultural differences among residents in China, Germany, and the UK on measures of AI acceptance as well as fear of AI, with some evidence of consistent trends in gender, although this varies according to the type of AI technology being assessed [65]. It remains unknown how these perceptions and attitudes towards AI influence people’s engagement with AI technologies, or how they relate to other digital experiences.

Responsible AI is a frequently cited concept that has been posited as one pathway to AI’s successful uptake and integration into society. It is a highly debated topic across both scholarly circles and applied settings [66], but the pillars of AI responsibility are generally agreed to be inclusivity, transparency, explainability, and accountability [67,68,69]. While there is no consensus definition of, or framework for, how to deliver responsible AI, mechanisms such as governance, regulation, ethical design, risk control, and literacy are often cited [63, 70,71,72]. More broadly, the notion of responsible AI sits within a wider approach known as Responsible Innovation [73], in which new science and technology is articulated as an inclusive practice, designed and delivered ‘for and with society’ [74]. Today, as we navigate the early days of AI’s roll-out, we have the opportunity to apply a more nuanced understanding of the diverse nature of societal differences in AI attitudes, perceptions, and behaviours, and to use what we know about digital exclusion to help shape our understanding and delivery of a truly responsible and inclusive AI [1, 68,69,70,71,72].

3 Present study

The purpose of this study was to explore differences in aspects of digital experience that are related to digital exclusion, and to look at how these interact with people’s experiences with, and attitudes towards, AI. To capture a lived experience of digital exclusion, we created a measure of ‘digital confidence’ based on people’s self-reported level of awareness, familiarity, and sense of competence with digital technology. This measure was designed to capture variance in digital confidence in a way that would be meaningful in the context of people’s everyday experiences with digital technology. We first assessed which factors from the three levels of the digital divide (socio-demographic and socio-economic factors, digital engagement, and digital well-being) contributed to this measure of digital confidence. This was important to ensure that our measure was indeed serving as a proxy outcome of digital exclusion, one to which structural determinants of exclusion, behavioural manifestations of exclusion, and psychological internalisations of exclusion (via lower levels of digital well-being) could all be shown to contribute. We then explored the relationships between digital confidence and people’s experiences with AI tools and technologies, such as how positive or competent they feel when using them. We also examined links between digital confidence and attitudes to AI more generally (e.g., beliefs that AI is ethical or beneficial). Finally, we evaluated the relationship between personal experiences with AI and these general attitudes, and tested the extent to which this relationship was moderated by digital confidence. Specifically, we evaluated whether the relationship between experiences and attitudes varied for people with high, medium, or low digital confidence.

Research questions:

  1. What factors contribute to digital confidence?

  2. Is digital confidence related to experiences with, and attitudes towards, AI?

  3. Does digital confidence moderate the relationship between experiences with, and attitudes towards, AI?

4 Methods

4.1 Participants

Data were collected using the online crowd-sourcing platform Prolific. All participants (N = 303) were residents of Australia. The survey was advertised as a questionnaire about perceptions of AI that did not require any prior knowledge. Participants were paid in Australian dollars and, according to the platform’s policy, at an equivalent rate above the UK minimum wage. Half the sample identified as women, 49% as men, and 1% as gender diverse. The average age of the sample was 36 years (range 18–86). The sample was adequately representative on gender, age, education, employment, and salary (see Appendix 3). Ethical approval for this study was obtained from the CSIRO’s Human Research Ethics Committee (#089–23).

4.2 Measures

4.2.1 Socio-demographic and socio-economic differences (digital divide level 1 variable)

Participants provided information on a number of measures capturing demographic and socioeconomic factors: specifically, gender, age, education, and salary.

4.2.2 Engagement with digital technologies (digital divide level 2 variable)

To achieve a behavioural measure of digital engagement, we gave participants a list of everyday digital technologies and asked them to select how many they regularly used (for a full list, refer to Appendix 1). We then asked participants how often they engaged with the technologies selected on a 6-point scale from ‘Rarely’ through to ‘Several times a day’, and aggregated these scores to generate a mean level of digital engagement.
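
As a minimal sketch of this aggregation step (the column names and frequency labels below are illustrative assumptions, not the study’s actual survey export):

```python
import pandas as pd

# Assumed 6-point frequency labels, mapped to scores 1-6.
freq_order = ["Rarely", "Monthly", "Weekly", "Several times a week",
              "Daily", "Several times a day"]
freq_to_score = {label: i + 1 for i, label in enumerate(freq_order)}

# Hypothetical responses: one column per technology, None where not selected.
df = pd.DataFrame({
    "smartphone": ["Several times a day", "Daily", None],
    "smart_tv":   ["Weekly", None, "Rarely"],
    "wearable":   [None, "Monthly", "Daily"],
})

scores = df.apply(lambda col: col.map(freq_to_score)).astype(float)
# Mean frequency across only the technologies each participant uses.
df["digital_engagement"] = scores.mean(axis=1, skipna=True)
print(df)
```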

4.2.3 Feelings of psychological well-being associated with digital technology (digital divide level 3 variable)

To measure the impact of digital technology on people’s psychological well-being, we used an adapted scale known as Technology Effects on Needs Satisfaction in Life [39, 75]. This scale was developed based on the validated Basic Psychological Need Satisfaction and Frustration scale [76], which itself is an extended measure of Self-determination [77]. The scale was designed to investigate the extent to which a user perceives technology to be negatively impacting on their basic psychological needs for autonomy, competence, and relatedness. The basic psychological need for autonomy refers to having a sense of meaning and control in one’s life. Competence refers to a sense of mastery and achievement, and relatedness to a need for social connection [77]. We adapted the Technology Effects on Needs Satisfaction in Life scale to provide a shortened measure of people’s perceptions of digital technology generally, with items such as ‘Using digital technology has made me feel less capable in my life’. Participants were asked to rate their agreement or disagreement with these statements on a 5-point Likert scale from ‘Strongly disagree’ to ‘Strongly agree’. Internal scale consistency was good (α = 0.74) [78].
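
The internal consistencies reported throughout (e.g., α = 0.74 here) can be computed with the standard Cronbach’s alpha formula; a self-contained sketch with fabricated item responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants x items matrix of numeric responses."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Fabricated 5-point Likert data: a shared 'trait' signal plus item-level noise.
rng = np.random.default_rng(0)
trait = rng.integers(1, 6, size=(100, 1))
items = np.clip(trait + rng.integers(-1, 2, size=(100, 4)), 1, 5).astype(float)
print(f"alpha = {cronbach_alpha(items):.2f}")
```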

4.2.4 Digital confidence

To gauge people’s sense of confidence with digital technologies, we created a three-item scale in which we asked people to rate their awareness of digital technology, their comfort and familiarity with digital technology, and their self-perceived level of competency with this technology. Responses were indicated using a 5-point Likert scale from ‘Low’ to ‘High’. Internal scale consistency was excellent (α = 0.95).

4.2.5 Experience of everyday AI tools and technologies

Participants were asked to think about the everyday AI tools and technologies they had indicated they used, and to rate their experiences with these technologies using a bespoke 3-item scale: ‘Thinking about the AI technologies you use, on the whole, how positive do you feel about using them?’; ‘…how much do you trust these AI technologies?’; and ‘…how competent do you feel when using these AI technologies?’. Participants were asked to respond to these questions on a 5-point Likert scale from ‘Not at all’ to ‘Very much’. Internal scale consistency was good (α = 0.80).

4.2.6 Attitudes towards AI technologies

After participants had reported on their experience of using everyday AI tools and technologies, we wanted to capture their opinions and attitudes towards AI technologies more broadly, focusing on acceptance of AI technologies, opinion of AI’s global impact, and expectations of the impact of AI on people’s personal future. For consistency of opinion, we presented participants with a standard definition of AI [79, 80], which read ‘Artificial intelligence (AI) refers to computer systems that can perform tasks, or make predictions, recommendations, or decisions, that usually require human intelligence. AI systems can perform these tasks and make these decisions based on objectives set by humans but without explicit human interactions’. Once participants had read this definition, they went on to answer the following questions.

4.2.6.1 General attitudes to AI technology

We used a measure of acceptance of artificial intelligence known as the General Attitudes to AI Scale (GAAIS) [81]. To minimise survey fatigue [82], we deployed an 8-item short form of this scale [83], which included four positively valenced items such as ‘There are many beneficial applications of Artificial Intelligence’, and four negatively valenced items such as ‘I shiver with discomfort when I think about future uses of AI’. Participants rated these statements on a 5-point Likert scale from ‘Strongly disagree’ to ‘Strongly agree’. Internal scale consistency was good (α = 0.83).

4.2.6.2 Attitudes to AI’s global impact

To capture people’s opinions on the impact of AI on the world, and with a particular focus on society, people, and the planet, we used a bespoke 7-item scale which included items such as ‘I believe that the deployment of AI within society is ethical’. Participants were asked to rate these statements on a 5-point Likert scale from ‘Strongly disagree’ to ‘Strongly agree’. Internal scale consistency was good (α = 0.88).

4.2.6.3 Attitudes to AI’s impact on personal life

Finally, we captured people’s perceptions of how AI technologies might impact their lives personally using a bespoke 4-item scale that included questions such as ‘Do you feel that AI will contribute positively to your own life?’. Participants were asked to respond to these questions on a 5-point Likert scale from ‘Not at all’ to ‘Very much’. Internal scale consistency was good (α = 0.83). For a summary of measures used refer to Table 1.

Table 1 Summary of measures used

5 Results

5.1 Research question 1: What factors contribute to digital confidence?

To address this question, we conducted a hierarchical regression with digital confidence as our outcome variable. Based on the existing literature on the digital divide, our predictor variables were categorised according to the three levels thought to contribute to digital exclusion, beginning with the most established measures. The first model contained the level-one factors known to contribute to the digital divide: gender, age, salary, and education. In the second model, we entered our measure of digital engagement, the behavioural level-two factor that builds on level-one access. In the third model, we entered the level-three factor: our measure of the impact of digital technology on psychological well-being.
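
A sketch of this hierarchical approach using statsmodels (the variable names and simulated data are assumptions; z-scoring the continuous variables yields standardized betas, and comparing the nested models tests the incremental variance at each step):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),                   # assumed coding: 0 = man, 1 = woman
    "age": rng.integers(18, 87, n).astype(float),
    "salary": rng.integers(1, 10, n).astype(float),    # salary band
    "education": rng.integers(1, 6, n).astype(float),  # education level
    "engagement": rng.normal(4, 1, n),                 # level 2: digital engagement
    "needs_impact": rng.normal(3, 1, n),               # level 3: perceived impact on needs
    "confidence": rng.normal(3.5, 1, n),               # outcome: digital confidence
})

# Standardize continuous variables so coefficients read as betas.
for col in ["age", "salary", "education", "engagement", "needs_impact", "confidence"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std(ddof=1)

m1 = smf.ols("confidence ~ gender + age + salary + education", df).fit()
m2 = smf.ols("confidence ~ gender + age + salary + education + engagement", df).fit()
m3 = smf.ols("confidence ~ gender + age + salary + education + engagement"
             " + needs_impact", df).fit()

print(m1.rsquared, m2.rsquared, m3.rsquared)  # R-squared at each step
print(anova_lm(m1, m2, m3))                   # F-tests for incremental variance
```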

Results showed that the first model was significant, F(4,287) = 19.16, p < 0.001, R² = 0.21. Gender was significantly negatively associated with digital confidence (β = −0.41, t = −5.72, p < 0.001), as was age (β = −0.19, t = −5.24, p < 0.001), such that women and older people reported significantly less digital confidence. Salary was significantly positively related (β = 0.13, t = 3.35, p < 0.001). Education, however, was not significantly associated with digital confidence (β = −0.02, t = −0.50, p = 0.618).

The second model was also significant, F(5,286) = 19.59, p < 0.001, R² = 0.26, and accounted for unique variance in digital confidence over and above model 1. The level-two factor, digital engagement, was significantly and positively associated with digital confidence (β = 0.15, t = 4.13, p < 0.001).

Finally, the third model was also significant, F(6,285) = 18.13, p < 0.001, R² = 0.28, and again accounted for unique variance in digital confidence over and above model 2, such that perceived negative impacts of digital technology on well-being were associated with lower digital confidence (β = −0.10, t = −2.88, p = 0.004). See Table 2 for a summary.

Table 2 Hierarchical regression analysis of predictors of digital confidence

Overall, these results support the validity of our digital confidence measure by demonstrating consistency with established factors known to contribute to the digital divide. Furthermore, our results underscore the importance of looking beyond structurally derived first-level variables in an explanatory framework for digital exclusion, to include more holistic measures of behavioural and psychological exclusion.

5.2 Research question 2: Is digital confidence related to experiences with, and attitudes towards, AI?

Having established the factors contributing to a person’s sense of digital confidence, we then assessed the relationships between this variable and experience of everyday AI tools, and the three measures of attitudes towards AI (general attitudes to AI technology, attitudes to AI’s global impact, and attitudes to AI’s impact on personal life). As can be seen in Table 3, higher levels of digital confidence were significantly associated with more positive experiences with, and attitudes towards, AI.

Table 3 Means, standard deviations, and correlations between digital confidence and experiences with and attitudes towards AI (general, global, personal)
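
A minimal sketch of how such a descriptive table can be produced (the correlated data and column names below are fabricated assumptions):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 300
latent = rng.normal(size=n)  # shared signal to induce positive correlations
df = pd.DataFrame({
    name: latent + rng.normal(scale=0.8, size=n)
    for name in ["digital_confidence", "ai_experience",
                 "gaais", "ai_global_impact", "ai_personal_impact"]
})

print(df.agg(["mean", "std"]).T.round(2))  # means and standard deviations
print(df.corr().round(2))                  # Pearson correlation matrix
```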

5.3 Research question 3: Does digital confidence moderate the relationship between experience of, and attitudes towards, AI?

Given the moderate to strong associations (rs = 0.62–0.75) between people’s AI experiences and their attitudes towards AI, we wanted to explore the role of digital confidence within this relationship. To this end, we ran multiple regressions assessing the potential moderating effect of digital confidence on this relationship, with a separate multiple regression conducted for each of the three attitude measures. For a visual representation of this analysis, refer to Fig. 1.

Fig. 1 Analytic model illustrating the relationship between experiences with and attitudes towards AI, moderated by digital confidence
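
A sketch of one such moderation model (simulated data; the variable names are assumptions; in a patsy formula, `a * b` expands to the main effects plus the `a:b` interaction term):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
confidence = rng.normal(size=n)
experience = 0.5 * confidence + rng.normal(scale=0.9, size=n)
# Simulate a stronger experience-to-attitude slope at higher confidence.
attitude = (0.2 * confidence
            + (0.3 + 0.2 * confidence) * experience
            + rng.normal(scale=0.8, size=n))
df = pd.DataFrame({"experience": experience,
                   "confidence": confidence,
                   "attitude": attitude})

fit = smf.ols("attitude ~ experience * confidence", df).fit()
print(fit.summary().tables[1])  # coefficients, incl. the interaction term
```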

5.3.1 General attitudes to AI

Results showed that the overall model was significant, F(3,290) = 70.34, R² = 0.42, p < 0.001. While there was no significant main effect of experiences with AI technologies on general attitudes, t = −0.93, p = 0.354, there was a significant main effect of digital confidence, t = −3.73, p < 0.001, and a significant interaction between the two, t = 3.99, p < 0.001. The interaction (shown in Fig. 2a) was probed by testing the conditional effects of experiences with AI on general attitudes to AI at three levels of digital confidence: high, medium, and low. Results showed that while the relationship was significant at all levels of digital confidence (ps < 0.001), it was stronger for those with high rather than low digital confidence (the beta at one standard deviation above the mean was twice as large as the beta at one standard deviation below the mean).

Fig. 2 Graphs showing the significant interaction effect of level of digital confidence on the relationship between experience of AI technologies and attitudes to AI (general, global, personal)
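
Continuing the sketch above (reusing `fit` and `df`), the conditional effects at ±1 SD of digital confidence can be derived from the coefficients and their covariance matrix:

```python
import numpy as np

b = fit.params
V = fit.cov_params()
sd = df["confidence"].std(ddof=1)

for label, c in [("low (-1 SD)", -sd), ("medium (mean)", 0.0), ("high (+1 SD)", sd)]:
    # Conditional slope of experience at confidence = c: b_exp + c * b_interaction.
    slope = b["experience"] + c * b["experience:confidence"]
    se = np.sqrt(V.loc["experience", "experience"]
                 + c**2 * V.loc["experience:confidence", "experience:confidence"]
                 + 2 * c * V.loc["experience", "experience:confidence"])
    print(f"{label}: slope = {slope:.3f}, se = {se:.3f}, z = {slope / se:.2f}")
```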

5.3.2 Attitudes to AI’s global impact

Results showed that the overall model was significant, F(3,290) = 67.30, R² = 0.41, p < 0.001. As with the scale measuring general attitudes to AI, there was no main effect of experience with AI technologies, t = 0.10, p = 0.923, a significant main effect of digital confidence, t = −2.80, p = 0.005, and a significant interaction, t = 2.94, p = 0.004. The interaction (shown in Fig. 2b) was again probed by testing the conditional effects of experiences with AI technologies on attitudes to AI’s global impact at three levels of digital confidence (low, medium, high). Results showed that while the relationship was significant at all levels of digital confidence (ps < 0.010), it was stronger for those with high rather than low digital confidence (again, the beta at one standard deviation above the mean was twice as large as the beta at one standard deviation below the mean).

5.3.3 Attitudes to AI’s impact on personal life

Results showed that the overall model was significant, F(3,290) = 137.7, R² = 0.59, p < 0.001, with a marginally significant main effect of experience with AI technologies, t = 1.84, p = 0.067, no significant main effect of digital confidence, t = −1.04, p = 0.300, and a significant interaction, t = 2.21, p = 0.028. Conditional effects analyses showed that the relationship between experience and attitudes was significant at all levels of digital confidence (ps < 0.010), and was again stronger for those with high rather than low digital confidence (refer to Appendix 2 for a table of conditional effects).

6 Discussion

The aim of this paper was to better understand digital exclusion and its relationship to people’s experiences with and attitudes towards AI. Digital exclusion has been shown to be associated with a lower quality of life, lower educational outcomes, and even reduced physical and mental health [2,3,4,5,6]. If these digital disadvantages spill over into people’s perceptions of, experiences with, and attitudes towards AI technology, the AI revolution risks leaving behind the same groups of people who already find themselves on the wrong side of the digital divide [72, 84]. To undertake this exploration, we created a novel measure capturing a lived experience of digital exclusion. This measure, described as digital confidence, allowed us to assess the relationship between people’s current levels of confidence with digital technology and their experiences with everyday AI tools and technologies, as well as their attitudes towards AI more generally. This measure is important not only because it allowed us to conceptually and empirically bridge the gap between experiences with digital technology and attitudes to AI, but also because it captures the downstream consequences of digital exclusion, as reflected in perceptions of awareness, comfort, and sense of competency with digital technology.

This paper used an exploratory approach to better understand these relationships, and in particular set out to investigate three research questions: (1) What factors contribute to digital confidence? (2) Is digital confidence related to experiences with and attitudes towards AI? And (3) does digital confidence moderate the relationship between experiences with and attitudes towards AI? Looking first to question one, we demonstrated the validity of our measure of digital confidence, with the relationships revealed being consistent with associations previously documented in the digital divide literature. Our data showed that women, older people, those on lower salaries, people with lower digital engagement, and those reporting lower levels of digital well-being scored significantly lower on digital confidence. These results provide supportive evidence of the need to include all three contributing levels (structural, behavioural, and psychological) when it comes to understanding people’s feelings of confidence with digital technology. This is particularly the case for new technologies such as AI, which tend to generate polarised and emotive responses, thus potentially exacerbating the chances of psychologically internalised levels of digital exclusion [33].

When it came to the second question, we found strong evidence of the positive relationships between levels of digital confidence and both experiences with everyday AI tools and technologies, as well as attitudes towards AI more generally. This finding underscores the notion of a spill-over effect, such that experiences with existing digital technology, whether positive or negative, are likely to impact on perceptions, experiences, and attitudes towards new digital applications such as AI [9, 33].

Finally, question three looked at the potential for different levels of digital confidence to moderate the relationship between people’s personal experiences of everyday AI tools and technologies and their attitudes towards AI more broadly. Here we found consistent evidence of the important role that digital confidence plays within this relationship. Across all three measures capturing attitudes towards AI, higher levels of digital confidence significantly strengthened this relationship, such that the relationship between experiences with AI and attitudes to AI was strongest when people’s level of digital confidence was high.

AI technologies [8, 12] represent a step change in digital technology. In the last year we have seen an unprecedented level of interest in and engagement with AI, particularly in the form of generative AI products such as OpenAI’s ChatGPT [52]. It is without question that the impact of AI on society will only increase, and concerns have been raised globally as to the ethical implications of this acceleration [53, 54, 62]. These concerns have focused on a range of issues, but inclusivity has often been a key focus [69]. For instance, when considering transparency of data, criticisms have centred on the question of who is under-represented in the data, and when considering issues of AI accountability, discussion often converges on values of inclusivity and fairness [68, 71, 85].

The motivation to produce inclusive AI is one of the pillars of ‘Responsible AI’ [66, 87]. This responsible approach to disruptive new science and technology defines itself as doing ‘science for and with society’ [74], and as such takes its starting position as one of societal involvement, from development through to delivery [67, 73]. However, when it comes to public experiences with and attitudes towards AI, and how these might differ across the vast range of people who comprise society, there is little empirical data available. This brings us back to the purpose of our paper. Over the last 30 years, scholars, technologists, and policymakers have concerned themselves with the digital divide — a phenomenon in which those who have reduced access to, experience with, or capabilities in digital technologies have been shown to be significantly disadvantaged [1, 24, 25]. As AI represents the next, and possibly largest wave of the digital evolution, those groups in society who find themselves on the wrong side of this divide are more likely to experience a spill-over effect from digital exclusion to AI exclusion. As we navigate the early stages of this fourth industrial revolution, pre-emptively understanding these relationships will be essential when it comes to designing, developing, and delivering AI that truly is inclusive of all in society [7, 32].

6.1 Limitations and future directions

The intersection between digital technology and society is complex and varies across countries and cultures [87, 88]. Therefore, understanding the nature of inclusive AI and how this can be delivered requires contextually varied data collection and analyses. Our paper takes a first step towards measuring one aspect of this intersection, namely the spill-over from people’s confidence with digital technology to their experiences with, and attitudes towards, AI. However, we collected our sample using an online community crowd-sourcing platform, and although our participants were representative in terms of gender, age, salary, and education, they were all residents of Australia. This sampling constraint may limit the generalisability of our findings on this global issue, particularly when considering developing countries where issues of digital exclusion are likely to be more severe. Furthermore, the nature of online data collection presupposes that those completing the survey already have, first, a level of digital access and, second, a relatively proficient level of digital expertise. This data collection methodology thus excluded those participants for whom access to, or ability with, digital technology was inhibited. This limitation is particularly noteworthy in the Australian context, given the extreme digital inequities evidenced for Indigenous Australian Peoples or those in remote locations with limited digital access [22, 89].

Going forward with this research, future data collection with populations in different countries, subpopulations from diverse linguistic and cultural backgrounds, as well as groups with varying levels of digital access, would provide further valuable insight into the relationship between confidence with digital technology and experiences and attitudes towards AI. Furthermore, this relationship would benefit from being tested with populations experiencing more complex relationships with data, digital technology, and AI. For instance, in the context of Indigenous Australian knowledge rights, the question of data justice is particularly challenging [90]. More expansive sampling would also allow researchers to model more complex relationships, for instance, testing the reciprocity of digital confidence and attitudes to AI.

Despite the current limitations of this study, strong evidence emerged of the influential role that digital confidence plays in shaping people’s experiences with and attitudes to AI. This finding underscores the importance of studying the downstream consequences of the digital divide, even within populations experiencing lesser levels of digital exclusion. In so doing, it highlights the need to understand how structural exclusion can lead to psychological exclusion, providing a pathway for future research to test the mechanisms through which group-based differentials, such as institutional, cultural, or geographical differences, can transition to become individual-based differences.

7 Conclusion

AI technologies have the potential to solve some of society’s most complex challenges, whether those be environmental, economic, or health-related [91]. However, the speed of AI’s recent integration into society has prompted ethical concerns, particularly a fear that this next digital evolution will leave behind significant sections of society [84]. Tackling this concern requires leveraging our knowledge of the existing digital divide to better predict the potential for digital exclusion to spill over into emergent inequities in AI access, knowledge, and competencies. Applying an ethical approach to AI in order to “improve the lives of all people around the world” [92] therefore requires an understanding of when and how levels of digital exclusion can shape experiences of AI. Only through this understanding can we hope to avoid perpetuating or even exacerbating this significant societal schism.