Abstract
The digital divide remains an ongoing societal concern, with digital exclusion shown to have a significantly detrimental impact on people’s quality of life. Artificial intelligence (AI), the latest wave of digitalisation, is being integrated into the fabric of society at an accelerated rate, the speed of which has prompted ethical concerns. Without addressing the digital divide, the AI revolution risks exacerbating the existing consequences of digital exclusion and limiting the potential for all people to reap the benefits provided by AI. To understand the factors that might contribute to experiences of AI, and how these might be related to digital exclusion, we surveyed a diverse online community sample (N = 303). We created a novel measure of digital confidence capturing individual levels of awareness, familiarity, and sense of competence with digital technology. Results indicated that measures of digital confidence were predicted by structural, behavioural, and psychological differences, such that women, older people, those on lower salaries, people with less digital access, and those with lower digital well-being, reported significantly less digital confidence. Furthermore, digital confidence significantly moderated the relationship between people’s experiences with everyday AI technologies and their general attitudes towards AI. This understanding of the spill-over effects of digital exclusion onto experiences of AI is fundamental to the articulation and delivery of inclusive AI.
1 Introduction
The digital divide is an ongoing societal challenge in many countries [1]. Studies looking at socially stratified inequalities in both access to digital technologies and digital abilities have shown that being on the wrong side of this divide — often referred to as digital exclusion — leads to lower educational outcomes, reduced physical and mental health, and a lower quality of life [2, 8]. The divide is shored up by structural barriers such as physical access and exposure to digital technology, but also by psychological inhibitors, whereby a lack of — or negative — exposure erodes confidence and trust in digital tools and technologies, resulting in further levels of disengagement [9].
Advances in artificial intelligence (AI) have progressed rapidly in recent times, opening up a vast swathe of new AI applications, from tailored educational content [10,11,12] and AI-assisted decision-making [13], to risk profiling in chronic care [14] and generative tools such as OpenAI’s ChatGPT [3]. This next wave of digitalisation driven by AI has the potential to perpetuate the existing digital divide if not implemented responsibly and inclusively. Conversely, if implemented in ways that explicitly tackle the factors that contribute to digital exclusion, it presents the opportunity to close these gaps and enhance human capabilities and quality of life for all.
Addressing both the causes and consequences of digital exclusion in the context of the evolving AI landscape is critical to ensure emerging technologies help rather than hinder the closing of digital divides [15]. To assess the potential impact of the digital divide on people’s perceptions of, experiences with, and attitudes towards AI, we need to better understand these relationships. For instance, do people’s current perceptions of digital technology shape their attitudes towards AI technology more specifically? This paper aims to address these questions by first examining the different factors that contribute to a sense of digital exclusion. We then assess the impact of these factors on people’s experiences with everyday AI tools and technologies, and their general attitudes towards AI. These general attitudes comprise (1) a basic level of acceptance of AI; (2) concerns about AI’s global impact; and (3) how much people expect AI to benefit their future. By understanding these relationships, we hope to contribute to an informed understanding of how to deliver more inclusive and responsible AI.
2 Theoretical framework
2.1 Digital technology and society
Information and communications technologies (ICT) have been increasingly integrated into the fabric of society and play a key role in the changing landscape of work, education, social interactions, and even our sense of individual identity [3, 4, 16]. On one hand, these digital technologies have been a significant driver of both economic growth and individual quality of life, particularly in terms of education and healthcare [11, 17, 18]. On the other hand, ICT developments have been associated with controversial questions of intellectual property rights, productivity, and privacy protection, to name a few [19, 20]. More insidiously, questions of affordability and subsequently access to technology and associated information are problematic, with evidence of inequities firmly aligned to wider societal inequalities, such as gender, salary, and education [8, 21, 22]. These discrepancies underscore the need to assess who exactly is included when we hear talk of how technology can provide people with a means to “exercise control over their physical and social worlds in order to achieve practical outcomes,” (p.4, Kipnis, 1990) [23].
In the last 30 years, the notion of the digital divide has come to the attention of researchers, policymakers, and the public [24]. The digital divide describes a gap between people who have adequate levels of access to affordable digital technology as well as an ability to use these technologies effectively, as opposed to people who have limited access or reduced digital skills [1, 12]. These divides have been shown to intensify societal inequities by constraining people’s social and economic capital [25]. During the recent COVID-19 pandemic, the UN Secretary-General described the digital divide as “a matter of life and death” [26, 27], for without access to online information, significant sections of society were excluded from the ability to work, study, and socialise from home, as well as the need to gain crucial health-related information and updates [1, 28, 29].
Originally, the digital divide was conceptualised as a binary measure of those who have access and those who do not [30]; more recent research into its contributing factors has described its exclusionary nature in terms of three interconnecting levels [1]. The first represents structural access to information technology and is strongly associated with socio-economic factors, such as gender, salary, education, and age. A study by the Pew Research Center found that, of all these factors, annual income had the biggest effect [20, 31].
The impact of a lack of structural access then feeds into the second level: digital usage and skills. This level represents the intersection of access and behaviour, and encompasses the deployment of technology in terms of both range and frequency of use [32, 33]. Here we see how cultural differences contribute to digital outcomes. At a country level, data shows associations between industrialisation and digital usage [34, 35], but within countries we also see how sub-cultures are impacted differentially; for instance, level of parental education and private versus public school attendance have both been shown to be related to digital exclusion [36, 37]. Moreover, the intersection of group-based differences can be seen clearly in countries like Australia, where digital exclusion is related to geographic remoteness, and this is exacerbated by further mechanisms of cultural exclusion experienced by vulnerable populations, such as Indigenous Australian Peoples [22, 38].
Finally, scholars have identified a third factor that captures the outcomes of levels one and two and acknowledges that it is not enough to merely provide access to those previously excluded [25]. The ability to functionally leverage technology in ways that allow us to ‘exercise control over our worlds’ is often determined by internalised psychological processes that can serve to constrain ability for those on the wrong side of the digital divide [9, 33]. This third level captures these psychological factors, with a particular focus on the internalisation of a sense of digital exclusion, via reduced feelings of digital well-being [39]. For instance, research has shown that people who report inadequate levels of competence with, or control over, their digital technology demonstrate more negative technology-related outcomes, whether in learning, at work, or more generally [40,41,42,43], thereby reinforcing feelings of digital exclusion.
Apart from the stark inequalities evident during the recent COVID-19 pandemic in which those with no, or limited, digital access were explicitly disadvantaged [17], digital exclusion more generally is related to poorer quality of life [2, 44], worse physical and mental health [5, 18], lower social connectivity [3, 45], and significantly worse educational outcomes [4, 6]. On the other hand, digital inclusion has been shown to have a positive impact on health outcomes through the increasing use of health apps and patient portals, and for this reason is now classified as a social determinant of health [46, 47].
Digital technologies have also been shown to reinforce existing socioeconomic inequalities evident offline [44,45,46]. For instance, Van Deursen and colleagues found that people with lower levels of education, or people living with a disability, even when they spend more time online, spend that time on less economically advantageous activities, such as gaming, social media, and leisure. People reporting higher levels of social status, on the other hand, have been shown to leverage their digital technology in ways that measurably increase their educational, commercial, economic, and health-related resources [48, 49]. The intersection of digital overuse with less advantageous online activities, such as social media use, has further been shown to lead to reduced learning, less pro-social behaviour, and lower levels of well-being [50]. As to how existing disparities in digital abilities translate to the use of, and attitudes towards, AI technologies, little is currently known.
2.2 Artificial intelligence and the “AI divide”
The latest surge in AI developments and applications has been fuelled by significant levels of investment, which are predicted to reach close to USD $160 billion globally by 2025 [51]. The speed of this roll-out has been particularly marked since the launch of generative AI applications such as OpenAI’s ChatGPT in late 2022 [52,53,54]. Generative AI, a form of AI that uses machine learning methods such as natural language processing or deep neural nets, is used to generate novel content, whether text, image or audio-based. In text form, generative AI applications present as an anthropomorphic entity with which humans can converse. It has been suggested that this particular form of AI will dramatically improve human productivity [55, 56], and alongside AI-augmented automation processes in factories and other settings, the integration of AI into the workplace promises significant competitive advantages [57].
The speed of recent AI developments has, however, outpaced adequate governance and regulation [15, 58], and has prompted concerns regarding the ethical nature of these technologies and their data sources [53,54,55]. Questions have been raised over a potential ‘AI divide’ in which some groups will have greater access to the advantages of this technology than others. This differential in access and adoption can manifest at multiple levels, from country to institution, or between individuals from different socio-economic groups. At a group-based level, this disadvantage may take the form of changes in labour demand. For instance, a decrease in jobs requiring low-level digital skills and an increase in jobs requiring high-level skills will disproportionately impact groups from a lower socio-economic status [55, 59,60,61]. It is likely that for those starting their AI journey from a digitally disadvantaged position, the ability to leverage this technology will be significantly reduced [15]. To date, there is little research on the group-based differences that might influence experiences, perceptions, and attitudes towards AI technologies [63], particularly those related to the digital divide. Understanding these differences is essential to the design, development, and deployment of AI systems in a manner that overcomes persisting digital disparities and minimises the risk of new inclusivity challenges.
Early indicative data from the US demonstrated overall public support for AI, but that this was significantly greater amongst wealthy, male, educated people with experience in technology [63]. In Australia, surveys have shown that despite there being support for AI, the majority of people report having little knowledge of what AI actually is [22, 64]. Further studies have revealed cultural differences among residents in China, Germany, and the UK on measures of AI acceptance as well as fear of AI, with some evidence of consistent trends in gender, although this varies according to the type of AI technology being assessed [65]. It is not yet known how these perceptions and attitudes towards AI influence people’s engagement with AI technologies, nor how they relate to other digital experiences.
Responsible AI is a frequently cited concept that has been posited as one solution to AI’s successful uptake and integration into society. It is a highly debated topic across both scholarly circles and applied settings [66], but the pillars of AI responsibility are generally agreed to be inclusivity, transparency, explainability, and accountability [67,68,69]. While there is no consensus definition or framework around how to deliver responsible AI, mechanisms such as governance, regulation, ethical design, risk control, and literacy are often cited [63, 70,71,72]. More broadly, the notion of responsible AI sits within a wider approach known as Responsible Innovation [73], in which new science and technology is articulated as an inclusive practice, designed and delivered ‘for and with society’ [74]. Today, as we navigate the early days of AI’s roll-out, we have the opportunity to apply a more nuanced understanding of the diverse nature of societal differences in AI attitudes, perceptions, and behaviours, and to use what we know about digital exclusion to help shape our understanding and delivery of a truly responsible and inclusive AI [1, 68,69,70,71,72].
3 Present study
The purpose of this study was to explore differences in aspects of digital experience that are related to digital exclusion, and to look at how these interact with people’s experiences with, and attitudes towards, AI. To capture a lived experience of digital exclusion, we created a measure of ‘digital confidence’ based on people’s self-reported level of awareness, familiarity, and sense of competence with digital technology. This measure was designed to capture variance in people’s confidence with digital technology in a way that would be meaningful in the context of people’s everyday experiences with digital technology. We first assessed which factors from the three levels of the digital divide (socio-demographic and socio-economic factors, digital engagement, and digital well-being) were contributing to this measure of digital confidence. This was important to ensure that our measure was indeed serving as a proxy outcome of digital exclusion, one in which structural determinants of exclusion, behavioural manifestations of exclusion, and also psychological internalisations of exclusion (via lower levels of digital well-being) are shown to be contributing. We then explored the relationships between digital confidence and people’s experiences with AI tools and technologies, such as how positive or competent they feel when using them. We also examined links between digital confidence and attitudes to AI more generally (e.g., beliefs that AI is ethical or beneficial). Finally, we evaluated the relationship between personal experiences with AI and these general attitudes, and tested the extent to which this relationship was moderated by digital confidence. Specifically, we evaluated whether the relationship between experiences and attitudes varied for people with high, medium, or low digital confidence.
Research questions:
1. What factors contribute to digital confidence?

2. Is digital confidence related to experiences with, and attitudes towards, AI?

3. Does digital confidence moderate the relationship between experiences with, and attitudes towards, AI?
4 Methods
4.1 Participants
Data was collected using the online crowd-sourcing platform Prolific. All participants (N = 303) were residents of Australia. The survey was advertised as a questionnaire about perceptions of AI that did not require any prior knowledge. Participants were paid in Australian dollars and, according to the platform’s policy, at an equivalent rate above the UK minimum wage. Half the sample identified as women, 49% as men, and 1% as gender diverse. The average age of the sample was 36 years (range 18–86). The sample was adequately representative on gender, age, education, employment, and salary (see Appendix 3). Ethical approval for this study was obtained from the CSIRO’s Human Research Ethics Committee (#089–23).
4.2 Measures
4.2.1 Socio-demographic and socio-economic differences (digital divide level 1 variable)
Participants provided information on a number of measures capturing demographic and socioeconomic factors: specifically, gender, age, education, and salary.
4.2.2 Engagement with digital technologies (digital divide level 2 variable)
To achieve a behavioural measure of digital engagement, we gave participants a list of everyday digital technologies and asked them to select those they regularly used (for a full list, refer to Appendix 1). We then asked participants how often they engaged with the technologies selected, on a 6-point scale from ‘Rarely’ through to ‘Several times a day’, and aggregated these scores to generate a mean level of digital engagement.
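As a concrete illustration of this aggregation step, the sketch below computes a mean engagement score over only the technologies a participant selected. The data and variable names are ours, not the study's:

```python
import numpy as np

# Hypothetical frequency ratings for the technologies one participant selected,
# on the 6-point scale from 1 = 'Rarely' to 6 = 'Several times a day'.
# np.nan marks technologies the participant did not select.
ratings = np.array([6.0, 4.0, np.nan, 2.0, np.nan, 5.0])

# Mean engagement is computed over selected technologies only.
engagement = np.nanmean(ratings)
print(round(engagement, 2))  # 4.25
```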
4.2.3 Feelings of psychological well-being associated with digital technology (digital divide level 3 variable)
To measure the impact of digital technology on people’s psychological well-being, we used an adapted scale known as Technology Effects on Needs Satisfaction in Life [39, 75]. This scale was developed based on the validated Basic Psychological Need Satisfaction and Frustration scale [76], which itself is an extended measure of Self-determination [77]. The scale was designed to investigate the extent to which a user perceives technology to be negatively impacting on their basic psychological needs for autonomy, competence, and relatedness. The basic psychological need for autonomy refers to having a sense of meaning and control in one’s life. Competence refers to a sense of mastery and achievement, and relatedness to a need for social connection [77]. We adapted the Technology Effects on Needs Satisfaction in Life scale to provide a shortened measure of people’s perceptions of digital technology generally, with items such as ‘Using digital technology has made me feel less capable in my life’. Participants were asked to rate their agreement or disagreement with these statements on a 5-point Likert scale from ‘Strongly disagree’ to ‘Strongly agree’. Internal scale consistency was acceptable (α = 0.74) [78].
4.2.4 Digital confidence
To gauge people’s sense of confidence with digital technologies, we created a three-item scale in which we asked people to rate their awareness of digital technology, their comfort and familiarity with digital technology, and their self-perceived level of competency with this technology. Responses were indicated using a 5-point Likert scale from ‘Low’ to ‘High’. Internal scale consistency was excellent (α = 0.95).
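The internal-consistency statistic reported for scales like this is Cronbach's alpha. A minimal sketch of the computation, using fabricated response data (not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Fabricated ratings on three 5-point items (awareness, familiarity, competence).
responses = np.array([
    [5, 5, 4],
    [2, 2, 2],
    [4, 4, 5],
    [1, 2, 1],
    [3, 3, 3],
], dtype=float)
print(round(cronbach_alpha(responses), 2))  # 0.96
```

Because the three items here move together closely, alpha comes out high, as with the reported α = 0.95 for the digital confidence scale.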
4.2.5 Experience of everyday AI tools and technologies
Participants were asked to think about the everyday AI tools and technologies they had indicated they used, and to rate their experiences with these technologies using a bespoke 3-item scale: ‘Thinking about the AI technologies you use, on the whole, how positive do you feel about using them?’; ‘…how much do you trust these AI technologies?’; and ‘…how competent do you feel when using these AI technologies?’. Participants were asked to respond to these questions on a 5-point Likert scale from ‘Not at all’ to ‘Very much’. Internal scale consistency was good (α = 0.80).
4.2.6 Attitudes towards AI technologies
After participants had reported on their experience of using everyday AI tools and technologies, we wanted to capture their opinions and attitudes towards AI technologies more broadly, focusing on acceptance of AI technologies, opinion of AI’s global impact, and expectations of the impact of AI on people’s personal future. For consistency of opinion, we presented participants with a standard definition of AI [79, 80], which read ‘Artificial intelligence (AI) refers to computer systems that can perform tasks or make predictions, make recommendations or decisions that usually require human intelligence. AI systems can perform these tasks and make these decisions based on objectives set by humans but without explicit human interactions’. Once participants had read this definition, they went on to answer the following questions.
4.2.6.1 General attitudes to AI technology
We used a measure of acceptance of artificial intelligence known as the General Attitudes to AI Scale (GAAIS) [81]. To minimise survey fatigue [82], we deployed an 8-item short form of this scale [83], which included four positively valenced items such as ‘There are many beneficial applications of Artificial Intelligence’, and four negatively valenced items such as ‘Do you shiver with discomfort when you think about future uses of AI?’. Participants rated these statements on a 5-point Likert scale from ‘Strongly disagree’ to ‘Strongly agree’. Internal scale consistency was good (α = 0.83).
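The paper does not spell out the scoring, but mixed-valence scales such as the GAAIS are conventionally scored by reverse-coding the negatively valenced items before averaging. A sketch under that assumption, with made-up responses:

```python
import numpy as np

# Hypothetical responses to the 8-item short-form scale on a 1-5 Likert scale.
# Items 0-3 are positively valenced; items 4-7 are negatively valenced.
responses = np.array([4, 5, 4, 3, 2, 2, 2, 2], dtype=float)

positive = responses[:4]
# Reverse-code negative items (6 - x on a 1-5 scale) so that higher scores
# always indicate a more favourable attitude towards AI.
negative_reversed = 6 - responses[4:]

overall = np.concatenate([positive, negative_reversed]).mean()
print(overall)  # 4.0
```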
4.2.6.2 Attitudes to AI’s global impact
To capture people’s opinions on the impact of AI on the world, and with a particular focus on society, people, and the planet, we used a bespoke 7-item scale which included items such as ‘I believe that the deployment of AI within society is ethical’. Participants were asked to rate these statements on a 5-point Likert scale from ‘Strongly disagree’ to ‘Strongly agree’. Internal scale consistency was good (α = 0.88).
4.2.6.3 Attitudes to AI’s impact on personal life
Finally, we captured people’s perceptions of how AI technologies might impact their lives personally using a bespoke 4-item scale that included questions such as ‘Do you feel that AI will contribute positively to your own life?’. Participants were asked to respond to these questions on a 5-point Likert scale from ‘Not at all’ to ‘Very much’. Internal scale consistency was good (α = 0.83). For a summary of measures used refer to Table 1.
5 Results
5.1 Research question 1: What factors contribute to digital confidence?
To address this question, we conducted a hierarchical regression with digital confidence as our outcome variable. Based on existing literature on the digital divide, our predictor variables were categorised according to the three levels thought to contribute to digital exclusion, beginning with the most established measures. The first model contained the level-one factors known to contribute to the digital divide: gender, age, salary, and education. In the second model, we entered our measure of digital engagement, often thought of as a behavioural measure related to level-one factors. In the third model, we entered the level-three factor: a measure of the impact of digital technology on psychological well-being.
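For readers wishing to reproduce this style of analysis, a three-step hierarchical regression can be sketched as follows. The data are simulated and the variable names are ours; this illustrates the method, not the study's code, data, or effect sizes:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n).astype(float),     # level 1: socio-demographics
    "age": rng.uniform(18, 86, n),
    "salary": rng.normal(0, 1, n),
    "education": rng.integers(1, 6, n).astype(float),
    "engagement": rng.normal(0, 1, n),                 # level 2: digital engagement
    "wellbeing_impact": rng.normal(0, 1, n),           # level 3: perceived negative impact
})
# Simulated outcome, loosely mirroring the direction of the reported effects.
df["confidence"] = (
    -0.4 * df["gender"] - 0.02 * df["age"] + 0.15 * df["salary"]
    + 0.2 * df["engagement"] - 0.2 * df["wellbeing_impact"]
    + rng.normal(0, 0.5, n)
)

# Step 1: level-one structural factors only.
m1 = smf.ols("confidence ~ gender + age + salary + education", df).fit()
# Step 2: add the behavioural measure of digital engagement.
m2 = smf.ols("confidence ~ gender + age + salary + education + engagement", df).fit()
# Step 3: add the psychological well-being measure.
m3 = smf.ols(
    "confidence ~ gender + age + salary + education + engagement + wellbeing_impact",
    df,
).fit()

# Incremental F-tests: does each step explain unique variance over the last?
print(m2.compare_f_test(m1))  # (F statistic, p-value, df difference)
print(m3.compare_f_test(m2))
```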
Results showed that the first model was significant, F(4,287) = 19.16, p < 0.001, R2 = 0.21. Gender was significantly negatively associated with digital confidence (β = – 0.41, t = – 5.72, p < 0.001), as was age (β = – 0.19, t = – 5.24, p < 0.001), such that women and older people reported significantly less digital confidence. Salary was significantly positively related (β = 0.13, t = 3.35, p < 0.001). Education, however, was not significantly associated with digital confidence (β = – 0.02, t = – 0.50, p = 0.618).
The second model was also significant, F(5,286) = 19.59, p < 0.001, R2 = 0.26, and accounted for unique variance in digital confidence over and above model 1. The level-two factor, digital engagement, was significantly and positively associated with digital confidence (β = 0.15, t = 4.13, p < 0.001).
Finally, the third model was also significant, F(6,285) = 18.13, p < 0.001, R2 = 0.28, and again accounted for unique variance in digital confidence over and above model 2, such that perceptions of digital technology negatively impacting well-being were associated with lower digital confidence (β = – 0.10, t = – 2.88, p = 0.004). See Table 2 for a summary.
Overall, these results confirm the validity of our digital confidence measure by demonstrating consistency with established factors known to contribute to the digital divide. Furthermore, our results underscore the importance of looking beyond structurally derived first-level variables in an explanatory framework for digital exclusion to include more holistic measures of behavioural and psychological exclusion.
5.2 Research question 2: Is digital confidence related to experiences with, and attitudes towards, AI?
Having established the factors contributing to a person’s sense of digital confidence, we then assessed the relationship between this variable and both experience of everyday AI tools and the three measures of attitudes towards AI (general attitudes to AI technology, attitudes to AI’s global impact, and attitudes to AI’s impact on personal life). As can be seen in Table 3, higher levels of digital confidence were significantly associated with more positive experiences with, and attitudes towards, AI.
5.3 Research question 3: Does digital confidence moderate the relationship between experience of, and attitudes towards, AI?
Given the moderate to strong associations (r’s = 0.62–0.75) between people’s AI experiences and their attitudes towards AI, we wanted to explore the role of digital confidence within this relationship. To this end, we ran multiple regressions assessing the potential moderating effect of digital confidence on this relationship. A separate multiple regression was conducted for each of the three attitude measures. For a visual representation of this analysis, refer to Fig. 1.
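Analytically, a moderation test of this kind amounts to an OLS regression with an interaction term, where a significant product term indicates that the experience–attitude slope depends on the moderator. A minimal sketch with simulated (not the study's) data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "experience": rng.normal(0, 1, n),  # experiences with everyday AI (standardised)
    "confidence": rng.normal(0, 1, n),  # digital confidence (standardised)
})
# Simulated attitudes with a positive experience x confidence interaction built in.
df["attitude"] = (
    0.5 * df["experience"] + 0.2 * df["confidence"]
    + 0.25 * df["experience"] * df["confidence"]
    + rng.normal(0, 0.5, n)
)

# 'experience * confidence' expands to both main effects plus their product.
model = smf.ols("attitude ~ experience * confidence", df).fit()
print(model.params["experience:confidence"])   # interaction coefficient
print(model.pvalues["experience:confidence"])  # its p-value
```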
5.3.1 General attitudes to AI
Results showed that the overall model was significant, F(3,290) = 70.34, R2 = 0.42, p < 0.001. While there was no significant main effect of experiences with AI technologies on general attitudes, t = – 0.927, p = 0.354, there was a significant main effect of digital confidence, t = – 3.73, p < 0.001, and a significant interaction between the two, t = 3.99, p < 0.001. The interaction (shown in Fig. 2a) was probed by testing the conditional effects of experiences with AI on general attitudes to AI at three levels of digital confidence: high, medium, and low. Results showed that while the relationship was significant at all levels of digital confidence (ps < 0.001), the relationship was stronger for those with high rather than low digital confidence (with the beta value at one standard deviation above the mean twice as large as the beta value at one standard deviation below the mean).
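The conditional-effect pattern can be made concrete with a small worked example. In a simple-slopes decomposition, the slope of attitudes on experience at moderator value c is b_experience + b_interaction × c; the coefficients below are illustrative, not the study's estimates:

```python
# Conditional (simple) slope of attitudes on AI experience at a given
# level of the moderator: slope(c) = b_experience + b_interaction * c.
b_experience = 0.30   # illustrative main-effect coefficient
b_interaction = 0.10  # illustrative interaction coefficient
sd_conf = 1.0         # standardised moderator, so one SD = 1

low = b_experience + b_interaction * (-sd_conf)   # slope at -1 SD of confidence
high = b_experience + b_interaction * (+sd_conf)  # slope at +1 SD of confidence
print(f"{low:.2f} {high:.2f}")  # 0.20 0.40
```

With these illustrative values the high-confidence slope is exactly twice the low-confidence slope, mirroring the two-to-one ratio of beta values reported here.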
5.3.2 Attitudes to AI’s global impact
Results showed that the overall model was significant, F(3,290) = 67.30, R2 = 0.41, p < 0.001. As with the scale measuring general attitudes to AI, there was no main effect of experience with AI technologies, t = 0.10, p = 0.923, a significant main effect of digital confidence, t = – 2.80, p = 0.005, and a significant interaction, t = 2.94, p = 0.004. The interaction (shown in Fig. 2b) was again probed by testing the conditional effects of experiences with AI technologies on attitudes to AI’s global impact at three levels of digital confidence (low, medium, high). Results showed that while the relationship was significant at all levels of digital confidence (ps < 0.010), the relationship was stronger for those with high rather than low digital confidence (again with the beta value at one standard deviation above the mean twice as large as that at one standard deviation below).
5.3.3 Attitudes to AI’s impact on personal life
Results showed that the overall model was significant, F(3,290) = 137.7, R2 = 0.59, p < 0.001, with a marginally significant main effect of experience with AI technologies, t = 1.84, p = 0.067, no significant main effect of digital confidence, t = – 1.04, p = 0.300, and a significant interaction, t = 2.21, p = 0.028. Conditional effects analyses showed the relationship between experience and attitudes was significant at all levels of digital confidence (ps < 0.010), and was again stronger for those with high rather than low digital confidence (refer to Appendix 2 for a table of conditional effects).
6 Discussion
The aim of this paper was to better understand digital exclusion and its relationship to people’s experiences with and attitudes towards AI. Digital exclusion has been shown to be associated with a lower quality of life, lower educational outcomes, and even reduced physical and mental health [2,3,4,5,6]. If these digital disadvantages spill over into people’s perceptions, experiences and attitudes towards AI technology, the AI revolution risks leaving behind the same groups of people who today, already find themselves on the wrong side of the digital divide [72, 84]. To undertake this exploration, we created a novel measure capturing a lived experience of digital exclusion. This measure, described as digital confidence, allowed us to assess the relationship between people’s current levels of confidence with digital technology and their experiences with everyday AI tools and technologies, as well as their attitudes towards AI more generally. This measure is important not only because it allowed us to conceptually and empirically bridge the gap between experiences with digital technology and attitudes to AI, but also because it measures the downstream consequences of digital exclusion, as captured by perceptions of awareness, comfort, and sense of competency with digital technology.
This paper used an exploratory approach to better understand these relationships, and in particular set out to investigate three research questions: (1) What factors contribute to digital confidence? (2) Is digital confidence related to experiences with and attitudes towards AI? And (3) does digital confidence moderate the relationship between experiences with and attitudes towards AI? Looking first to question one, we demonstrated the validity of our measure of digital confidence, with the relationships revealed confirming associations previously documented in the digital divide literature. Our data showed that women, older people, those on lower salaries, people with less digital access, and those reporting lower levels of digital well-being scored significantly lower on digital confidence. These results provide supportive evidence of the need to include all three contributing levels (structural, behavioural, and psychological) when it comes to understanding people’s feelings of confidence with digital technology. This is particularly the case for new technologies such as AI, which tend to generate polarised and emotive responses, thus potentially exacerbating the chances of psychologically internalised levels of digital exclusion [33].
When it came to the second question, we found strong evidence of the positive relationships between levels of digital confidence and both experiences with everyday AI tools and technologies, as well as attitudes towards AI more generally. This finding underscores the notion of a spill-over effect, such that experiences with existing digital technology, whether positive or negative, are likely to impact on perceptions, experiences, and attitudes towards new digital applications such as AI [9, 33].
Finally, question three looked at the potential for different levels of digital confidence to moderate the relationship between people’s personal experiences of everyday AI tools and technologies, and their attitudes towards AI more broadly. Here we found consistent evidence of the important role that digital confidence plays within this relationship. Across all three measures capturing attitudes towards AI, higher levels of digital confidence significantly facilitated this relationship, such that the relationship between experiences with AI and attitudes to AI was strongest when people’s level of digital confidence was high.
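To make the analytic logic of this moderation concrete, a relationship of this kind is typically probed with simple slopes at low and high levels of the moderator. The sketch below is a hypothetical illustration on simulated data; the variable names, effect sizes, and model are illustrative assumptions, not the study’s data or estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300  # matches the study's sample size, but the data here are simulated

# Hypothetical standardized variables: experience with everyday AI,
# digital confidence, and a positive interaction between them.
experience = rng.normal(0, 1, n)
confidence = rng.normal(0, 1, n)
attitude = (0.2 * experience + 0.3 * confidence
            + 0.3 * experience * confidence + rng.normal(0, 1, n))

# Fit attitude = b0 + b1*exp + b2*conf + b3*exp*conf by ordinary least squares.
X = np.column_stack([np.ones(n), experience, confidence,
                     experience * confidence])
betas, *_ = np.linalg.lstsq(X, attitude, rcond=None)
b0, b1, b2, b3 = betas

# Simple slope of experience on attitude is b1 + b3 * confidence;
# evaluate it at -1 SD and +1 SD of the moderator.
slope_low = b1 + b3 * (-1.0)
slope_high = b1 + b3 * (+1.0)
print(f"slope at low confidence:  {slope_low:.2f}")
print(f"slope at high confidence: {slope_high:.2f}")
```

A positive interaction coefficient (b3) yields a steeper experience-to-attitude slope at high confidence than at low confidence, which is the pattern reported above.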
AI technologies [8, 12] represent a step change in digital technology. In the last year we have seen an unprecedented level of interest in and engagement with AI, particularly in the form of generative AI products such as OpenAI’s ChatGPT [52]. It is without question that the impact of AI on society will only increase, and concerns have been raised globally as to the ethical implications of this acceleration [53, 54, 62]. These concerns have focused on a range of issues, but inclusivity has often been a key focus [69]. For instance, when considering transparency of data, criticisms have centred on the question of who is under-represented in the data, or when considering issues of AI accountability, discussion often converges on values of inclusivity and fairness [68, 71, 85].
The motivation to produce inclusive AI is one of the pillars of ‘Responsible AI’ [66, 87]. This responsible approach to disruptive new science and technology defines itself as doing ‘science for and with society’ [74], and as such takes its starting position as one of societal involvement, from development through to delivery [67, 73]. However, when it comes to public experiences with and attitudes towards AI, and how these might differ across the vast range of people who comprise society, there is little empirical data available. This brings us back to the purpose of our paper. Over the last 30 years, scholars, technologists, and policymakers have concerned themselves with the digital divide — a phenomenon in which those who have reduced access to, experience with, or capabilities in digital technologies have been shown to be significantly disadvantaged [1, 24, 25]. As AI represents the next, and possibly largest wave of the digital evolution, those groups in society who find themselves on the wrong side of this divide are more likely to experience a spill-over effect from digital exclusion to AI exclusion. As we navigate the early stages of this fourth industrial revolution, pre-emptively understanding these relationships will be essential when it comes to designing, developing, and delivering AI that truly is inclusive of all in society [7, 32].
6.1 Limitations and future directions
The intersection between digital technology and society is complex and varies across countries and cultures [87, 88]. Therefore, understanding the nature of inclusive AI and how this can be delivered requires contextually varied data collection and analyses. Our paper takes a first step towards measuring one aspect of this intersection, that being the spill-over from people’s confidence with digital technology to their experiences and attitudes towards AI. However, we collected our sample using an online community crowd-sourcing platform, and although our participants were representative in terms of gender, age, salary, and education, they were all residents of Australia. This sampling constraint may limit the generalisability of our findings on this global issue, particularly when considering developing countries where issues of digital exclusion are likely to be more severe. Furthermore, the nature of online data collection pre-supposes that those completing the survey already have first, a level of digital access, and second, a relatively proficient level of digital expertise. This data collection methodology thus excluded those participants for whom access to, or ability with, digital technology was inhibited. This limitation is particularly noteworthy in the Australian context, given the extreme digital inequities evidenced for Indigenous Australian Peoples or those in remote locations with limited digital access [22, 89].
Going forward with this research, future data collection with populations in different countries, subpopulations from diverse linguistic and cultural backgrounds, as well as groups with varying levels of digital access, would provide further valuable insight into the relationship between confidence with digital technology and experiences and attitudes towards AI. Furthermore, this relationship would benefit from being tested with populations experiencing more complex relationships with data, digital technology, and AI. For instance, in the context of Indigenous Australian knowledge rights, the question of data justice is particularly challenging [90]. More expansive sampling would also allow researchers to model more complex relationships, for instance, testing the reciprocity of digital confidence and attitudes to AI.
Despite the limitations of this study, strong evidence emerged of the influential role that digital confidence plays in shaping people’s experiences and attitudes to AI. This finding underscores the importance of studying the downstream consequences of the digital divide, even within populations experiencing lower levels of digital exclusion. In so doing, it highlights the need to understand how structural exclusion can lead to psychological exclusion, providing a pathway for future research to test the mechanisms through which group-based differentials such as institutional, cultural, or geographical differences can transition to become individual-based differences.
7 Conclusion
AI technologies have the potential to solve some of society’s most complex challenges, whether those be environmental, economic, or health-related [91]. However, the speed of AI’s recent integration into society has prompted ethical concerns, particularly a fear that this next digital evolution will leave behind significant sections of society [84]. Tackling this concern requires leveraging our knowledge of the existing digital divide to better predict the potential for digital exclusion to spill over into emergent inequities in AI access, knowledge, and competencies. Applying an ethical approach to AI in order to “improve the lives of all people around the world” [92] therefore requires an understanding of when and how levels of digital exclusion can shape experiences of AI. Only through this understanding can we hope to avoid perpetuating or even exacerbating this significant societal schism.
Notes
Using the R package InteractionPoweR (https://davidbaranger.com/software/), based on α = .05 and a medium effect size, a sample of 300 participants gave a power of .99.
The list of digital technologies all used various forms of AI, and once participants had selected which they used, subsequent questions (not analysed here) were asked about their awareness of the presence of AI.
For this analysis, due to the small numbers of people identifying as ‘gender diverse’ we created a dichotomous man/woman variable.
Of note, the behavioural level two factor entered in model 2 demonstrated a positive relationship with digital confidence, such that the higher a person’s level of engagement, the greater their level of digital confidence. In model 3, however, in which the digital well-being measure was included, this relationship was reversed, such that the higher the level of engagement, the lower the level of digital confidence. We would suggest that this finding requires further analysis beyond the scope of the current paper, but with no significant association between our level 2 and level 3 measures (r = -.03), it may reflect the dynamic nature of the relationships existing between these factors. More particularly, the effect of feeling psychologically disempowered by digital technology (as measured by our level three factor) may have been to reverse the normally positive association between engagement and confidence, such that the more a person engages with a technology that reduces their well-being, the less digitally confident they feel. However, this supposition would need to be further investigated.
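The power calculation reported in the first note above can be mirrored with a Monte Carlo simulation. The sketch below is a rough Python analogue, not the InteractionPoweR implementation itself; treating a "medium" effect as a standardized interaction coefficient of about .3 is an assumption made purely for illustration:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def interaction_power(n=300, beta_int=0.3, alpha=0.05, n_sims=500):
    """Estimate power to detect an x1*x2 interaction of standardized size
    beta_int in an OLS model, via Monte Carlo simulation."""
    hits = 0
    for _ in range(n_sims):
        x1 = rng.normal(0, 1, n)
        x2 = rng.normal(0, 1, n)
        y = 0.2 * x1 + 0.2 * x2 + beta_int * x1 * x2 + rng.normal(0, 1, n)
        X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        df = n - X.shape[1]
        se = math.sqrt((resid @ resid) / df * XtX_inv[3, 3])
        t = beta[3] / se
        # Two-sided p-value via a normal approximation (fine at df = 296).
        p = math.erfc(abs(t) / math.sqrt(2))
        hits += p < alpha
    return hits / n_sims

power = interaction_power()
print(f"estimated power: {power:.2f}")
```

Under these assumed effect sizes, a sample of 300 yields power close to 1, consistent in spirit with the .99 figure reported in the note.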
References
Lythreatis, S., Singh, S.K., El-Kassar, A.N.: The digital divide: a review and future research agenda. Technol. Forecast. Soc. Chang. 175, 121359 (2022). https://doi.org/10.1016/j.techfore.2021.121359
Ali, M.A., Alam, K., Taylor, B., Rafiq, S.: Does digital inclusion affect quality of life? Evidence from Australian household panel data. Telemat. Inform. 51, 101405 (2020). https://doi.org/10.1016/j.tele.2020.101405
Ahn, D., Shin, D.H.: Is the social use of media for seeking connectedness or for avoiding social isolation? Mechanisms underlying media use and subjective well-being. Comput. Human. Behav. 29(6), 2453–2462 (2013). https://doi.org/10.1016/j.chb.2012.12.022
Skryabin, M., Zhang, J., Liu, L., Zhang, D.: How the ICT development level and usage influence student achievement in reading, mathematics, and science. Comput. Educ. 85, 49–58 (2015). https://doi.org/10.1016/j.compedu.2015.02.004
Yoon, H., Jang, Y., Vaughan, P.W., Garcia, M.: Older adults’ internet use for health information: digital divide by race/ethnicity and socioeconomic status. J. Appl. Gerontol. 39(1), 105–110 (2020). https://doi.org/10.1177/0733464818770772
González-Relaño, R., Lucendo-Monedero, A.L., Ivaldi, E.: Household and individual digitisation and deprivation: a comparative analysis between Italian and Spanish regions. Soc. Indic. Res. (2023). https://doi.org/10.1007/s11205-023-03151-4
Atkinson, R.D., Castro, D.: Digital quality of life: understanding the personal and social benefits of the information technology revolution (2018)
Van Dijk, J.: Closing the digital divide: The role of digital technologies on social development, well-being of all and the approach of the Covid-19 pandemic (2020)
Van Dijk, J.A., Hacker, K.: The digital divide as a complex and dynamic phenomenon. Inf. Soc. 19(4), 315–326 (2003). https://doi.org/10.1080/01972240309487
Kabudi, T., Pappas, I., Olsen, D.H.: AI-enabled adaptive learning systems: a systematic mapping of the literature. Comput. Educ.: Artif. Intell. 2, 100017 (2021). https://doi.org/10.1016/j.caeai.2021.100017
Chen, L., Chen, P., Lin, Z.: Artificial intelligence in education: a review. IEEE Access 8, 75264–75278 (2020). https://doi.org/10.1109/ACCESS.2020.2988510
Soomro, K.A., Kale, U., Curtis, R., Akcaoglu, M., Bernstein, M.: Digital divide among higher education faculty. Int. J. Educ. Technol. High. Educ. (2020). https://doi.org/10.1186/s41239-020-00191-5
Lysaght, T., Lim, H.Y., Xafis, V., Ngiam, K.Y.: AI-assisted decision-making in healthcare. Asian Bioeth Rev 11(3), 299–314 (2019). https://doi.org/10.1007/s41649-019-00096-0
Lin, Y.K., Chen, H., Brown, R.A., Li, S.H., Yang, H.J.: Healthcare predictive analytics for risk profiling in chronic care: a Bayesian multitask learning approach. MIS Q. 41(2), 473–495 (2017). https://doi.org/10.25300/MISQ/2017/41.2.07
Carter, L., Liu, D., Cantrell, C.: Exploring the Intersection of the digital divide and artificial intelligence: a hermeneutic literature review. AIS Trans. Human-Comput. Interact. 12(4), 253–275 (2020). https://doi.org/10.17705/1thci.00138
Lupton, D.: Digital Sociology. Routledge (2014)
Ramsetty, A., Adams, C.: Impact of the digital divide in the age of COVID-19. J. Am. Med. Inform. Assoc. 27(7), 1147–1148 (2020). https://doi.org/10.1093/jamia/ocaa078
Saeed, S.A., Masters, R.M.R.: Disparities in health care and the digital divide. Curr. Psychiatry Rep. 23(9), 1–6 (2021). https://doi.org/10.1007/s11920-021-01274-4
Lee, K.R.: Impacts of information technology on society in the new century. Structure, pp. 1–6 (2002). Available: https://www.zurich.ibm.com/pdf/Konsbruck.pdf
Riggins, F., Dewan, S.: The digital divide: current and future research directions. J. Assoc. Inf. Syst. 6(12), 298–337 (2005). https://doi.org/10.17705/1jais.00074
Peras, I., Klemenčič Mirazchiyski, E., Japelj Pavešić, B., Mekiš Recek, Ž.: Digital versus paper reading: a systematic literature review on contemporary gaps according to gender, socioeconomic status, and rurality. Eur. J. Investig. Health Psychol. Educ. 13(10), 1986–2005 (2023). https://doi.org/10.3390/ejihpe13100142
Thomas, J. et al.: Measuring Australia’s Digital Divide: Australian Digital Inclusion Index: 2023. Melbourne: ARC Centre of Excellence for Automated Decision-Making and Society, RMIT University, Swinburne University of Technology, and Telstra (2023). https://doi.org/10.25916/528s-ny91
Kipnis, D.: Technology and human needs. In: Technology and Power. Springer (2012)
Ganesh, S., Barber, K.F.: The silent community: organizing zones in the digital divide. Hum. Relat. 62(6), 851–874 (2009). https://doi.org/10.1177/0018726709104545
Ragnedda, M.: The Third Digital Divide: A Weberian Approach to Digital Inequalities. Routledge (2017)
United Nations: Digital Divide ‘a Matter of Life and Death’ amid COVID-19 Crisis, Secretary-General Warns Virtual Meeting, Stressing Universal Connectivity Key for Health, Development. Available: https://press.un.org/en/2020/sgsm20118.doc.htm. Accessed 28 Nov 2023
Mena, G.E., Martinez, P.P., Mahmud, A.S., Marquet, P.A., Buckee, C.O., Santillana, M.: Socioeconomic status determines COVID-19 incidence and related mortality in Santiago, Chile. Science (2021). https://doi.org/10.1126/science.abg5298
Pandey, N., Pal, A.: Impact of digital surge during Covid-19 pandemic: a viewpoint on research and practice. Int. J. Inf. Manag. 55, 102171 (2020). https://doi.org/10.1016/j.ijinfomgt.2020.102171
Eruchalu, C.N., et al.: The expanding digital divide: digital health access inequities during the COVID-19 pandemic in New York City. J. Urban Health 98(2), 183–186 (2021). https://doi.org/10.1007/s11524-020-00508-9
Leavitt, H.J.: Technol. Organ. 44(2), 126–140 (2002)
Lenhart, A., Horrigan, J., Raine, L., Allen, K., Boyce, A., Madden, M., O’Grady, E.: The ever-shifting internet population: a new look at Internet access and the digital divide. Pew Research Centre (2003)
van Deursen, A., van Dijk, J.A.: Internet skills and the digital divide. New Media Soc. 13(6), 893–911 (2011). https://doi.org/10.1177/1461444810386774
van Dijk, J.A.: Digital divide research, achievements and shortcomings. Poetics 34(4–5), 221–235 (2006). https://doi.org/10.1016/j.poetic.2006.05.004
Dahlman, C., Mealy, S., Wermelinger, M.: Harnessing the digital economy for developing countries (2016)
Sassi, S.: Cultural differentiation or social segregation? Four approaches to the digital divide. New Media Soc. 7(5), 684–700 (2005)
Recabarren, M., Nussbaum, M., Leiva, C.: Cultural divide and the Internet. Comput. Hum. Behav. 24(6), 2917–2926 (2008)
McLaren, J., Zappala, G.: The 'digital divide' among financially disadvantaged families in Australia. First Monday (2002)
Samaras, K.: Indigenous Australians and the ‘digital divide’ (2005)
Burnell, R., Peters, D., Ryan, R.M., Calvo, R.A.: Technology evaluations are associated with psychological need satisfaction across different spheres of experience: an application of the METUX scales. Front. Psychol. (2023). https://doi.org/10.3389/fpsyg.2023.1092288
Vanden Abeele, M.M.P.: Digital wellbeing as a dynamic construct. Commun. Theory 31(4), 932–955 (2021). https://doi.org/10.1093/ct/qtaa024
Kosycheva, M.A., Tuzhba, T.E., Gaydamashko, I.V., Yesaulova, K. S.: Influence of poor digital competence on procrastination of university teachers. In: ACM International Conference Proceeding Series, pp. 73–77 (2022). https://doi.org/10.1145/3416797.3416832
Kumpikaitė-Valiūnienė, V., Aslan, I., Duobienė, J., Glińska, E., Anandkumar, V.: Influence of digital competence on perceived stress, burnout and well-being among students studying online during the covid-19 lockdown: a 4-country perspective. Psychol. Res. Behav. Manag. 14, 1483–1498 (2021). https://doi.org/10.2147/PRBM.S325092
Katsarou, E.: The effects of computer anxiety and self-efficacy on L2 learners’ self-perceived digital competence and satisfaction in higher education. J. Educ. Elearn. Res. 8(2), 158–172 (2021). https://doi.org/10.20448/JOURNAL.509.2021.82.158.172
Van Winkle, B., Carpenter, N., Moscucci, M.: Why aren’t our digital solutions working for everyone? AMA J. Ethics 19(11), 1116–1124 (2017)
Park, S.: Digital Capital. Palgrave Macmillan, UK (2017)
Clare, C.A.: Telehealth and the digital divide as a social determinant of health during the COVID-19 pandemic. Netw. Model. Anal. Health Inform. Bioinform. (2021). https://doi.org/10.1007/s13721-021-00300-y
Sieck, C.J., Sheon, A., Ancker, J.S., Castek, J., Callahan, B., Siefer, A.: Digital inclusion as a social determinant of health. NPJ. Digit. Med. 4(1), 5–7 (2021). https://doi.org/10.1038/s41746-021-00413-8
van Deursen, A.J., van Dijk, J.A.: The digital divide shifts to differences in usage. New Media Soc. 16(3), 507–526 (2014). https://doi.org/10.1177/1461444813487959
van Deursen, A.J., Helsper, E.J.: The third-level digital divide: who benefits most from being online? Commun. Inf. Technol. Annu. 10, 29–52 (2015). https://doi.org/10.1108/s2050-206020150000010002
Bohnert, M., Gracia, P.: Digital use and socioeconomic inequalities in adolescent well-being: longitudinal evidence on socioemotional and educational outcomes. J. Adolesc. 95(6), 1179–1194 (2023). https://doi.org/10.1002/jad.12193
Goldman Sachs: AI investment forecast to approach $200 billion globally by 2025. Available: https://www.goldmansachs.com/intelligence/pages/ai-investment-forecast-to-approach-200-billion-globally-by-2025.html. Accessed 30 Nov 2023
Sætra, H.S.: Generative AI: here to stay, but for good? Technol. Soc. 75, 102372 (2023). https://doi.org/10.1016/j.techsoc.2023.102372
Bogani, R., Theodorou, A., Arnaboldi, L., Wortham, R.H.: Garbage in, toxic data out: a proposal for ethical artificial intelligence sustainability impact statements. AI Ethics 3(4), 1135–1142 (2023). https://doi.org/10.1007/s43681-022-00221-0
Huang, C., Zhang, Z., Mao, B., Yao, X.: An overview of artificial intelligence ethics. IEEE Trans. Artif. Intell. 4(4), 799–819 (2023). https://doi.org/10.1109/TAI.2022.3194503
Bughin, J., Seong, J., Manyika, J., Chui, M., Joshi, R.: Notes from the AI frontier: Modeling the impact of AI on the world economy. McKinsey Global Institute (2018). Available: https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy. Accessed 03 April 2021
Rotatori, D., Lee, E.J., Sleeva, S.: The evolution of the workforce during the fourth industrial revolution. Hum. Resour. Dev. Int. 24(1), 92–103 (2021). https://doi.org/10.1080/13678868.2020.1767453
Cervini, P., Farri, E., Rosani, G.: The Infinite Potential of Generative AI. Harvard Business Review Italia, pp. 1–65 (2023)
Bertomeu, J., Lin, Y., Liu, Y., Ni, Z.: Capital market consequences of generative AI: early evidence from the Ban of ChatGPT in Italy. SSRN Electron. J. (2023). https://doi.org/10.2139/ssrn.4452670
Cullen, R.: Addressing the digital divide. Online Inf. Rev. 25(5), 311–320 (2001). https://doi.org/10.1108/14684520110410517
Barton, D., Woetzel, J., Seong, J., Tian, Q.: Artificial intelligence: implications for China. McKinsey Global Institute. www.mckinsey.com/mgi (2017)
Goedhart, N.S., Broerse, J.E.W., Kattouw, R., Dedding, C.: ‘Just having a computer doesn’t make sense’: the digital divide from the perspective of mothers with a low socio-economic position. New Media Soc. 21(11–12), 2347–2365 (2019). https://doi.org/10.1177/1461444819846059
Roche, C., Wall, P.J., Lewis, D.: Ethics and diversity in artificial intelligence policies, strategies and initiatives. AI Ethics 3(4), 1095–1115 (2023). https://doi.org/10.1007/s43681-022-00218-9
Zhang, B., Dafoe, A.: Artificial intelligence: American attitudes and trends. SSRN Electron. J. (2019). https://doi.org/10.2139/ssrn.3312874
Selwyn, N., Cordoba, B.G., Andrejevic, M., Campbell, L.: AI for Social Good? Australian public attitudes towards AI and society, Melbourne (2020)
Sindermann, C., et al.: Assessing the attitude towards artificial intelligence: introduction of a short measure in German, Chinese, and English Language. KI - Kunstliche Intelligenz 35(1), 109–118 (2021). https://doi.org/10.1007/s13218-020-00689-0
Dignum, V.: Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer, Cham (2019)
Buhmann, A., Fieseler, C.: Towards a deliberative framework for responsible innovation in artificial intelligence. Technol. Soc. 64, 101475 (2021). https://doi.org/10.1016/j.techsoc.2020.101475
Cossette-Lefebvre, H., Maclure, J.: AI’s fairness problem: understanding wrongful discrimination in the context of automated decision-making. AI Ethics 3(4), 1255–1269 (2023). https://doi.org/10.1007/s43681-022-00233-w
Manheim, K., Kaplan, L.: Artificial intelligence: risks to privacy and democracy. Yale J. Law Technol. 21, 106–188 (2019)
Shams, R.A., Zowghi, D., Bano, M.: AI and the quest for diversity and inclusion: a systematic literature review. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00362-w
Novelli, C., Taddeo, M., Floridi, L.: Accountability in artificial intelligence: what it is and how it works. AI Soc. (2023). https://doi.org/10.1007/s00146-023-01635-y
Buccella, A.: ‘AI for all’ is a matter of social justice. AI Ethics 3(4), 1143–1152 (2023). https://doi.org/10.1007/s43681-022-00222-z
Stilgoe, J., Owen, R., Macnaghten, P.: Developing a framework for responsible innovation. Res. Policy 42(9), 1568–1580 (2013). https://doi.org/10.1016/j.respol.2013.05.008
Owen, R., Macnaghten, P., Stilgoe, J.: Responsible research and innovation: from science in society to science for society, with society. Sci Public Policy 39(6), 751–760 (2012). https://doi.org/10.1093/scipol/scs093
Peters, D., Calvo, R.A., Ryan, R.M.: Designing for motivation, engagement and wellbeing in digital experience. Front. Psychol. 9(May), 1–15 (2018). https://doi.org/10.3389/fpsyg.2018.00797
Chen, B., et al.: Basic psychological need satisfaction, need frustration, and need strength across four cultures. Motiv. Emot. 39(2), 216–236 (2015). https://doi.org/10.1007/s11031-014-9450-1
Deci, E., Ryan, R.M.: Self-determination theory. Handb. Theor. Soc. Psychol. 9(20), 416–436 (2012)
George, D., Mallery, P.: IBM SPSS Statistics 26 Step by Step: A Simple Guide and Reference. Routledge, NY (2019)
OECD: Artificial intelligence in society. Available: https://www.oecd-ilibrary.org/science-and-technology/artificial-intelligence-in-society_eedfee77-en (2019)
Gillespie, N., Lockey, S., Curtis, C.: Trust in Artificial Intelligence: A Five Country Study (2021). https://doi.org/10.14264/e34bfa3
Schepman, A., Rodway, P.: Initial validation of the general attitudes towards artificial intelligence scale. Comput. Hum. Behav. Rep. 1, 100014 (2020). https://doi.org/10.1016/j.chbr.2020.100014
Pecararo, J.: One Good Idea - Survey Fatigue. Quality Progress, vol. 45, no. 10 (2012)
Bergdahl, J., et al.: Self-determination and attitudes toward artificial intelligence: cross-national and longitudinal perspectives. Telemat. Inform. 82, 102013 (2023). https://doi.org/10.1016/j.tele.2023.102013
Abdelaal, A.: Grand research challenges facing ethically aligned artificial intelligence. In: 27th Annual Americas Conference on Information Systems, AMCIS 2021, no. 2019, pp. 1–10 (2021)
Ozmen Garibay, O., et al.: Six human-centered artificial intelligence grand challenges. Int. J. Hum. Comput. Interact. 39(3), 391–437 (2023). https://doi.org/10.1080/10447318.2022.2153320
Wang, Y., Xiong, M., Olya, H.G.T.: Toward an understanding of responsible artificial intelligence practices. In: Proceedings of the Annual Hawaii International Conference on System Sciences, pp. 4962–4971 (2020). https://doi.org/10.24251/hicss.2020.610
Helsper, E.J., Reisdorf, B.C.: The emergence of a ‘digital underclass’ in Great Britain and Sweden: changing reasons for digital exclusion. New Media Soc. 19(8), 1253–1270 (2017). https://doi.org/10.1177/1461444816634676
Holcombe-James, I.: Digital access, skills, and dollars: applying a framework to digital exclusion in cultural institutions. Cult. Trends 31(3), 240–256 (2022). https://doi.org/10.1080/09548963.2021.1972282
Thomas, J., Barraket, J., Wilson, C.K., Holcombe-James, I., Kennedy, J., Rennie, E., MacDonald, T.: Measuring Australia’s digital divide: The Australian digital inclusion index 2020 (2020)
Robinson, C.J., Urzedo, D., Macdonald, J.M., Ligtermoet, E., Penton, C.E., Lourie, H., Hoskins, A.: Place-based data justice practices for collaborative conservation research: a critical review. Biol. Cons. 288, 110346 (2023)
Goralski, M.A., Tan, T.K.: Artificial intelligence and sustainable development. Int. J. Manag. Educ. (2020). https://doi.org/10.1016/j.ijme.2019.100330
Google: Responsible AI practices. Google AI. Available: https://ai.google/responsibility/responsible-ai-practices/. Accessed 05 Dec 2023
Acknowledgements
The authorship team would like to thank Naomi Mecklenburgh for her passion and drive when discussing, researching, and summarising literature related to the digital divide.
Funding
Open access funding provided by CSIRO Library Services. The authors are employed by the Commonwealth Scientific and Industrial Research Organisation and conduct research and advisory services for state and federal government agencies and enterprises in Australia and internationally. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Author information
Authors and Affiliations
Contributions
All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Sarah Bentley and Claire Naughtin. The first draft of the manuscript was written by Sarah Bentley and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Conflict of interest
All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
Ethical approval
Ethical approval for this study was obtained from the CSIRO’s Human Research Ethics Committee (#089–23). Informed consent was obtained from all individual participants included in the study.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Bentley, S.V., Naughtin, C.K., McGrath, M.J. et al. The digital divide in action: how experiences of digital technology shape future relationships with artificial intelligence. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00452-3